Troubleshooting AsRock C2750D4I

Introduction

In the past I’ve written a post about the instability of the AsRock C2750D4I. Guess what: the problems with this motherboard aren’t gone.
I suspect one of the motherboard’s onboard SATA controllers. When the server experiences heavy load, at least two disks disconnect, bringing down the software RAID.

Troubleshooting

Let’s start by finding out the disk layout of my RAID5.

cat /proc/mdstat
md5 : inactive sde1[3](S) sdh1[1](S) sdf1[0](S) sdd1[4](S)
      11720536064 blocks super 1.2

This shows that my RAID is spread across sde1, sdh1, sdf1 and sdd1. The last error logs from dmesg showed me that sdh1 and sdf1 went down right before the RAID crashed.
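
A quick, hedged example of how such errors can be pulled from the kernel ring buffer (the device letters and the pattern are specific to my setup, adjust them to yours):

# show recent kernel messages mentioning ATA links or the two suspect drives
dmesg | grep -iE "ata[0-9]+|sd[fh]" | tail -n 50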

So let’s try to find some more information about these two crashed drives.

sudo lshw -c disk

The result will show you a little bit more information about each drive.

  *-disk
       description: ATA Disk
       product: SAMSUNG HD103SI
       physical id: 0.0.0
       bus info: scsi@2:0.0.0
       logical name: /dev/sda
       version: 1AG0
       serial: S20XJDWS700323
       size: 931GiB (1TB)
       capabilities: partitioned partitioned:dos
       configuration: ansiversion=5 sectorsize=512 signature=0007f8a5
  *-disk
       description: ATA Disk
       product: SAMSUNG HD103SI
       physical id: 0.0.0
       bus info: scsi@3:0.0.0
       logical name: /dev/sdb
       version: 1AG0
       serial: S20XJDWZ118279
       size: 931GiB (1TB)
       capabilities: partitioned partitioned:dos
       configuration: ansiversion=5 sectorsize=512 signature=00071895
  *-disk
       description: ATA Disk
       product: KINGSTON SVP200S
       physical id: 0.0.0
       bus info: scsi@5:0.0.0
       logical name: /dev/sdc
       version: 502A
       serial: 50026B7331033DD9
       size: 55GiB (60GB)
       capabilities: partitioned partitioned:dos
       configuration: ansiversion=5 sectorsize=512 signature=91a29a16
  *-disk
       description: ATA Disk
       product: ST3000DM001-1CH1
       vendor: Seagate
       physical id: 0.0.0
       bus info: scsi@6:0.0.0
       logical name: /dev/sdd
       version: CC24
       serial: Z1F27VHM
       size: 2794GiB (3TB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=0556e5e5-1e62-42f4-a89c-29813a6f4a18 sectorsize=4096
  *-disk
       description: ATA Disk
       product: Hitachi HDS5C303
       vendor: Hitachi
       physical id: 0.0.0
       bus info: scsi@7:0.0.0
       logical name: /dev/sde
       version: MZ6O
       serial: MCE9215Q0B5MLW
       size: 2794GiB (3TB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=ec9054e2-94c3-4d74-8fea-2d34ce0b92ac sectorsize=4096
  *-disk
       description: ATA Disk
       product: Hitachi HDS5C303
       vendor: Hitachi
       physical id: 0.0.0
       bus info: scsi@8:0.0.0
       logical name: /dev/sdf
       version: MZ6O
       serial: MCE9215Q0BHTDV
       size: 2794GiB (3TB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=2f6f5a9b-441e-467d-861c-852e2bdefb5e sectorsize=4096
  *-disk
       description: ATA Disk
       product: WDC WD40EFRX-68W
       vendor: Western Digital
       physical id: 0.0.0
       bus info: scsi@9:0.0.0
       logical name: /dev/sdg
       version: 80.0
       serial: WD-WCC4E1653628
       size: 3726GiB (4TB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=4ac4a5a9-ccd1-42c5-907a-9272c076a15c sectorsize=4096
  *-disk
       description: ATA Disk
       product: TOSHIBA DT01ACA3
       vendor: Toshiba
       physical id: 0.0.0
       bus info: scsi@10:0.0.0
       logical name: /dev/sdh
       version: MX6O
       serial: 63NZKNRKS
       size: 2794GiB (3TB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=24069398-46d0-4b01-9e8e-2530cb9f1cf8 sectorsize=4096

The logical name field shows that my Toshiba drive (sdh) and one of my Hitachi drives (sdf) were impacted by the last drive/SATA errors on the board. This information can be used to physically trace the SATA cables to the correct drives.
So now that we have the disk names, we need to find out which controller is the culprit throwing these errors.

First let’s identify the bus addresses of all SATA controllers available on the motherboard.

sudo lshw -c storage

The bus info field lists the PCI address of each controller.

  *-storage
       description: SATA controller
       product: 88SE9172 SATA 6Gb/s Controller
       vendor: Marvell Technology Group Ltd.
       physical id: 0
       bus info: pci@0000:04:00.0
       version: 11
       width: 32 bits
       clock: 33MHz
       capabilities: storage pm msi pciexpress ahci_1.0 bus_master cap_list rom
       configuration: driver=ahci latency=0
       resources: irq:55 ioport:c040(size=8) ioport:c030(size=4) ioport:c020(size=8) ioport:c010(size=4) ioport:c000(size=16) memory:df410000-df4101ff memory:df400000-df40ffff
  *-storage
       description: SATA controller
       product: 88SE9230 PCIe SATA 6Gb/s Controller
       vendor: Marvell Technology Group Ltd.
       physical id: 0
       bus info: pci@0000:09:00.0
       version: 11
       width: 32 bits
       clock: 33MHz
       capabilities: storage pm msi pciexpress ahci_1.0 bus_master cap_list rom
       configuration: driver=ahci latency=0
       resources: irq:56 ioport:d050(size=8) ioport:d040(size=4) ioport:d030(size=8) ioport:d020(size=4) ioport:d000(size=32) memory:df610000-df6107ff memory:df600000-df60ffff
  *-storage:0
       description: SATA controller
       product: Atom processor C2000 AHCI SATA2 Controller
       vendor: Intel Corporation
       physical id: 17
       bus info: pci@0000:00:17.0
       version: 02
       width: 32 bits
       clock: 66MHz
       capabilities: storage msi pm ahci_1.0 bus_master cap_list
       configuration: driver=ahci latency=0
       resources: irq:48 ioport:e0d0(size=8) ioport:e0c0(size=4) ioport:e0b0(size=8) ioport:e0a0(size=4) ioport:e040(size=32) memory:df762000-df7627ff
  *-storage:1
       description: SATA controller
       product: Atom processor C2000 AHCI SATA3 Controller
       vendor: Intel Corporation
       physical id: 18
       bus info: pci@0000:00:18.0
       version: 02
       width: 32 bits
       clock: 66MHz
       capabilities: storage msi pm ahci_1.0 bus_master cap_list
       configuration: driver=ahci latency=0
       resources: irq:54 ioport:e090(size=8) ioport:e080(size=4) ioport:e070(size=8) ioport:e060(size=4) ioport:e020(size=32) memory:df761000-df7617ff

Now, for each drive, we can look up the corresponding SATA controller address; it matches one of the PCI addresses found above.

sudo udevadm info -q all -n /dev/sde | grep DEVPATH
E: DEVPATH=/devices/pci0000:00/0000:00:03.0/0000:02:00.0/0000:03:01.0/0000:04:00.0/ata8/host7/target7:0:0/7:0:0:0/block/sde
 
sudo udevadm info -q all -n /dev/sdd | grep DEVPATH
E: DEVPATH=/devices/pci0000:00/0000:00:03.0/0000:02:00.0/0000:03:01.0/0000:04:00.0/ata7/host6/target6:0:0/6:0:0:0/block/sdd
 
sudo udevadm info -q all -n /dev/sdf | grep DEVPATH
E: DEVPATH=/devices/pci0000:00/0000:00:04.0/0000:09:00.0/ata9/host8/target8:0:0/8:0:0:0/block/sdf
 
sudo udevadm info -q all -n /dev/sdh | grep DEVPATH
E: DEVPATH=/devices/pci0000:00/0000:00:04.0/0000:09:00.0/ata11/host10/target10:0:0/10:0:0:0/block/sdh

The last PCI address before the /ataX part is the controller the drive is connected to. So this means that sde and sdd are connected to the controller at 0000:04:00.0, which is the Marvell 88SE9172 SATA 6Gb/s Controller.
The drives sdf and sdh are connected to the controller at 0000:09:00.0, which translates to the Marvell 88SE9230 PCIe SATA 6Gb/s Controller.

And that is exactly the asshole that has been throwing me the errors.
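
To repeat this check for every RAID member in one go, a small loop along these lines will print the controller behind each drive (a rough sketch: the drive list is mine, and it assumes lspci from pciutils is installed):

for d in sdd sde sdf sdh; do
    # the last PCI address in the device path is the SATA controller the drive hangs off
    addr=$(udevadm info -q path -n /dev/$d | grep -oE '[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9]' | tail -n 1)
    # resolve that address to a controller name
    echo "$d -> $addr $(lspci -s $addr | cut -d' ' -f2-)"
done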

With this information we can unplug the disks from the controller that is throwing the errors. The physical location of the Marvell 88SE9230 ports is explained in the manual at http://www.asrockrack.com/general/productdetail.asp?Model=C2750D4I#Manual, so you can find them on the board and match them against the disk names found previously.

So I rerouted all disks away from that controller (I prefer working 3Gbps SATA over a dysfunctional 6Gbps any day), and since then the NAS has been stable.

Read More

Google App Engine – goapp: ‘C:\Program’ is not recognized as an internal or external command

Today (24/07/2014) I installed the Google App Engine components from the Google Cloud SDK installer. However, when I try to run my goapp application with the command ‘goapp serve myapp/’,

I receive the error: ‘C:\Program’ is not recognized as an internal or external command. The problem is that the ‘goapp.bat’ file tries to access an executable in the ‘C:\Program Files\Google\Cloud SDK\…’ folder. Because Windows is (still) super terrible at handling spaces in folder names in scripts, it fails.

The solution is to go to the ‘C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine’ folder and edit the ‘goapp.bat’ file.
At the bottom of the file you will see:

:: Note that %* can not be used with shift.
%GOROOT%\bin\%EXENAME% %1 %2 %3 %4 %5 %6 %7 %8 %9

Now add some quotes to this last line and your problem should be fixed.

:: Note that %* can not be used with shift.
"%GOROOT%\bin\%EXENAME%" %1 %2 %3 %4 %5 %6 %7 %8 %9

Once these changes are saved, go to the ‘C:\Program Files\Google\Cloud SDK\google-cloud-sdk\bin\’ folder. There’s a ‘goapp.cmd’ file there that gets added to the Windows path. Rename this file to ‘goapp.bck’ and copy in your ‘goapp.bat’ file.
In this copied file, change the last line again to:

:: Note that %* can not be used with shift.
"%GOROOT%\..\..\platform\google_appengine\goapp" %1 %2 %3 %4 %5 %6 %7 %8 %9

That’s it. Ugly, but it works…

Original Github issue: windows 7 C:/Program Files/… #688

Read More

NAS – Part 6: Health checks mdadm

Introduction

This post builds on NAS – Part 2: Software and services. It adds a detection script that checks whether your RAID is failing; in the past I’ve had my fair share of failed RAID configurations.

I do know the mdadm package can send alerts itself; however, this small script can be extended to detect specific changes in the RAID/system configuration without relying on the built-in reporting.

Implementation

First let’s start by installing mailutils. This package provides the ‘mail’ command used to send the alerts.

sudo apt-get install mailutils

Next up is the ‘ssmtp’ package. This package allows you to relay mail through an external SMTP server.

sudo apt-get install ssmtp

Create the ssmtp directory (if it doesn’t exist yet).

sudo mkdir /etc/ssmtp/

And create an ssmtp.conf file.

sudo nano /etc/ssmtp/ssmtp.conf

This ssmtp.conf requires a username (AuthUser) and password (AuthPass), as well as a mail hub (the SMTP server, for example: mailhub=smtp.gmail.com:587).

AuthUser=<your-email-adres>
AuthPass=<password>
FromLineOverride=YES
mailhub=<smtp-mailserver>
UseSTARTTLS=YES

To test your configuration you can try to send a test mail. Just change ‘email@mail.com’ to your email address.

echo "This is a test" | mail -s "Test" email@mail.com

If everything works, you are ready to create the cron job script. (I will create this script in my home directory; however, you can create it wherever you want.)

cd ~
nano health-mdstat.sh

In the output of ‘cat /proc/mdstat’, an underscore marks a failed or missing disk in an array, so the script simply checks for this character.

#!/bin/bash
SUBJECT="---RAID IN DEGRADED STATE---"
EMAIL="<target email>"
FROM="<from email>"
EMAILMESSAGE="/tmp/cron-email"

# dump the current RAID status into the mail body file
cat /proc/mdstat > "$EMAILMESSAGE"

# an underscore in /proc/mdstat marks a failed/missing disk: send the alert
if grep -q "_" "$EMAILMESSAGE"; then
   mail -aFrom:"$FROM" -s "$SUBJECT" "$EMAIL" < "$EMAILMESSAGE"
fi

Let’s assign execute rights to our script.

sudo chmod +x ./health-mdstat.sh

That’s it! Now schedule the script as a cron job; I run mine daily (a hedged example entry follows below).
Happy scripting when extending this script.
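
As an illustration only (the path is a placeholder for wherever you saved the script, and the schedule is up to you), a daily crontab entry could look like this:

# run the RAID health check every day at 07:00
0 7 * * * /home/<user>/health-mdstat.sh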

Read More

NAS – Part 5: Afraid.org DNS

Introduction

The final step for my NAS is keeping my dynamic IP bound to a DNS hostname. I am using http://afraid.org/ to manage and handle the dynamic DNS.

This script is adapted from the script found at: http://adambuchanan.me/post/25473551700/dynamic-dns-with-bash-afraid-org.

Script

The following script will point all hosts assigned to your account to the current public IP of the machine you run it from.

#!/bin/bash
 
#insert SHA-1 hash here (format): username|password
hash=""
 
info_url="http://freedns.afraid.org/api/?action=getdyndns&sha=$hash"
 
echo "Calling $info_url ..."
 
# ask an OpenDNS resolver for our current public IP
ip=$(dig @208.67.222.220 myip.opendns.com | grep "myip.opendns.com." | grep "0" | awk '{ print $5 }')
echo "Current IP is: $ip"
 
# get the current dns settings...
for each in `curl -s "$info_url"`
do
        domain=`echo "$each" | cut -d"|" -f1`
        dns_ip=`echo "$each" | cut -d"|" -f2`
 
        update_url=`echo "$each" | cut -d"|" -f3`
 
        echo "$domain ..."
        if [ "$ip" != "$dns_ip" ]
        then
                echo "Updating $dns_ip =>$ip ..."
                curl "$update_url" >> log
        fi
        echo "OK"
done

Now run this with a job in crontab to update your afraid.org DNS.

crontab -e
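
For example (the script path and the 15-minute interval are just my assumptions, adjust them to your setup), an entry like this keeps the DNS up to date:

# update the afraid.org DNS records every 15 minutes
*/15 * * * * /home/<user>/afraid-dns.sh >> /home/<user>/afraid-dns.log 2>&1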

Read More

Windows 3.11 with qemu-kvm – Part 1: Xubuntu

Introduction

For my little Windows 3.11 PaaS system I hit a dead end with VirtualBox. So I’ve been researching another way to virtualize Windows 3.11, and I found qemu. Below is my little take on emulating Windows 3.11.

Installing qemu-kvm

Installing is pretty easy: just grab all the needed packages. I am using ‘virt-manager’ as a GUI frontend.

sudo apt-get install qemu qemu-kvm libvirt-bin bridge-utils virt-manager

Next up is adding your current user to the correct groups. This ensures that the virtual machines can be run as your current user.

sudo adduser `id -un` libvirtd
sudo adduser `id -un` kvm

Now, to check that everything is OK, run virsh. This should return an empty list of virtual machines.

virsh -c qemu:///system list

If you get the following error, you need to change the ownership of the ‘libvirt-sock’ file:

error: failed to connect to the hypervisor
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied
sudo chown legacy:libvirtd /var/run/libvirt/libvirt-sock

Next up is creating a virtual machine. For this part I will be using Windows 3.11; however, you could use any operating system.

Before we can start creating the virtual machine, I like to create my virtual disks myself. In my template I am using a C:\ drive of 100MB for the system and a data disk of 200MB.

qemu-img create -f qcow2 ~/qemu/template/boot.img 100M
qemu-img create -f qcow2 ~/qemu/template/data.img 200M

Next up, go to your menu and select the ‘Virtual Machine Manager’. This piece of software is a GUI frontend.
qemu-1

In this frontend GUI press the upper left icon to start the wizard to create a new virtual machine.

Give the virtual machine a name (in my case: TEMPLATE) and select ‘Local install media’.
qemu-2

I’ll leave both OS type fields as ‘Generic’. Also select the install image; my Windows 3.11 source is an ISO file.
qemu-3

Select the amount of memory and the number of CPUs. The Virtual Machine Manager has a little bug that won’t allow you to assign less than 50MB, but this shouldn’t be a problem; we’ll fix it later. As for CPUs, use one.
qemu-4

Press the ‘Select managed…’ option here and navigate to the disks you’ve made with the ‘qemu-img’ command. The type will be wrong (raw) but we will fix this later too.
qemu-5
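
If you want to double-check what format a disk image actually is before fixing it in the GUI, qemu-img can tell you (the path below is the boot image from my template):

qemu-img info ~/qemu/template/boot.img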

Last step of the wizard: by default the hypervisor will be ‘kvm’, but in my experience this causes some stability issues with Windows 3.11, so select qemu instead. As architecture, select i686, the default 32-bit architecture.
qemu-6

So that’s it: create the virtual machine and let’s continue. Once your virtual machine is created, select the blue ‘i’ button to edit the machine a little further.

Press the ‘Memory’ tab and assign 32MB, which should be enough for Windows 3.11.
qemu-11

Next go to ‘Boot options’ and activate both floppy and hard drive. The floppy should come before the hard drive in the boot order.
qemu-12

Once this is done, fix the first disk: select ‘qcow2’ as the storage format and make sure the disk bus is ‘IDE’.
qemu-13

After this, assign the second hard drive: press the ‘Add Hardware’ button below, select ‘Storage’, and assign the existing image as disk two.
qemu-14

The last step is the floppy drive: add a new storage device, select floppy from the dropdown list and press Finish.
qemu-15

That’s it, your virtual machine is now configured to run.

Installing Windows 3.11 / MS-DOS

The next step is to install the operating system. From the settings page you can connect and disconnect floppy images during the install: press the ‘Disconnect’ button to detach the current floppy image and ‘Connect’ to attach another one.
qemu-22

Here we go, one fresh MS-DOS 6.22 install.
qemu-21

I won’t explain the other details of installing Windows 3.11, as this post will only cover qemu-kvm. However a little hint: you will need the tools listed on http://www.scampers.org/steve/vmware/

Managing with virsh

Managing a running virtual machine is very easy. The tool to use for this is called ‘virsh’.

To suspend a machine, use ‘virsh suspend’ followed by your virtual machine name (in my case ‘TEMPLATE’). A suspended machine stays in RAM, but it stops consuming CPU.

virsh suspend TEMPLATE

To resume a suspended state, use ‘resume’.

virsh resume TEMPLATE

To fully dump your running virtual machine to disk, use ‘save’. This writes the machine state to an image file and frees the RAM assigned to the machine.

virsh save TEMPLATE ~/qemu/template/suspend

The first time, you will need to change the ownership of your saved image, as by default it will be owned by ‘root’. If you try to restore an image owned by root, you will get a permission denied error.

sudo chown `id -un` ~/qemu/template/suspend

To resume a saved virtual machine you can use the ‘restore’ command followed by your image file.

virsh restore ~/qemu/template/suspend

To view the state of your virtual machines you can use the following command:

virsh -c qemu:///system list

It will show the state of your machines. A machine which has been saved to disk won’t show up in this table though.

 Id    Name                           State
----------------------------------------------------
 23    TEMPLATE                       running

More information about managing your virtual machine with virsh can be found at: http://www.centos.org/docs/5/html/5.2/Virtualization/chap-Virtualization-Managing_guests_with_virsh.html

Changing media with virsh

To view all media assigned to a machine, you can use the ‘domblklist’ command.

virsh domblklist TEMPLATE

This will output a table showing you the assigned disks.

Target     Source
------------------------------------------------
hda        /home/legacy/qemu/template/boot.img
hdb        /home/legacy/qemu/template/data.img
hdc        /home/legacy/qemu/resources/windows.iso
fda        /dev/sdb

Example: to change the floppy from the command line, use ‘change-media’. First, eject the current floppy.

virsh change-media TEMPLATE fda --eject

Verify that it has been disconnected.

virsh domblklist TEMPLATE
Target     Source
------------------------------------------------
hda        /home/legacy/qemu/template/boot.img
hdb        /home/legacy/qemu/template/data.img
hdc        /home/legacy/qemu/resources/windows.iso
fda        -

Now insert a new floppy image.

virsh change-media TEMPLATE fda ~/qemu/resources/tools.img --insert

There we go, the floppy is now usable in the virtual machine.

virsh domblklist TEMPLATE
Target     Source
------------------------------------------------
hda        /home/legacy/qemu/template/boot.img
hdb        /home/legacy/qemu/template/data.img
hdc        /home/legacy/qemu/resources/windows.iso
fda        /home/legacy/qemu/resources/tools.img

This example used a floppy image; it is possible to swap out disk and CD-ROM drives the same way.

That’s about it for the Xubuntu part. The next topic will probably cover doing this on an AWS Amazon EC2 instance.

Read More

NAS – unstable C2750D4I

When configuring my NAS I noticed that the ASRock C2750D4I behaves rather poorly: uptime never reached more than 24 hours.
Online I found other people who are experiencing the same issues with this board: http://forums.tweaktown.com/asrock/56730-c2750d4i-stability-problems-2.html

This is how I made it stable (it has been running for 7 days now without reboots).

NIC drivers

A quick glance at the Intel website shows an update for the NIC:
https://downloadcenter.intel.com/SearchResult.aspx?lang=eng&ProductFamily=Ethernet+Components&ProductLine=Ethernet+Controllers&ProductProduct=Intel%C2%AE+Ethernet+Controller+I210+Series
Let’s install it:

cd ~
wget http://downloadmirror.intel.com/13663/eng/igb-5.1.2.tar.gz
tar xvf igb-5.1.2.tar.gz
cd ~/igb-5.1.2/src
sudo make install

Edit the modules file and add ‘igb’.

sudo nano /etc/modules
igb

Let’s check if it loads.

sudo modprobe igb

Reboot the machine and verify that the new driver is loaded.

sudo reboot
modinfo igb

Output:

filename:       /lib/modules/3.13.0-24-generic/kernel/drivers/net/igb/igb.ko
version:        5.1.2
license:        GPL
description:    Intel(R) Gigabit Ethernet Network Driver
author:         Intel Corporation, 
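
You can also check which driver and version the running network interface actually uses (assuming your NIC is eth0 and the ethtool package is installed):

sudo ethtool -i eth0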

Last step: clean up the files.

sudo rm -rf ~/igb-5.1.2
sudo rm ~/igb-5.1.2.tar.gz

Disable Intel Speedstep

Disable Intel Speedstep and the C-Bit setting in the BIOS. The manual states that Intel Speedstep could ‘make your system unstable’. On this board, yes it does.

SATA cables + Boot disk to Intel controller

The manual recommends using the Intel controller for the OS disk (which I hadn’t done). So I swapped the SATA cable for a more expensive one (I found some postings of people reporting better stability with better SATA cables) and moved the boot disk to the Intel SATA controller.

These steps solved my instability with this board. Whilst on paper this board is the most awesome buy you could make (passively cooled, 12 SATA ports, quad-core Atom, 20 Watt), in reality it’s as picky as a spoiled toddler. Definitely not a buy. At ~€350 it is quite an expensive pain in the ass.

However, is there a comparable product?

Read More

AWS – Using Amazon as frontend for your home server

Introduction

Owncloud is pretty awesome: it gives me access to my files anywhere in the world. However, sometimes accessing my files is far from trivial. Think of hotel lobbies and public access points, where there are often real restrictions on which ports can be used. By default my ISP also blocks all server traffic on ports below 1024, which is in my opinion rather rude. I want my files! Luckily we can use an Amazon t1.micro (free tier) instance to provide a solution to this.

Preparing the Amazon image

So launch a free tier Amazon t1.micro; this should be free for the first year, so no worries. As for configuration, open the SSH and HTTPS ports in the security group. Once the instance is running, log in as ‘ec2-user’ with your certificate file.

Installing HAProxy

Before we can compile we need to install the build tools.

sudo yum install -y make gcc openssl-devel pcre-devel pcre-static

Now download HAProxy and build it.

cd ~
wget http://haproxy.1wt.eu/download/1.5/src/devel/haproxy-1.5-dev24.tar.gz
tar -xzf haproxy-1.5-dev24.tar.gz
cd haproxy-1.5-dev24
 
make clean
make USE_OPENSSL=1 TARGET=linux26 USE_STATIC_PCRE=1
sudo make install

By default HAProxy is installed under /usr/local, so create a symbolic link (or change the install prefix in the make command).

sudo ln -s /usr/local/sbin/haproxy /usr/sbin/haproxy

Because we installed from source, there is no service script. So let’s create one.

sudo nano /etc/init.d/haproxy
#!/bin/sh
#
# haproxy
#
# chkconfig:   - 85 15
# description:  HAProxy is a free, very fast and reliable solution \
#               offering high availability, load balancing, and \
#               proxying for TCP and  HTTP-based applications
# processname: haproxy
# config:      /etc/haproxy/haproxy.cfg
# pidfile:     /var/run/haproxy.pid
 
# Source function library.
. /etc/rc.d/init.d/functions
 
# Source networking configuration.
. /etc/sysconfig/network
 
# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0
 
exec="/usr/sbin/haproxy"
prog=$(basename $exec)
 
[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog
 
lockfile=/var/lock/subsys/haproxy
 
check() {
    $exec -c -V -f /etc/$prog/$prog.cfg
}
 
start() {
    $exec -c -q -f /etc/$prog/$prog.cfg
    if [ $? -ne 0 ]; then
        echo "Errors in configuration file, check with $prog check."
        return 1
    fi
 
    echo -n $"Starting $prog: "
    # start it up here, usually something like "daemon $exec"
    daemon $exec -D -f /etc/$prog/$prog.cfg -p /var/run/$prog.pid
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}
 
stop() {
    echo -n $"Stopping $prog: "
    # stop it here, often "killproc $prog"
    killproc $prog
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}
 
restart() {
    $exec -c -q -f /etc/$prog/$prog.cfg
    if [ $? -ne 0 ]; then
        echo "Errors in configuration file, check with $prog check."
        return 1
    fi
    stop
    start
}
 
reload() {
    $exec -c -q -f /etc/$prog/$prog.cfg
    if [ $? -ne 0 ]; then
        echo "Errors in configuration file, check with $prog check."
        return 1
    fi
    echo -n $"Reloading $prog: "
    $exec -D -f /etc/$prog/$prog.cfg -p /var/run/$prog.pid -sf $(cat /var/run/$prog.pid)
    retval=$?
    echo
    return $retval
}
 
force_reload() {
    restart
}
 
fdr_status() {
    status $prog
}
 
case "$1" in
    start|stop|restart|reload)
        $1
        ;;
    force-reload)
        force_reload
        ;;
    check)
        check
        ;;
    status)
        fdr_status
        ;;
    condrestart|try-restart)
        [ ! -f $lockfile ] || restart
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|try-restart|reload|force-reload}"
        exit 2
esac

And assign execute rights.

sudo chmod +x /etc/init.d/haproxy
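
On Amazon Linux the init script can then be registered with chkconfig so HAProxy comes back after a reboot (a hedged sketch; chkconfig is the Amazon Linux/RHEL way, other distributions differ):

sudo chkconfig --add haproxy
sudo chkconfig haproxy on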

Configuration of HAProxy

Now, to configure HAProxy, create the config file.

sudo mkdir -p /etc/haproxy
sudo nano /etc/haproxy/haproxy.cfg

To forward the HTTPS port, use TCP mode. This example forwards to the home IP 255.255.255.255 (a placeholder): it proxies (or tunnels) port 443 to 22443 and port 2222 to 22222 on the home server.

global
       daemon
       maxconn 10000
 
defaults
       timeout connect 500s
       timeout client 5000s
       timeout server 1h
 
frontend https_proxy
        mode tcp
        bind *:443
        default_backend https_servers
 
frontend ssh_proxy
        bind *:2222
        mode tcp
        default_backend ssh_servers
 
backend ssh_servers
        mode tcp
        server ssh 255.255.255.255:22222
 
backend https_servers
        mode tcp
        server server1 255.255.255.255:22443

This should do it: your SSH and HTTPS connections are now routed through Amazon.

As for Owncloud (version 6.x), you will need to add your domain (example: ec2-255-255-255-255.eu-west-1.compute.amazonaws.com) to the config/config.php file:

  'trusted_domains' =>
  array (
    0 => '...........',
  ),

Read More

NAS – Part 4: Owncloud

Introduction

Owncloud is simply amazing. It’s like a Dropbox at home.
For my NAS I will be running this program in a virtual machine, because I’ll be opening this machine up to the outside world. A virtual machine is also much easier to back up and dispose of.

The VMWare instance

Let’s start with configuring the VMWare instance. I’ll be using the Ubuntu LTS server edition for this instance, as it uses fewer system resources than a full desktop environment.

Configure the VMWare instance according to the following specifications:
- CPU: 2 virtual CPUs (1 thread each)
- RAM: 512 MB
- Disk: 6GB
- Operating System: Ubuntu 14.04 LTS

While installing, I enabled ‘automatic updates’ so I don’t have to manage this VMWare instance, and I also installed the OpenSSH server during the install procedure.

Installing Owncloud

Start by installing all needed packages and dependencies for Owncloud. Also enable the Apache2 headers and rewrite modules.

sudo apt-get install apache2 php5 php5-gd php-xml-parser php5-intl php5-sqlite php5-mysql smbclient curl libcurl3 php5-curl php5-json php-apc
sudo a2enmod rewrite
sudo a2enmod headers
sudo service apache2 restart

Installing Owncloud is quite easy: just download the package, extract it and fire up a web browser.

cd ~
wget http://download.owncloud.org/community/owncloud-6.0.0a.tar.bz2
tar -xjvf owncloud-6.0.0a.tar.bz2
sudo cp -r owncloud /var/www/
rm -rf ~/owncloud
rm -rf ~/owncloud-6.0.0a.tar.bz2

Fix the ownership of the ‘/var/www’ folder:

sudo chown -R www-data:www-data /var/www/

That’s about it: now you can browse to http://<server-ip>/owncloud and configure your Owncloud. You will need a MySQL database for this application (a hedged example of creating one follows below).
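
If you don’t have a database yet, a minimal sketch of creating one could look like this (the database name, user and password are placeholders; the GRANT syntax matches the MySQL 5.x versions of that era):

sudo apt-get install mysql-server
mysql -u root -p -e "CREATE DATABASE owncloud; GRANT ALL PRIVILEGES ON owncloud.* TO 'owncloud'@'localhost' IDENTIFIED BY '<password>'; FLUSH PRIVILEGES;"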

Optional: Moving Owncloud to RAID1 share

I prefer to move my data and Owncloud files to a network share backed by a RAID1 configuration, in case one of my automatic updates breaks the server.

Create a mount point for your data. I’ll be using \\192.168.1.10\owncloud as the share, and the username will be ‘www-data’, as Apache2 uses this user to read and write.
Create the account on the host system and create the share directory.

sudo smbpasswd -a www-data
 
sudo mkdir -p /media/raid1/owncloud
sudo chown -R www-data:www-data /media/raid1/owncloud/

Add the share to samba.

sudo nano /etc/samba/smb.conf
[owncloud]
comment = Raid 1 secure backup storage
path = /media/raid1/owncloud
valid users = www-data
public = no
browseable = no
writable = yes

On the Owncloud instance install the ‘cifs-utils’ package.

sudo apt-get install cifs-utils

Create the folder to mount and mount the network share.

sudo mkdir -p /mnt/network/tmp
sudo mount -t cifs -o user=www-data,password=password //192.168.1.10/owncloud /mnt/network/tmp

Test your share and move all data.

sudo mv /var/www/owncloud/* /mnt/network/tmp/

Now, for the fstab entry, create a credentials file.

sudo nano /home/owncloud/.cloudcredentials

Add the username and password to the credentials file.

username=www-data
password=password

Restrict access to this credentials file.

sudo chmod 600 /home/owncloud/.cloudcredentials

Add the mount to the ‘/etc/fstab’ file.

sudo nano /etc/fstab
//192.168.1.10/owncloud /var/www/owncloud cifs credentials=/home/owncloud/.cloudcredentials,iocharset=utf8,sec=ntlm 0 0 
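
To verify the fstab entry without rebooting, you can let mount process it and check the result:

sudo mount -a
df -h /var/www/owncloud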

That’s it, happy file synchronizing.

Read More

LegacyNET – Introduction

Introduction

Just an introduction to one of my side projects.

One late evening I decided to get creative for a while. So I came up with the design for a semi-PaaS Windows 3.11 system.
Why?
- Because it’s fun. I’ve always loved legacy systems because of their simplicity, a simplicity which allows me to grasp the history of today’s complex systems. The main purpose would be to see if I can meld old technology together with new technology.
- It hasn’t been done before. At least not that I know of. And if I wanted to create an up-to-date system/design which would serve a business purpose, I would prefer to get paid for doing this. This is my spare time.
- Gaming. You have to admit it, old-school games are fun. Anyone can download and install DOSBox and play Warcraft 2 offline; however, netplay on a server would be awesome.

Design

This is the initial design I have in mind; it lacks quite a lot of advanced features. The goal is to use as many out-of-the-box components as possible. I don’t want to write my own servers or other components, as this would take a huge amount of time and would likely not scale at all.

LegacyNET

Front-end (http://legacy.enira.net)

This front-end GUI utilizes a regular HTTP/Apache2 web server to serve a graphical interface for users to:
a) Manage their account and credentials
b) Manage ‘friendly nodes’, which allow network access between instances
c) Manage system messages and messages between users
d) Manage their virtual instance (reset/start/stop)

RDP gateway

This gateway is an Amazon EC2 instance (t1.micro) configured with HAProxy to proxy RDP connections to each instance and to shield the node server from other external traffic. Each instance will receive an RDP port 3500 + n to connect to (a rough sketch of this idea follows below).
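
As an illustration only (the instance number, internal IP and ports are made up, and nothing here is final), the HAProxy side of this could be a plain TCP proxy per instance:

frontend rdp_instance_1
        mode tcp
        bind *:3501
        default_backend rdp_instance_1_servers

backend rdp_instance_1_servers
        mode tcp
        server instance1 192.168.0.101:3389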

Node 0

The actual physical system. In my case this will be an old Dell XPS M1530 which should provide enough resources for the initial setup.
Each VirtualBox instance will be configured with (based on a template):
- 32 MB RAM
- 100 MB of system storage (drive C:)
- 200 MB of user storage (drive D:)
- 800 x 640 resolution RDP connection
- Private IP address 192.168.x.100 + n connected through an internet gateway (192.168.x.1)
- Windows 3.11 with networking capabilities

A reset of an instance wipes drive C: (and repairs it from the template) but should keep all data on the D: drive.

Node Manager

Installed on each node, this manager allows JSON calls between the front-end component and the physical system. It will allow the GUI to send messages concerning:
a) System utilization
b) Instance management

Communication between node and front-end should be done over HTTPS and will utilize Apache2 to serve the HTTPS traffic.

Feasibility study

Study 1: RDP connection

Goal: Complete an RDP connection through the internet and see if the performance of the RDP connection is good enough for a Windows 3.11 instance running at 800×640. This RDP connection should use the VirtualBox RDP capabilities (found in the extension pack).
Level: Critical
Status: Completed
Results: All objectives have been met.

Study 2: Clone template with VirtualBox

Goal: This test should create and maintain a new instance cloned from a previous Windows 3.11 instance (the template).
Level: Critical
Status: Ongoing

Study 3: Separate hosts on virtual LAN segment

Goal: This feasibility study should verify that no traffic is possible between hosts configured in internal networking mode, preferably by using iptables and/or Coyote Linux for routing network traffic.
Level: High
Status: Ongoing

Final notes

This system is far from perfect, and a lot of work needs to be done. I still need to complete two feasibility studies; if study 2 fails, this project will be scrapped.
This is a project done entirely in my spare time; the release date will be when it’s done.

Read More

NAS – Part 3: Mediacenter setup

Introduction

In part two I discussed the basic services for my NAS. This post covers building a media center. At the time of writing there are two dominant media players: Plex and XBMC. For my NAS I’ll be using Plex; see: http://www.maximumpc.com/article/features/xbmc_vs_plex2013

Plex

First service up is Plex. Plex needs the ‘avahi-daemon’ component, which normally is already installed on your system. For those who don’t have it:

sudo apt-get install avahi-daemon

Next install Plex. Check for any updates on the Plex website: https://plex.tv/downloads and download the Plex Debian package.

cd ~/Downloads 
wget -c http://downloads.plexapp.com/plex-media-server/0.9.9.7.429-f80a8d6/plexmediaserver_0.9.9.7.429-f80a8d6_amd64.deb
sudo dpkg -i plexmediaserver_0.9.9.7.429-f80a8d6_amd64.deb

That’s it. Now log in to your Plex environment at http://<server-ip>:32400/manage. You also need a Plex account, but once you have one you can add libraries to your Plex server. I recommend the Ouya Plex client or RasPlex to connect.

Sickbeard

Almost finished now. For my series I like to use Sickbeard, an awesome tool that captures metadata for TV series and shows the quality and completeness of the series on your home NAS.

Before we can start with Sickbeard, you need the ‘python-cheetah’ module, which Sickbeard depends on.

sudo apt-get install python-cheetah

Let’s download the tarball (yet again, I don’t like Git for installations).

cd ~/Downloads
wget --no-check-certificate https://github.com/midgetspy/Sick-Beard/tarball/master
tar -xzvf master

Once everything is unpacked, create a directory to run Sickbeard from and move the files into it. The hash ‘f64b94f’ could be different in your installation (it depends on the latest Git check-in).

mkdir ~/SickBeard
mv /home/nas/Downloads/midgetspy-Sick-Beard-f64b94f/* /home/nas/SickBeard/

Now test the install by running the Sickbeard python script.

cd /home/nas/SickBeard/
python SickBeard.py

If Sickbeard launches without a problem, then you can add it to the startup of your server.

Autostarting is fairly easy: just copy the ‘init.ubuntu’ file from the Sickbeard directory.

sudo cp ~/SickBeard/init.ubuntu /etc/init.d/sickbeard
sudo chmod +x /etc/init.d/sickbeard
sudo update-rc.d sickbeard defaults

This startup script needs to know which user to run as and where Sickbeard lives. These variables are set in the ‘/etc/default/sickbeard’ file.

sudo nano /etc/default/sickbeard
SB_USER=nas
SB_HOME=/home/nas/SickBeard
SB_DATA=/home/nas/SickBeard/sickbeard_data
SB_PIDFILE=/home/nas/SickBeard/pid

Now we can start the Sickbeard service.

sudo service sickbeard start

Sickbeard runs at http://<server-ip>:8080; from there you can configure your Sickbeard installation.

Transmission

Last service up is Transmission. Any good home NAS must have this: it’s an awesome tool to schedule torrents remotely.

By default it should be installed. For those who don’t have it:

sudo apt-get install transmission

To start Transmission I created a startup script that makes sure the service only runs once. As with the RDP environment, there is a chance that Transmission gets started twice due to session creation in RDP; a simple hack is a script that avoids this.

sudo mkdir -p /home/nas/Scripts/start
sudo nano ~/Scripts/start/runonce.sh
#!/bin/sh

# collect all arguments into the command line to run
for var in "$@"
do
        SERVICE="$SERVICE $var"
done

# check whether the command is already running (ignoring the grep and the shell itself)
RESULT=`ps -aux | grep -i ${SERVICE} | grep -v grep | grep -v /bin/sh`

echo Result: $RESULT

if [ "${RESULT:-null}" = null ]; then
        echo "not running... starting $SERVICE"
        $SERVICE
else
        echo "already running"
fi

And add ‘transmission-gtk’ to the XFCE session autostart:

/home/nas/Scripts/start/runonce.sh /usr/bin/transmission-gtk

So that’s about it for part three. The next parts will handle Owncloud and Subsonic.

Read More