Backup storage: CentOS & QNAP & iSCSI

I’m using a CentOS 6 box as a backup server running BackupPC. Until a couple of days ago I had a Thecus N7700PRO NAS with 4x 3 TB disks configured as RAID5, accessed via iSCSI from the CentOS box. Then two hard drives died at the same time (or not; at least that’s what the Thecus reported at that point). Thecus support said the system could no longer see drive 1 (one of the apparently failed ones), even though the Thecus still showed it, and that I should install a new hard drive in place of drive 1 and try to duplicate all data from drive 2 to the new drive 1 using “dd”. When I rebooted the Thecus, hoping it might just magically work again, the whole RAID5 was gone. Like it never existed. I said screw it and bought a QNAP 19″ 1U TS-412U off eBay (new) for about 580 EUR, along with four 4 TB enterprise drives from different vendors, picked according to QNAP’s hard drive compatibility list. Here are the steps required to get the backup server back in business:

1. Insert the hard drives, power on and run the initial setup. The QNAP will get an IP via DHCP instead of the 169.254.100.100 that’s mentioned in the quick start guide.

2. Download the latest firmware from here and upload it when prompted.

3. Set up RAID5 or whatever you prefer. You can only choose ext3 or ext4, but don’t let that confuse you: it’s just the lower layer that the QNAP uses internally; on top of it we will later build our own XFS filesystem and use LVM.

4. Configure the iSCSI target & LUN according to this QNAP link (sorry, German only, but the pictures should be sufficient to figure it out); I chose Instant Allocation. You may have to wait for the RAID5 to finish building before you can choose the LUN location. Also, after you have created the target it may take a while before it becomes available (you can check the progress under “iSCSI Target List” – “Alias” – “id:0 …” – “Status”).

5. The QNAP is directly connected to eth1 on the CentOS box without a switch. eth1 has IP 192.168.1.1, the QNAP has 192.168.1.100.
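In case eth1 isn’t configured yet, a minimal static config could look like this (just a sketch for CentOS 6 network scripts; adjust device name and addresses to your setup):
# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.1
NETMASK=255.255.255.0
Apply it with service network restart and check with ping 192.168.1.100 that the QNAP answers.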

6. On the CentOS box, delete the old Thecus from the iSCSI initiator database:
iscsiadm -m node -o delete
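Note that run like this (without a target name) it clears the whole node database. If you’d rather remove only the old Thecus entry, something along these lines should do it (IQN and IP being whatever iscsiadm -m node lists for it):
iscsiadm -m node -T <old-thecus-iqn> -p <old-thecus-ip>:3260 -o delete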

7. Make sure node startup is set to automatic in /etc/iscsi/iscsid.conf:
node.startup = automatic
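Alternatively, if you’d rather not touch the global default, the same setting can be applied to just the QNAP’s node record once the discovery in step 8 has created it (sketch, using the IQN from step 9):
iscsiadm -m node -T iqn.2004-04.com.qnap:ts-412u:iscsi.raid5.c8af3e -p 192.168.1.100:3260 -o update -n node.startup -v automatic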

8. Discover the new QNAP:
iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260

9. Make sure it’s there:
iscsiadm -m node
This should output something like:
192.168.1.100:3260,1 iqn.2004-04.com.qnap:ts-412u:iscsi.raid5.c8af3e

10. If you want, reboot the box and confirm that the target is still listed by iscsiadm -m node afterwards.
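If you’d rather skip the reboot, logging in to the target manually should have the same effect (again using the IQN from step 9):
iscsiadm -m node -T iqn.2004-04.com.qnap:ts-412u:iscsi.raid5.c8af3e -p 192.168.1.100:3260 --login
iscsiadm -m session should then show the active session.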

11. In dmesg something like this should have popped up now:
scsi5 : iSCSI Initiator over TCP/IP
scsi 5:0:0:0: Direct-Access QNAP iSCSI Storage 3.1 PQ: 0 ANSI: 5
sd 5:0:0:0: Attached scsi generic sg1 type 0
sd 5:0:0:0: [sdb] 23017373696 512-byte logical blocks: (11.7 TB/10.7 TiB)
sd 5:0:0:0: [sdb] Write Protect is off
sd 5:0:0:0: [sdb] Mode Sense: 2f 00 00 00
sd 5:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
sdb: unknown partition table
sd 5:0:0:0: [sdb] Attached SCSI disk
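If more than one SCSI disk is attached and it’s not obvious which one is the QNAP LUN, these two should point you to the right device:
iscsiadm -m session -P 3 | grep -i "attached scsi disk"
ls -l /dev/disk/by-path/ | grep iscsi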

12. I like LVM. It’s not strictly necessary, but it may come in handy later on. After running pvcreate in the next step, check with pvdisplay that /dev/sdb shows up (run pvscan or reboot if it doesn’t):
--- Physical volume ---
PV Name /dev/sdb

13. I created PV, VG and LV and assigned 100% of the available space to the LV:
pvcreate /dev/sdb
vgcreate data /dev/sdb
lvcreate --name rz-nas01 -l100%FREE data
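A quick sanity check afterwards, using the terse LVM reporting commands:
pvs   # /dev/sdb should be listed as a PV in VG "data"
vgs   # VG "data" should show roughly 10.7 TiB
lvs   # LV "rz-nas01" should take up all of it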

14. You should now have /dev/mapper/data-rz--nas01 or /dev/data/rz-nas01, which are just links to a /dev/dm-x device. If you don’t, you can try restarting the lvm2-monitor service (/etc/init.d/lvm2-monitor restart) or just reboot. Run lvdisplay to check that the LV is there and its “LV Status” is “available”.

15. Create a filesystem on the LV, I chose XFS:
mkfs.xfs /dev/mapper/data-rz--nas01
This could take a few moments.
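Afterwards blkid should report the new filesystem (the UUID it prints can be used for the fstab entry in the next step):
blkid /dev/mapper/data-rz--nas01
# e.g. /dev/mapper/data-rz--nas01: UUID="..." TYPE="xfs"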

16. If you want the storage to get mounted automatically on boot, use something like this in /etc/fstab:
/dev/mapper/data-rz--nas01 /var/lib/BackupPC xfs defaults,_netdev 0 0
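If you prefer, the UUID from blkid above can be used instead of the mapper path:
UUID=<uuid-from-blkid> /var/lib/BackupPC xfs defaults,_netdev 0 0
The _netdev option defers the mount until the network (and thus the iSCSI session) is up; on CentOS 6 the netfs init script handles these entries at boot and is usually enabled by default (chkconfig --list netfs will tell you).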

17. Mount it with mount /dev/mapper/data-rz--nas01 /mnt if you want to test it first; otherwise you can just do mount -a.
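A quick check that it’s really mounted and has the expected size:
df -h /var/lib/BackupPC   # or /mnt if you did the test mount
mount | grep data-rz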

18. Done!