1. Configuring SCST iSCSI Target on Debian Linux (Wheezy)

    Sun 01 February 2015

    My goal is to export ZFS zvol volumes through iSCSI to other machines. The platform I'm using is Debian Wheezy.

    There are three iSCSI target solutions available for Linux:

    1. LIO
    2. IET
    3. SCST

    I've briefly played with LIO, but the targetcli tool is interactive only. If you want to automate things with scripts, you need to learn the Python API. I wonder what's wrong with a plain old text-based configuration file.

    iscsitarget, or IET, is broken on Debian Wheezy. If you just 'apt-get install iscsitarget', the iSCSI service will crash as soon as you connect to it. This has been the case for years, and I wonder why they don't just drop the package. It is true that you can manually download the "latest" version of IET, but don't bother: the project seems abandoned, and the latest release dates from 2010.

    It seems that SCST is at least maintained and uses plain old text-based configuration files. So it has that going for it, which is nice. SCST does not require kernel patches to run, but a particular patch regarding "CONFIG_TCP_ZERO_COPY_TRANSFER_COMPLETION_NOTIFICATION" is said to improve performance.

    To use full power of TCP zero-copy transmit functions, especially
    dealing with user space supplied via scst_user module memory, iSCSI-SCST
    needs to be notified when Linux networking finished data transmission.
    For that you should enable CONFIG_TCP_ZERO_COPY_TRANSFER_COMPLETION_NOTIFICATION
    kernel config option. This is highly recommended, but not required.
    Basically, iSCSI-SCST works fine with an unpatched Linux kernel with the
    same or better speed as other open source iSCSI targets, including IET,
    but if you want even better performance you have to patch and rebuild
    the kernel.
    

    So, in general, patching your kernel is not strictly required, but an example will be given anyway.

    Getting the source

    cd /usr/src
    

    We need the following files:

    wget http://heanet.dl.sourceforge.net/project/scst/scst/scst-3.0.0.tar.bz2
    wget http://heanet.dl.sourceforge.net/project/scst/iscsi-scst/iscsi-scst-3.0.0.tar.bz2
    wget http://heanet.dl.sourceforge.net/project/scst/scstadmin/scstadmin-3.0.0.tar.bz2
    

    We extract them with:

    tar xjf scst-3.0.0.tar.bz2
    tar xjf iscsi-scst-3.0.0.tar.bz2
    tar xjf scstadmin-3.0.0.tar.bz2
    

    Patching the kernel

    You can skip this part if you don't feel like you need or want to patch your kernel.

    apt-get install linux-source kernel-package
    

    We need to extract the kernel source:

    cd /usr/src
    tar xjf linux-source-3.2.tar.bz2
    cd linux-source-3.2
    

    Now we first copy the kernel configuration from the current system:

    cp /boot/config-3.2.0-4-amd64 .config
    

    We patch the kernel with two patches:

    patch -p1 < /usr/src/scst-3.0.0/kernel/scst_exec_req_fifo-3.2.patch
    patch -p1 < /usr/src/iscsi-scst-3.0.0/kernel/patches/put_page_callback-3.2.57.patch
    

    It seems that for many different kernel versions, separate patches can be found in the above paths. If you follow these steps at a later date, please check the version numbers.
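
    For example, you can list the patch directories to see which kernel versions are covered:

    ls /usr/src/scst-3.0.0/kernel/
    ls /usr/src/iscsi-scst-3.0.0/kernel/patches/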

    The patches are based on stock kernels from kernel.org. I've applied them against the Debian-patched kernel and faced no problems, but your mileage may vary.

    Let's build the kernel (will take a while):

    yes | make-kpkg -j $(nproc) --initrd --revision=1.0.custom.scst kernel_image
    

    The 'yes' is piped into the make-kpkg command to answer the questions that come up during compilation with 'y'. You could also set the appropriate value in the .config file beforehand.
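
    If you prefer the .config route, a minimal sketch would be to enable the option the patch introduces and let 'make oldconfig' sort out the rest:

    # Enable the option added by the put_page_callback patch
    echo "CONFIG_TCP_ZERO_COPY_TRANSFER_COMPLETION_NOTIFICATION=y" >> .config
    make oldconfig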

    The end result of the make-kpkg command is a kernel package in .deb format in /usr/src. Install it like this:

    dpkg -i /usr/src/<custom kernel image>.deb
    

    Now reboot into the new kernel:

    reboot
    

    Compiling SCST, ISCSI-SCST and SCSTADMIN

    cd /usr/src/scst-3.0.0
    make install
    
    cd /usr/src/iscsi-scst-3.0.0
    make install
    
    cd /usr/src/scstadmin-3.0.0
    make install
    

    Make SCST start at boot

    On Debian Jessie:

    systemctl enable scst.service
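
    On Wheezy itself, which still uses sysvinit, the equivalent should be the standard update-rc.d mechanism, assuming the SCST init script was installed as /etc/init.d/scst (as used further below):

    update-rc.d scst defaults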
    

    Configure SCST

    Copy the example configuration file to /etc:

    cp /usr/src/iscsi-scst-3.0.0/etc/scst.conf /etc
    

    Edit /etc/scst.conf to your liking. This is an example:

    HANDLER vdisk_fileio {
            DEVICE disk01 {
                    filename /dev/sdb
                    nv_cache 1
            }
    }
    
    TARGET_DRIVER iscsi {
            enabled 1
    
            TARGET iqn.2015-10.net.vlnb:tgt {
                    IncomingUser "someuser somepasswordof12+chars"
                    HeaderDigest   "CRC32C,None"
                    DataDigest   "CRC32C,None"
                    LUN 0 disk01
    
                    enabled 1
            }
    }
    

    Please note that the password must be at least 12 characters long.
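
    Since my goal is to export ZFS zvols, a variant using the vdisk_blockio handler would look something like the sketch below. The pool name 'tank' and volume name 'vol1' are hypothetical; the zvol must be created first, e.g. with 'zfs create -V 100G tank/vol1'.

    HANDLER vdisk_blockio {
            DEVICE disk01 {
                    # ZFS on Linux exposes zvols under /dev/zvol/<pool>/<volume>
                    filename /dev/zvol/tank/vol1
            }
    }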

    After this, you can start the SCST service and connect your initiator to the appropriate LUN.

    /etc/init.d/scst start
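
    On a Linux client, you can then discover and log in to the target with the open-iscsi initiator tools. A sketch, with a hypothetical target IP of 192.168.1.10:

    # Discover the targets offered by the SCST machine
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    # Log in to the target defined in scst.conf
    iscsiadm -m node -T iqn.2015-10.net.vlnb:tgt -p 192.168.1.10 --login

    Because the target requires CHAP authentication (the IncomingUser line), the matching username and password must also be configured on the initiator side.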
    

    Closing words

    It turned out that setting up SCST and compiling a kernel wasn't that much of a hassle. The main issue with patching kernels is that you have to repeat the procedure every time a new kernel version is released. And there is always a risk that a new kernel version breaks the SCST patches.

    However, the whole process can easily be automated, and thus tested in a virtual environment.
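
    As a minimal sketch of such automation, assuming the exact versions and paths used throughout this article, the download, patch and build steps could be wrapped in a single script:

    #!/bin/sh
    # Sketch only: versions and paths as used in this article.
    set -e
    cd /usr/src
    for pkg in scst iscsi-scst scstadmin; do
        wget http://heanet.dl.sourceforge.net/project/scst/$pkg/$pkg-3.0.0.tar.bz2
        tar xjf $pkg-3.0.0.tar.bz2
    done
    tar xjf linux-source-3.2.tar.bz2
    cd linux-source-3.2
    cp /boot/config-3.2.0-4-amd64 .config
    patch -p1 < /usr/src/scst-3.0.0/kernel/scst_exec_req_fifo-3.2.patch
    patch -p1 < /usr/src/iscsi-scst-3.0.0/kernel/patches/put_page_callback-3.2.57.patch
    yes | make-kpkg -j "$(nproc)" --initrd --revision=1.0.custom.scst kernel_image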

    Tagged as : iSCSI SCST
  2. Why I Do Use ZFS as a File System for My NAS

    Thu 29 January 2015

    In February 2011, I posted an article about why I did not use ZFS as the file system for my 18 TB NAS.

    At the time, I believe the arguments in that article were relevant, but much has changed since then, and I no longer consider it relevant.

    My stance on ZFS is in the context of a home NAS build.

    I really recommend giving ZFS a serious consideration if you are building your own NAS. It's probably the best file system you can use if you care about data integrity.

    ZFS may only be available for non-Windows operating systems, but there are quite a few easy-to-use NAS distros available that turn your hardware into a full-featured home NAS box that can be managed through your web browser. A few examples are FreeNAS and NAS4Free.

    I also want to add this: I don't think it's wrong or particularly risky if you, as a home NAS builder, decide not to use ZFS and select a 'legacy' solution if that better suits your needs. I think proponents of ZFS often somewhat overstate the risks ZFS mitigates, maybe to promote ZFS. I do think those risks are relevant, but it all depends on your circumstances. So you decide.

    May 2016: I have also written a separate article on how I feel about using ZFS for DIY home NAS builds.

    There is also an Arstechnica article comparing FreeNAS and NAS4free.

    If you are quite familiar with FreeBSD or Linux, I do recommend this ZFS how-to article from Arstechnica. It offers a very nice introduction to ZFS and explains terms like 'pool' and 'vdev'.

    If you are planning on using ZFS for your own home NAS, I would recommend reading the following articles:

    My historical reasons for not using ZFS at the time

    When I started with my 18 TB NAS in 2009, there was no such thing as ZFS for Linux. ZFS was only available in a stable version for OpenSolaris, and we all know what happened to OpenSolaris (it's gone).

    So you might ask: "Why not use ZFS on FreeBSD then?". Good question, but it was bad timing:

    The FreeBSD implementation of ZFS only became stable in January 2010, six months after I built my NAS (summer 2009). So FreeBSD was not an option at that time.
    

    One of the other objections against ZFS is the fact that you cannot expand your storage by adding single drives and growing the array as your data set grows.

    A ZFS pool consists of one or more vdevs, and a vdev is basically a traditional RAID array. You expand storage capacity by expanding the pool, not the vdevs: a vdev itself cannot be expanded, you can only add vdevs to a pool.

    So ZFS either forces you to invest in storage you don't need upfront, or it forces you to invest later on, because you may waste quite a few extra drives on parity. For example, if you start with a 6-drive RAID6 (RAIDZ2) configuration, you will probably expand with another 6 drives, so the pool ends up with 4 parity drives out of 12 (33% loss). Investing upfront in 10 drives instead of 6 would have been more efficient, because you only lose 2 drives out of 10 to parity (20% loss).
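
    To make this concrete, here is a hypothetical sketch (the pool name 'tank' and the disk names are made up): expansion happens by adding a whole new vdev, never by growing the existing one.

    # Create a pool consisting of a single 6-drive RAIDZ2 vdev (2 parity drives)
    zpool create tank raidz2 sdb sdc sdd sde sdf sdg
    # Expanding means adding a second vdev; another 2 drives are lost to parity
    zpool add tank raidz2 sdh sdi sdj sdk sdl sdm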

    So at the time, I found it reasonable to stick with what I knew: Linux & MDADM.

    But my new 71 TiB NAS is based on ZFS.

    I wrote an article about my worry that ZFS may die with FreeBSD as its sole backing, but fortunately, I've been proven very, very wrong.

    ZFS is now supported on FreeBSD and Linux. Despite some licensing issues that prevent ZFS from being integrated into the Linux kernel itself, it can still be used as a regular kernel module and it works perfectly.

    There is even an open-source ZFS consortium that brings together all the developers for the different operating systems supporting ZFS.

    ZFS is here to stay for a very long time.

    Tagged as : ZFS
  3. FreeBSD 10.1 Unattended Install Over PXE & HTTP (No NFS)

    Fri 16 January 2015

    To gain some more experience with FreeBSD, I decided to make a PXE-based unattended installation of FreeBSD 10.1.

    My goal is to set something up similar to Debian/Ubuntu + preseeding or Redhat/CentOS + kickstart.

    Getting a PXE-based unattended installation of FreeBSD 10.1 was not easy and I was unable to automate a ZFS-based install using bsdinstall.

    I would have expected something similar to a standard netboot install.

    Below, I've documented what I did to perform a basic installation of FreeBSD using only DHCP and TFTP; no NFS is required.

    Overview of all the steps:

    1. have a working DHCP with PXE boot options
    2. have a working TFTP server
    3. customise your pxelinux boot menu
    4. install a FreeBSD box manually, or use an existing one
    5. download and install mfsbsd on the FreeBSD system
    6. download a FreeBSD release iso image on the FreeBSD system
    7. configure and customise your FreeBSD PXE boot image settings
    8. build the PXE boot image and copy it to your TFTP server
    9. PXE boot your system and boot the FreeBSD image

    Setting up a DHCP server + TFTP server

    Please take a look at another article I wrote on setting up PXE booting.

    Configuring the PXE boot menu

    Add these lines to your PXE Menu:

    LABEL FreeBSD10
    kernel memdisk
    append initrd=BSD/FreeBSD/10.1/mfsbsd-10.1-RC3-amd64.img harddisk raw
    

    Set up or gain access to a FreeBSD host

    You need to set up or gain access to a FreeBSD system, because the mfsbsd tool only works on FreeBSD. You will use this system to generate a FreeBSD PXE boot image.

    Installing mfsbsd

    First we download mfsbsd.

    fetch http://mfsbsd.vx.sk/release/mfsbsd-2.1.tar.gz
    tar xzf mfsbsd-2.1.tar.gz
    

    Then we get a FreeBSD ISO:

    fetch http://ftp.freebsd.org/pub/FreeBSD/releases/ISO-IMAGES/10.1/FreeBSD-10.1-RELEASE-amd64-disc1.iso
    

    Mount the ISO:

    mdconfig -a -t vnode -f /root/FreeBSD-10.1-RELEASE-amd64-disc1.iso
    mount_cd9660 /dev/md0 /cdrom/
    

    Setting up rc.local

    Enter the mfsbsd-2.1 directory. Put the following content in the conf/rc.local file.

    fetch http://<yourwebserver>/pxe/installerconfig -o /etc/installerconfig
    tail -n 7 /etc/rc.local > /tmp/start.sh
    chmod +x /tmp/start.sh
    /tmp/start.sh 
    exit 0
    
    #!/bin/csh
    setenv DISTRIBUTIONS "kernel.txz base.txz"
    setenv BSDINSTALL_DISTDIR /tmp
    setenv BSDINSTALL_DISTSITE ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/10.1-RELEASE
    
    bsdinstall distfetch 
    bsdinstall script /etc/installerconfig
    

    As you can see, there is a script within a script that is executed separately by rc.local. That's a bit ugly, but it does work.

    Setting up installerconfig (FreeBSD unattended install)

    The 'installerconfig' script is a file in a special format used by the bsdinstall tool to automate the installation. The top part sets variables that control the unattended installation. The bottom part is a script executed post-install, chrooted into the newly installed system.

    Put this in 'installerconfig'

    PARTITIONS=da0
    DISTRIBUTIONS="kernel.txz base.txz"
    BSDINSTALL_DISTDIR=/tmp
    BSDINSTALL_DISTSITE=ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/10.1-RELEASE
    
    #!/bin/sh
    echo "Installation complete, running in host system"
    echo "hostname=\"FreeBSD\"" >> /etc/rc.conf
    echo "autoboot_delay=\"5\"" >> /boot/loader.conf
    echo "sshd_enable=YES" >> /etc/rc.conf
    echo "Setup done" >> /tmp/log.txt
    echo "Setup done."
    poweroff
    

    As you can see, the post-install script enables SSH, sets the hostname and reduces the autoboot delay.

    Please note that I faced an issue where the bsdinstall program would not interpret the options set in the installerconfig script. This is why I exported them with 'setenv' in the rc.local script.

    With Debian preseeding or Redhat kickstarting, you can host the preseed or kickstart file on a webserver. Changing the PXE-based installation is just a matter of editing the preseed or kickstart file on the webserver.

    Because it's no fun having to generate a new image every time you want to update your unattended installation, it's recommended to host the installerconfig file on a webserver, as if it were a preseed or kickstart file.

    This saves you from having to regenerate the PXE-boot image file every time.

    You can still put the installerconfig in the image itself. If you want a fixed 'installerconfig' file containing the bsdinstall instructions, also put this file in the 'conf' directory. Next, edit the Makefile and search for this string:

    .for FILE in rc.conf ttys
    

    For me, it was at line 315. Change it to:

    .for FILE in rc.conf ttys installerconfig
    

    Building the PXE boot image

    Now that everything is configured, we can generate the boot image with mfsbsd. Run 'make'. If it fails with this error:

    Creating image file ...
    /root/mfsbsd-2.1/tmp/mnt: write failed, filesystem is full
    *** Error code 1
    
    Stop.
    make: stopped in /root/mfsbsd-2.1
    

    just run 'make' again. In my experience, make consistently worked the second time. I'm not sure why this happens.
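
    Since the second run consistently succeeds, the retry is easy to script. This is just a convenience, not a fix for the underlying problem:

    make || make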

    The end result of this whole process is a file like 'mfsbsd-se-10.1-RC3-amd64.img'.

    You can copy this image to the appropriate folder on your TFTP server. In my example it would be:

    /srv/tftp/BSD/FreeBSD/10.1/mfsbsd-10.1-RC3-amd64.img
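
    For example, assuming a (hypothetical) hostname 'tftpserver' for the TFTP machine, copying the image from the mfsbsd build directory could look like this:

    scp mfsbsd-10.1-RC3-amd64.img tftpserver:/srv/tftp/BSD/FreeBSD/10.1/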
    

    Test the PXE installation

    Boot a test machine from PXE and boot your custom generated image.

    Final words

    I'm a bit unhappy about how difficult it was to create a PXE-based unattended FreeBSD installation. The bsdinstall software seems buggy to me. It could also just be me misunderstanding how it all works, but I can't find any documentation on how to properly use bsdinstall for an unattended installation.

    If anyone has suggestions or ideas to implement an unattended bsdinstall script 'properly', with ZFS support, I'm all ears.

    This is the recipe I tried to use to get a root-on-zfs install:

    ZFSBOOT_POOL_NAME=TEST_ROOT
    ZFSBOOT_VDEV_TYPE=mirror
    ZFSBOOT_POOL_SIZE=10g
    ZFSBOOT_DISKS="da0 da1"
    ZFSBOOT_SWAP_SIZE=2g
    ZFSBOOT_CONFIRM_LAYOUT=1
    

    The installer would never recognise the second disk and the script would get stuck.

    I'm aware that mfsbsd has an option to use a custom root-on-zfs script, but I wanted to use the 'official' FreeBSD tools.

    Tagged as : PXE
