1. ZFS RAIDZ Expansion Is Awesome but Has a Small Caveat

    Tue 22 June 2021

    Introduction

    One of my most popular blog articles is this article about the "Hidden Cost of using ZFS for your home NAS". To summarise the key argument of this article:

    Expanding ZFS-based storage can be relatively expensive / inefficient.

    For example, if you run a ZFS pool based on a single 3-disk RAIDZ vdev (RAID5 equivalent2), the only way to expand a pool is to add another 3-disk RAIDZ vdev1.

    You can't just add a single disk to the existing 3-disk RAIDZ vdev to create a 4-disk RAIDZ vdev because vdevs can't be expanded.
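
    As a concrete illustration, expanding such a pool today means adding a whole vdev at once. A minimal sketch with hypothetical pool and disk names:

    # pool 'tank' consists of one 3-disk RAIDZ1 vdev; short of replacing
    # every disk with a larger one, the only way to grow it is to add a
    # second, complete RAIDZ1 vdev:
    zpool add tank raidz1 /dev/sdd /dev/sde /dev/sdf

    # there is no command that turns the existing 3-disk RAIDZ1 vdev into
    # a 4-disk RAIDZ1 vdev.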

    The impact of this limitation is that you have to buy all storage upfront even if you don't need the space for years to come.

    Otherwise, by expanding with additional vdevs you lose capacity to parity you may not really want/need, which also limits the maximum usable capacity of your NAS.

    RAIDZ vdev expansion

    Fortunately, this limitation of ZFS is being addressed!

    ZFS co-founder Matthew Ahrens created a pull request around June 11, 2021 detailing a new ZFS feature that would allow for RAIDZ vdev expansion.

    Finally, ZFS users will be able to expand their storage by adding just a single drive at a time. This feature will make it possible to expand storage as you go, which is especially of interest to budget-conscious home users3.
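
    Based on the pull request, the expansion is expected to be driven by the existing zpool attach command, roughly along these lines (a sketch with hypothetical names; the exact syntax may still change before the feature is released):

    # attach a single new disk to the existing RAIDZ vdev (here: raidz1-0),
    # growing it from 3-wide to 4-wide; progress shows up in zpool status
    zpool attach tank raidz1-0 /dev/sdd
    zpool status tank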

    Jim Salter has written a good article about this on Ars Technica.

    There is still a caveat

    Existing data will be redistributed or rebalanced over all drives, including the freshly added drive. However, the data that was already stored on the vdev is not rewritten with the new data-to-parity ratio after the vdev is expanded. This means that this data remains stored with the older, less efficient ratio.

    I think Matthew Ahrens explains it best in his own words:

    After the expansion completes, old blocks remain with their old data-to-parity ratio (e.g. 5-wide RAIDZ2, has 3 data to 2 parity), but distributed among the larger set of disks. New blocks will be written with the new data-to-parity ratio (e.g. a 5-wide RAIDZ2 which has been expanded once to 6-wide, has 4 data to 2 parity). However, the RAIDZ vdev's "assumed parity ratio" does not change, so slightly less space than is expected may be reported for newly-written blocks, according to zfs list, df, ls -s, and similar tools.
    

    So, if you add a new drive to a RAIDZ vdev, you'll notice that after expansion, you will have less capacity available than you would theoretically expect.

    However, it is even more important to understand that this effect accumulates. This is especially relevant for home users.

    I think that the whole concept of starting with a small number of disks and expanding as you go is very desirable and typical for home users. But this also means that every time a disk is added to the vdev, the existing data is still stored with the old data-to-parity ratio.

    Imagine that we have a 10-drive chassis and we start out with a 4-drive RAIDZ2.

    If we keep adding drives5 as in this example, until the chassis is full at 10 drives, about 1.35 drives worth of capacity is 'lost' to parity overhead/efficiency loss4.

    That is quite a lot of overhead or loss of capacity, I think.

    How is this overhead calculated? If we were to buy all 10 drives upfront and create a 10-drive RAIDZ2 vdev, the parity overhead is 20%, meaning that 20% of the total raw capacity of the vdev is used for storing parity. This is the most efficient scenario in this case.

    When we start out with the four-drive RAIDZ2 vdev, the parity overhead is 50%. That's a difference of 30 percentage points compared to the 'ideal' 10-drive setup.

    As we keep adding drives, the relative overhead of the parity keeps dropping, so we end up with 'multiple data sets' with different data-to-parity ratios that are all less efficient than the end stage of 10 drives.
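
    To make this concrete, these are the parity shares for a RAIDZ2 vdev at each width (always 2 parity disks, the rest data):

     4-wide RAIDZ2:  2 data / 2 parity -> 50% of raw capacity is parity
     5-wide RAIDZ2:  3 data / 2 parity -> 40%
     6-wide RAIDZ2:  4 data / 2 parity -> ~33%
     7-wide RAIDZ2:  5 data / 2 parity -> ~29%
     8-wide RAIDZ2:  6 data / 2 parity -> 25%
     9-wide RAIDZ2:  7 data / 2 parity -> ~22%
    10-wide RAIDZ2:  8 data / 2 parity -> 20%

    Data written while the vdev was 4-wide keeps its 50% parity share until it is rewritten, data written at 5-wide keeps 40%, and so on. How much capacity is lost in total therefore depends on how much data was written at each stage, which is what the sheet below estimates.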

    I created a Google Sheet to roughly estimate this overhead for each stage, but my math was totally off. Fortunately, Yorick rewrote the sheet, which can be found here. Thanks Yorick! Furthermore, TrueNAS user DayBlur shared additional insights on the calculations if you are interested in that.

    The Google Sheet allows you to play with various variables to estimate how much capacity is lost for a given scenario. Please note that any losses that may arise because the number of drives used requires data to be padded - as discussed in the Ars Technica article - are not part of the calculation.

    It is a bit unfortunate that this overhead manifests itself most strongly in exactly the scenario of the home user who wants to start small and expand as they go. But there is good news!

    Lost capacity can be recovered!

    The overhead or 'lost capacity' can be recovered by rewriting existing data after the vdev has been expanded, because the data will then be written with the more efficient data-to-parity ratio of the larger vdev.

    Rewriting all data may take quite some time, and you may opt to postpone this step until the vdev has been expanded a couple of times, so that the data-to-parity ratio has become 'good enough' that significant storage gains can be had by rewriting the data.

    Because capacity lost to overhead can be fully recovered, I think that this caveat is relatively minor, especially compared to the old situation where we had to expand a pool with entire vdevs and there was no way to recover any overhead.

    There is currently no built-in mechanism in the native ZFS tools to trigger this data rewrite. It will be a manual process until somebody creates a script that automates it. According to Matthew Ahrens, restriping the data as part of the vdev expansion process would be an effort of a similar scale as the RAIDZ expansion feature itself.
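
    One manual approach (not something the ZFS tools prescribe) is to replicate a dataset within the same pool and swap the copy into place, so that every block gets rewritten with the new ratio. A rough sketch with hypothetical pool and dataset names; make sure you have enough free space and current backups first:

    # snapshot the dataset and copy it within the same pool;
    # the copy is written with the new, more efficient ratio
    zfs snapshot tank/data@rebalance
    zfs send tank/data@rebalance | zfs receive tank/data-rewritten

    # after verifying the copy, remove the original and swap the copy in
    zfs destroy -r tank/data
    zfs rename tank/data-rewritten tank/data
    zfs destroy tank/data@rebalance

    Simply copying files to a new location and removing the originals achieves the same effect, at the cost of losing snapshot history.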

    Evaluation

    I think it cannot be stated enough how awesome the RAIDZ vdev expansion feature is, especially for home users who want to start small and grow their storage over time.

    Although the expansion process can accumulate quite a bit of overhead, that overhead can be recovered by rewriting existing data, which is probably not a problem for most people.

    Despite all the awesome features and capabilities of ZFS, I think quite a few home users went with other storage solutions because of the relatively high expansion cost/overhead. Now that this barrier will be overcome, I think that ZFS will be more accessible to the home user DIY NAS crowd.

    Release timeline

    According to the Ars Technica article by Jim Salter, this feature will probably become available in August 2022, so we need to have some patience. Even so, you might already decide to build your new DIY NAS on ZFS: by the time you need to expand your storage, the feature may be available!

    Update on some - in my opinion - bad advice

    The 2.5 Admins podcast (which I enjoy listening to) discussed the topic of RAIDZ expansion in episode 45.

    There are two remarks made that I want to address, because I disagree with them.

    Don't rewrite the data?

    As in his Ars Technica article, Jim Salter keeps advocating not to bother rewriting the data after a vdev expansion, but I personally disagree with this advice. I hope I have demonstrated that if you keep adding drives, the parity overhead is significant enough for most home users to make it worthwhile to rewrite the data after a few drives have been added.

    Just use mirrors!

    I also disagree with the advice of using mirrors, especially for home users6. I personally think it is bad advice, because home users have other needs and desires than enterprise environments.

    If 'just use mirrors' is still the advice, why did Matthew Ahrens build the whole RAIDZ vdev expansion feature in the first place? I think RAIDZ vdev expansion is really beneficial for home users.

    Maybe Jim and I have very different ideas about what a home user would want or need in a DIY NAS storage solution. I think that home users want this:

    As much storage as possible for as little money as possible with acceptable redundancy.

    In addition, I think that home users in general work with larger files (multiple megabytes at least). And if they sometimes work with smaller files, they accept some performance loss due to the lower random I/O performance of single RAIDZ vdevs7.

    Frankly, to me it feels like the 'just use mirrors' advice is used to 'downplay' a significant limitation of ZFS8. Jim is a prolific writer on Ars Technica and has a large audience, so his advice matters. That's why I think it's sad that he sticks with 'just use mirrors' while that's clearly not in the best interest of most home users.

    However, that's just my opinion, you decide for yourself what's best.


    1. The other method is to replace all existing drives one by one with larger ones. Only after you have replaced all drives will you be able to gain extra capacity, so this method has a similar downside as expanding with extra vdevs: you must buy multiple drives at once. In addition, I think this method is rather time-consuming and cumbersome, although people do use it to expand capacity. And to be fair: you can indeed add 4+ disk vdevs, vdevs with a higher RAIDZ level, or mirrors, but none of that makes sense in this context. 

    2. Just to illustrate the level of redundancy in terms of how many disks can be lost and still be operational. 

    3. I personally think that it's even great for small and medium business owners. Only larger businesses want to keep adding relatively large vdevs consisting of multiple drives because if they keep expanding with just one drive at a time, they may have to expand capacity very frequently which may not be practical. 

    4. If you would only upgrade once the pool is almost full - not recommended! - that overhead grows to 1.69 drives. 

    5. So you go from four to five drives. Then from five to six drives, and so on. 

    7. If random I/O performance is important, it is probably wise to go for SSD-based storage anyway. 

    8. Resolved by ZFS vdev expansion, obviously, once it lands in production. 

    Tagged as : Storage
  2. Recycle Your Old Laptop Display and Turn It Into a Monitor

    Sat 13 March 2021

    During a cleaning spree I decided that it was time to recycle two old laptops that were just collecting dust on a shelf for many years. Although I didn't have any purpose for them anymore, I realised that the displays were still perfectly fine.

    This is the display of my old 13" Intel-based MacBook.

    screen

    Somehow it felt wasteful to just throw the laptops out, so I wondered: would it be possible to use these displays as regular monitors?

    It turns out: yes, this is possible, and it is also quite simple. My blogpost covers the same topic, with a few more pictures1.

    There is a particular LCD driver board that can be found all over eBay and the well-known Chinese webshops.

    controller2

    These boards cost around $20 and come with most things you need to get the display operational.

    The board includes:

    1. A small controller board for the on-screen display (center)
    2. A high-voltage power supply for the backlight (left)
    3. The data cable to drive the actual display (right)
    4. Support for audio-passthrough + volume control (right corner)

    The board doesn't include:

    1. A 12-volt power supply
    2. Some kind of frame to make the display and board one easy to handle unit

    controller3

    The board supports HDMI, VGA and DVI for signal input so it should work with almost any computer.

    This particular board (M.NT68676.2) is used to power many different panel models. Although the board itself may be the same, it's important to order the board that is specifically tailored to the particular LCD panel you have. The panels seem easy to identify. This is the MacBook LCD panel type (LP133WX1 TL A1):

    model

    That 'TL A1' part of the model number is critical for finding the appropriate controller.

    I also have an old Dell Vostro screen that uses the exact same driver board, but the cables are different. Also, it may be the case that the boards are flashed with the appropriate firmware for the particular panel. So I would recommend not to gamble and get the driver board that exactly matches the model number.

    To make everything work, we first connect the high-voltage module to the backlight power cable...

    hv

    ...and we also connect the LCD driver cable:

    driver

    When I connected everything for the first time, the display didn't work at all. It turns out that the board shipped with a second, separate interface cable and I had to swap those cables to make the display work properly.

    Power supply

    According to the sales page, the board requires a 12-volt 2A adapter, but in practice, I could get away with a tiny, much weaker power supply.

    adapter

    I found an old adapter from a Wi-Fi access point (long gone) which is rated for just 0.5A. It powered both the board and the screen perfectly. It worked with both the MacBook display and the Dell display, so it doesn't seem to be a fluke.

    Although I didn't measure actual power consumption, we know that it can't be more than about 6 watts, because that's the maximum the power adapter can deliver.

    Laptop displays need to be power efficient and that may also make them interesting for low-power or battery-powered projects.

    We've only just begun

    The display works, but that was the easy part.

    display

    The hardest part of this process (not pictured) was switching the on-screen display from Chinese (a language I don't speak) to English. But there is more work ahead.

    At this point we end up with just a fragile LCD panel connected to a driver board through a bunch of wires. The whole setup is an impractical mess. There are at least two things left to do:

    1. Mount the driver board, OSD controller and high-voltage unit to the back of the LCD panel
    2. Make some kind of stand

    For the old Dell display, I used a bit of wood and hot glue to make a wooden scaffold on which I could mount the driver board with a few screws. It won't win any prizes for sure, but it's just an example of what needs to be done to make the display (more) manageable.

    amateur

    It still doesn't have a stand but that's for another day. I can imagine that if you own a 3D printer, you can make a nice case with a stand, although that will increase the cost of the project.

    Evaluation

    What I like most about this kind of project is the fact that for very little money, you can recycle a perfectly fine and usable display that will probably last for another five to ten years. The project takes very little effort and it is also not difficult to do.

    You can augment existing hobby projects with a screen and due to the relatively low power consumption, it may even be suitable for battery-powered projects.

    And with a bit of work you can make a nice (secondary) monitor out of them. Finally you have an excuse to dust off one of your unused Raspberry Pis that you had to have but didn't have any actual use for.

    The thrift-store is cheaper

    If your goal is just to get a cheap LCD display, it may be cheaper to go to the nearest thrift-store and buy some old second-hand display for $10. But that may have some drawbacks:

    • It will be much larger than the laptop screen
    • It is powered by 110/220 volts, so it is less suitable for a battery-powered setup
    • Overall power consumption will be higher

    So it all depends on your particular needs.

    Closing words

    If you also repurposed a laptop monitor for a project or just as a (secondary) screen, feel free to share your work in the comments.


    1. I posted the article on slashdigit to Hacker News and it got quite a bit of interest.

    Tagged as : Hardware
  3. Understanding the Ubuntu 20.04 LTS Server Autoinstaller

    Thu 11 February 2021

    Introduction

    Ubuntu Server version 18.04 LTS uses the debian-installer (d-i) for the installation process. This includes support for 'preseeding' to create unattended (automated) installations of Ubuntu.

    d-i

    the debian installer

    With the introduction of Ubuntu Server 20.04 'Focal Fossa' LTS, back in April 2020, Canonical decided that the new 'subiquity server installer' was ready to take its place.

    After the new installer gained support for unattended installation, it was considered ready for release. The unattended installer feature is called 'Autoinstallation'.

    I mostly run Ubuntu 18.04 LTS installations but I decided in February 2021 that I should get more acquainted with 20.04 LTS, especially when I discovered that preseeding would no longer work.

    In this article I assume that the reader is familiar with PXE-based unattended installations.

    Why this new installer?

    Canonical's desire to unify the codebase for Ubuntu Desktop and Server installations seems to be the main driver for this change.

    From my personal perspective, there aren't any new features that benefit my use-cases, but that could be different for others. It's not a ding on the new Autoinstaller, it's just how I look at it.

    There is one conceptual difference between the new installer and preseeding. A preseed file must answer all questions that the installer needs answered. It will switch to interactive mode if a question is not answered, breaking the unattended installation process. From my experience, there is a bit of trial-and-error getting the preseed configuration right.

    The new Subiquity installer uses defaults for all installation steps. This means that you can fully automate the installation process with just a few lines of YAML. You don't need to provide an answer for each step.

    The new installer has other features, such as the ability to SSH into an installer session. It works by showing an ad-hoc random password on the screen / console, which you can use to log on remotely over SSH. I have not used it yet, as I never found it necessary.

    Documentation is a bit fragmented

    As I was trying to learn more about the new Autoinstaller, I noticed that there isn't a central location with all relevant (links to) documentation. It took a bit of searching and following links to collect a set of useful information sources, which I share below.

    • Reference Manual: reference for each particular option of the user-data YAML
    • Introduction: an overview of the new installer with examples
    • Autoinstall quick start: example of booting a VM using the installer using KVM
    • Netbooting the installer: a brief instruction on how to setup PXE + TFTP with dnsmasq in order to PXE boot the new installer
    • Call for testing: topic in which people give feedback on their experience with the installer (still active as of February 2021), with a lot of responses
    • Stack Exchange post: detailed tutorial with some experiences
    • Medium article: contains an example and some experiences
    • GitHub examples: repo with about twenty examples of more complex configurations

    The reference documentation only covers some default use-cases for the unattended installation process. You won't be able to build more complex configurations, such as RAID-based installations, using this reference alone.

    Under-the-hood, the installer uses curtin. The linked documentation can help you further build complex installations, such as those which use RAID.

    I think the curtin syntax is a bit tedious, but fortunately it is probably not required to learn curtin and piece together more complex configurations by hand. There is a nice quality-of-life feature that takes care of this.

    More on that later.

    How does the new installer work?

    With a regular PXE-based installation, we use the 'netboot' installer, which consists of a Linux kernel and an initrd image (containing the actual installer).

    This package is about 64 megabytes for Ubuntu 18.04 LTS and it is all you need, assuming that you have already set up a DHCP + TFTP + HTTP environment for a PXE-based installation.

    The new Subiquity installer for Ubuntu 20.04 LTS deprecates this 'netboot' installer. It is not provided anymore. Instead, you have to boot a 'live installer' ISO file which is about 1.1 GB in size.

    The process looks like this:

    1. Download the Live installer ISO
    2. Mount the ISO to acquire the 'vmlinuz' and 'initrd' files for the TFTP root (see the sketch below)
    3. Update your PXE menu (if any) with a stanza like this:
    LABEL focal2004-preseed
        MENU LABEL Focal 20.04 LTS x64 Manual Install
        KERNEL linux/ubuntu/focal/vmlinuz
        INITRD linux/ubuntu/focal/initrd
        APPEND root=/dev/ram0 ramdisk_size=1500000 ip=dhcp url=http://10.10.11.1/ubuntu-20.04.1-live-server-amd64.iso
    

    This process is documented here with detailed how-to steps and commands.
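
    For step 2 of the list above, the extraction boils down to something like this (a sketch; the mount point, TFTP root and web root paths are assumptions, adjust them to your own environment):

    # mount the live ISO and copy the kernel and initrd into the TFTP root
    mount -o loop ubuntu-20.04.1-live-server-amd64.iso /mnt
    cp /mnt/casper/vmlinuz /srv/tftp/linux/ubuntu/focal/vmlinuz
    cp /mnt/casper/initrd /srv/tftp/linux/ubuntu/focal/initrd
    umount /mnt

    # the ISO itself must also be served over HTTP, matching the url= parameter
    cp ubuntu-20.04.1-live-server-amd64.iso /var/www/html/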

    We have not yet discussed the actual automation part, but first we must address a caveat.

    The new installer requires 3GB of RAM when PXE-booting

    Although it is not explicitly documented1, the new mechanism of PXE-booting the Ubuntu installer using the live ISO requires a minimum of 3072 MB of memory. And I assure you, 3000 MB is not enough.

    It seems that the ISO file is copied into memory over the network and extracted onto the RAM disk. With a RAM disk of 1500 MB and an ISO file of 1100 MB, we are left with maybe 472 MB of RAM for the running kernel and initrd image.

    To put this into perspective: I could perform an unattended install of Ubuntu 18.04 LTS with only 512 MB of RAM2.

    Due to this 'new' installation process, Ubuntu Server 20.04 has much more demanding minimum system requirements than Windows Server 2019, which is fine with 'only' 512 MB of RAM, even during installation. I have to admit I find this observation a bit funny.

    It seems that this 3 GB memory requirement for the installation process is purely and solely because of the new installation process. Obviously Ubuntu 20.04 can run in a smaller memory footprint once installed.

    Under the hood, a tool called 'casper' is used to bootstrap the installation process, and that tool only supports a local file system (in this case on a RAM disk). On paper, casper can also work with NFS or CIFS, but that is neither supported nor tested. From what I read, some people tried it but it didn't work out.

    As I understand it, the current status is that you can't install Ubuntu Server 20.04 LTS on any hardware with less than 3 GB of memory using PXE boot. This probably affects older and less potent hardware, but I can remember a time when running on such hardware was actually part of the point of running Linux.

    It just feels wrong conceptually, that a PXE-based server installer requires 3GB of memory.


    Important

    The 3GB memory requirement is only valid for PXE-based installations. If you boot from ISO / USB stick you can install Ubuntu Server on a system with less memory. I've verified this with a system with only 1 GB of memory.


    The Autoinstall configuration

    Now let's go back to the actual automation part of the Autoinstaller. If we want to automate our installation, the PXE menu item should be expanded like this:

    APPEND root=/dev/ram0 ramdisk_size=1500000 ip=dhcp url=http://10.10.11.1/ubuntu-20.04.1-live-server-amd64.iso autoinstall ds=nocloud-net;s=http://10.10.11.1/preseed/cloud-init/
    

    In this case, the cloud-init folder - as exposed through an HTTP-server - must contain two files:

    • meta-data
    • user-data

    The meta-data file contains just one line:

    instance-id: focal-autoinstall
    

    The user-data file is the equivalent of a preseed file, but it is YAML-based instead of just regular plain text.

    Minimum working configuration

    According to the documentation, this is a minimum viable configuration for the user-data file:

    #cloud-config
    autoinstall:
      version: 1
      identity:
        hostname: ubuntu-server
        password: "$6$exDY1mhS4KUYCE/2$zmn9ToZwTKLhCw.b4/b.ZRTIZM30JZ4QrOQ2aOXJ8yk96xpcCof0kxKwuX1kqLG/ygbJ1f8wxED22bTL4F46P0"
        username: ubuntu
    

    This works fine and performs a basic install + apt upgrade of the new system with a disk configuration based on an LVM layout.

    My preferred minimum configuration:

    Personally, I like to keep the unattended installation as simple as possible. I use Ansible to do the actual system configuration, so the unattended installation process only has to setup a minimum viable configuration.

    I like to configure the following parameters:

    • Inject an SSH public key into the authorized_keys file for Ansible
    • Configure the Apt settings to specify which repository to use during installation (I run a local debian/ubuntu mirror)
    • Update to the latest packages during installation

    I'll give examples of the required YAML to achieve this with the new installer.

    Injecting a public SSH key for the default user:

    ssh:
        authorized-keys: |
           ssh-rsa <PUB KEY>
        install-server: true
        allow-pw: no
    

    Notice that we also disable password authentication for SSH access.

    Configure APT during installation

    I used this configuration to specify a particular mirror for both the installation process and for the system itself post-installation.

    Mirror:
      mirror: "http://mirror.mynetwork.loc"
    apt:
      preserve_sources_list: false
      primary:
        - arches: [amd64]
          uri: "http://mirror.mynetwork.loc/ubuntu"
    

    Performing an apt upgrade

    By default, the installer seems to install security updates, but it doesn't install the latest versions of software. This is a deviation from the d-i installer, which always ends up with a fully up-to-date system when done.

    The same end-result can be accomplished by running an apt update and apt upgrade at the end of the installation process.

    late-commands:
    - curtin in-target --target=/target -- apt update           
    - curtin in-target --target=/target -- apt upgrade -y
    

    So all in all, this is not a big deal.

    Network configuration

    The network section can be configured using regular Netplan syntax. An example:

    network:
      version: 2
      renderer: networkd
      ethernets:
        enp0s3:
          dhcp4: no
          addresses:
            - 10.10.50.200/24
          gateway4: 10.10.50.1
          nameservers:
            search:
              - mynetwork.loc
            addresses:
              - 10.10.50.53
              - 10.10.51.53
    

    Storage configuration

    The installer only supports a 'direct' or 'lvm' layout. It also selects the largest drive in the system as the boot drive for the installation.

    If you want to set up anything more complex, such as RAID or a specific partition layout, you need to use the curtin syntax.

    It is not immediately clear how to setup a RAID configuration based on the available documentation.

    Fortunately, the new installer supports creating a RAID configuration or custom partition layout if you perform a manual install.

    It turns out that when the manual installation is done you can find the cloud-init user-data YAML for this particular configuration in the following file:

        /var/log/installer/autoinstall-user-data
    

    I think this is extremely convenient.
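
    Reading that file on the freshly installed system may require root privileges; a small sketch (the output filename is arbitrary):

    sudo cat /var/log/installer/autoinstall-user-data > my-user-data.yaml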

    So to build a proper RAID1-based installation, I followed these instructions.

    So what does the YAML for this RAID setup look like?

    This is the storage section of my user-data file (brace yourself):

    storage:
        config:
        - {ptable: gpt, serial: VBOX_HARDDISK_VB50546281-4e4a6c24, path: /dev/sda, preserve: false,
          name: '', grub_device: true, type: disk, id: disk-sda}
        - {ptable: gpt, serial: VBOX_HARDDISK_VB84e5a275-89a2a956, path: /dev/sdb, preserve: false,
          name: '', grub_device: true, type: disk, id: disk-sdb}
        - {device: disk-sda, size: 1048576, flag: bios_grub, number: 1, preserve: false,
          grub_device: false, type: partition, id: partition-0}
        - {device: disk-sdb, size: 1048576, flag: bios_grub, number: 1, preserve: false,
          grub_device: false, type: partition, id: partition-1}
        - {device: disk-sda, size: 524288000, wipe: superblock, flag: '', number: 2, preserve: false,
          grub_device: false, type: partition, id: partition-2}
        - {device: disk-sdb, size: 524288000, wipe: superblock, flag: '', number: 2, preserve: false,
          grub_device: false, type: partition, id: partition-3}
        - {device: disk-sda, size: 1073741824, wipe: superblock, flag: '', number: 3,
          preserve: false, grub_device: false, type: partition, id: partition-4}
        - {device: disk-sdb, size: 1073741824, wipe: superblock, flag: '', number: 3,
          preserve: false, grub_device: false, type: partition, id: partition-5}
        - {device: disk-sda, size: 9136242688, wipe: superblock, flag: '', number: 4,
          preserve: false, grub_device: false, type: partition, id: partition-6}
        - {device: disk-sdb, size: 9136242688, wipe: superblock, flag: '', number: 4,
          preserve: false, grub_device: false, type: partition, id: partition-7}
        - name: md0
          raidlevel: raid1
          devices: [partition-2, partition-3]
          spare_devices: []
          preserve: false
          type: raid
          id: raid-0
        - name: md1
          raidlevel: raid1
          devices: [partition-4, partition-5]
          spare_devices: []
          preserve: false
          type: raid
          id: raid-1
        - name: md2
          raidlevel: raid1
          devices: [partition-6, partition-7]
          spare_devices: []
          preserve: false
          type: raid
          id: raid-2
        - {fstype: ext4, volume: raid-0, preserve: false, type: format, id: format-0}
        - {fstype: swap, volume: raid-1, preserve: false, type: format, id: format-1}
        - {device: format-1, path: '', type: mount, id: mount-1}
        - {fstype: ext4, volume: raid-2, preserve: false, type: format, id: format-2}
        - {device: format-2, path: /, type: mount, id: mount-2}
        - {device: format-0, path: /boot, type: mount, id: mount-0}
    

    That's quite a long list of instructions to 'just' set up a RAID1. Although I understand all the steps involved, I would never have come up with this by myself on short notice using just the documentation, so I think that the autoinstall-user-data file is a lifesaver.

    After the manual installation in which I created the RAID1 mirror, I copied the configuration above into my own custom user-data YAML. Then I performed an unattended installation, and it worked on the first try.

    So if you want to add LVM into the mix or create some other complex storage configuration, the easiest way to automate it is to first do a manual install and then copy the relevant storage section from the autoinstall-user-data file to your custom user-data file.

    Example user-data file for download

    I've published a working user-data file that creates a RAID1 here.

    You can try it out by creating a virtual machine with two (virtual) hard drives. I'm assuming that you have a PXE boot environment set up.

    Obviously, you'll have to change the network settings for it to work.
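
    For example, with libvirt/virt-install, such a test VM could look roughly like this (a sketch; the bridge name is an assumption and must be attached to the network that provides your PXE environment):

    # 4 GB of RAM to satisfy the PXE-boot memory requirement discussed above,
    # and two 10 GB disks for the RAID1 layout
    virt-install \
      --name focal-autoinstall-test \
      --memory 4096 \
      --vcpus 2 \
      --disk size=10 \
      --disk size=10 \
      --pxe \
      --os-variant ubuntu20.04 \
      --network bridge=br0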

    Generating user passwords

    If you do want to log on to the console with the default user, you must generate a salted password hash and copy/paste it into the user-data file.

    I use the 'mkpasswd' command for this, like so:

    mkpasswd -m sha-512
    

    The mkpasswd utility is part of the 'whois' package.
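
    If the command is missing, installing that package provides it (on Debian/Ubuntu):

    sudo apt install whois
    mkpasswd -m sha-512

    The resulting hash goes into the password field of the identity section shown earlier.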

    Closing words

    For those who are still on Ubuntu Server 18.04 LTS, there is no need for immediate action as this version is supported until 2023. Only support for new hardware has come to an end for the 18.04 LTS release.

    At some point, some time and effort will be required to migrate towards the new Autoinstall solution. Maybe this blogpost helps you with this transition.

    It took me a few evenings to master the new user-data solution, but the fact that a manual installation basically results in a perfect pre-baked user-data file3 is a tremendous help.

    I think I just miss the point of all this effort to revamp the installer, but maybe I'm simply not affected by the limitations of the older solution. If you have any thoughts on this, feel free to let me know in the comments.


    1. I had to discover this information on StackExchange, somewhere down in this very good article, after running into problems. 

    2. I tried 384 MB and it didn't finish, just got stuck. 

    3. /var/log/installer/autoinstall-user-data 

    Tagged as : Linux
