1. Understanding Ceph: Open-Source Scalable Storage

    August 19, 2018

    Introduction

    In this blog post I will try to explain why I believe Ceph is such an interesting storage solution. After you have finished reading this blog post, you should have a good high-level overview of Ceph.

    I've written this blog post purely because I'm a storage enthusiast and I find Ceph an interesting technology.

    What is Ceph?

    Ceph is a software-defined storage solution that can scale both in performance and capacity. Ceph is used to build multi-petabyte storage clusters.

    For example, CERN has built a 65 petabyte Ceph storage cluster. I hope that number grabs your attention. I think it's amazing.

    The basic building block of a Ceph storage cluster is the storage node. These storage nodes are just commodity (COTS) servers containing a lot of hard drives and/or flash storage.

    storage chassis

    Example of a storage node

    Ceph is meant to scale. And you scale by adding additional storage nodes. You will need multiple servers to satisfy your capacity, performance and resiliency requirements. And as you expand the cluster with extra storage nodes, capacity, performance and resiliency (if needed) will all increase at the same time.

    It's that simple.

    You don't need to start with petabytes of storage. You can actually start very small, with just a few storage nodes and expand as your needs increase.

    I want to touch upon a technical detail because it illustrates the mindset surrounding Ceph. With Ceph, you don't even need a RAID controller anymore; a 'dumb' HBA is sufficient. This is possible because Ceph manages redundancy in software. A Ceph storage node at its core is more like a JBOD. The hardware is simple and 'dumb'; all the intelligence resides in software.

    This means that the risk of hardware vendor lock-in is largely mitigated. You are not tied to any particular proprietary hardware.

    What makes Ceph special?

    At the heart of the Ceph storage cluster is the CRUSH algorithm, developed by Sage Weil, the co-creator of Ceph.

    The CRUSH algorithm allows storage clients to calculate which storage node needs to be contacted for retrieving or storing data. The storage client can - on its own - determine where to store data or where to retrieve it from.

    So to reiterate: given a particular state of the storage cluster, the client can calculate which storage node to contact for storage or retrieval of data.
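
    If you have access to a running cluster, you can see this calculation in action with the Ceph command line tool. A minimal sketch; the pool and object names are just placeholders:

    ceph osd map mypool myobject

    This prints the placement group the object maps to and the set of OSDs (and thus storage nodes) that CRUSH selects for it.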

    Why is this so special?

    Because there is no centralised 'registry' that keeps track of the location of data on the cluster (metadata). Such a centralised registry can become:

    • a performance bottleneck, preventing further expansion
    • a single-point-of-failure

    Ceph does away with this concept of a centralised registry for data storage and retrieval. This is why Ceph can scale in capacity and performance while assuring availability.

    At the core of the CRUSH algorithm is the CRUSH map. That map contains information about the storage nodes in the cluster. That map is the basis for the calculations the storage client needs to perform in order to decide which storage node to contact.

    This CRUSH map is distributed across the cluster by a special type of server: the 'monitor' node. Regardless of the size of the Ceph storage cluster, you typically need just three (3) monitor nodes for the whole cluster. Those nodes are contacted by both the storage nodes and the storage clients.

    cephoverview

    So Ceph does have some kind of centralised 'registry' but it serves a totally different purpose. It only keeps track of the state of the cluster, a task that is way easier to scale than running a 'registry' for data storage/retrieval itself.

    It's important to keep in mind that the Ceph monitor node does not store or process any metadata. It only keeps track of the CRUSH map for both clients and individual storage nodes. Data always flows directly from the storage node towards the client and vice versa.
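
    If you want to poke at this yourself, the commands below (a hedged sketch, run on a node with admin credentials) show the monitor status and let you decompile the current CRUSH map into readable text:

    ceph mon stat                                         # list the monitors and the current quorum
    ceph osd getcrushmap -o /tmp/crushmap.bin             # fetch the compiled CRUSH map
    crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt   # decompile it to plain text
    less /tmp/crushmap.txt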

    Ceph Scalability

    A storage client will contact the appropriate storage node directly to store or retrieve data. There are no components in between, except for the network, which you will need to size accordingly1.

    Because there are no intermediate components or proxies that could potentially create a bottleneck, a Ceph cluster can really scale horizontally in both capacity and performance.

    And while scaling storage and performance, data is protected by redundancy.

    Ceph redundancy

    Replication

    In a nutshell, Ceph does 'network' RAID-1 (replication) or 'network' RAID-5/6 (erasure encoding). What do I mean by this? Imagine a RAID array but now also imagine that instead of the array consisting of hard drives, it consists of entire servers.

    That's what Ceph does: it distributes the data across multiple storage nodes and assures that a copy of a piece of data is never stored on the same storage node as the original.

    This is what happens if a client writes two blocks of data:

    replication

    Notice how a copy of the data block is always replicated to other hardware.

    Ceph goes beyond the capabilities of regular RAID. You can configure more than one replica. You are not confined to RAID-1 with just one backup copy of your data2. The only downside of storing more replicas is the storage cost.
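
    To make this concrete, here is a hedged sketch of how you would create a replicated pool and set the replica count; the pool name and placement-group count are arbitrary examples:

    ceph osd pool create mypool 128       # create a pool with 128 placement groups
    ceph osd pool set mypool size 3       # keep 3 copies of every object
    ceph osd pool set mypool min_size 2   # keep serving I/O as long as 2 copies are available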

    You may decide that data availability is so important that you have to sacrifice space and absorb the cost. At scale, a simple RAID-1 replication scheme may no longer sufficiently cover the risk and impact of hardware failure. What if two storage nodes in the cluster die?

    This consideration has nothing to do with Ceph in particular; it's a reality you face when you operate at scale.

    RAID-1, or the Ceph equivalent 'replication', offers the best overall performance, but as with 'regular' RAID-1, it is not very storage space efficient; especially if you need more than one extra replica to achieve the required level of redundancy.

    This is why we used RAID-5 and RAID-6 in the past as an alternative to RAID-1 or RAID-10. Parity RAID assures redundancy but with much less storage overhead at the cost of storage performance (mostly write performance). Ceph uses 'erasure encoding' to achieve a similar result.

    Erasure Encoding

    With Ceph you are not confined to the limits of RAID-5/RAID-6 with just one or two 'redundant disks' (in Ceph's case storage nodes). Ceph allows you to use Erasure Encoding, a technique that lets you tell Ceph this:

    "I want you to chop up my data in 8 data segments and 4 parity segments"

    erasureencoding

    These segments are then scattered across the storage nodes, and this allows you to lose up to four entire hosts before you hit trouble. Only 33% of your raw storage is spent on redundancy (4 of the 12 segments are parity), instead of the 50% (or even more) you may face using replication, depending on how many copies you want.

    This example does assume that you have at least 8 + 4 = 12 storage nodes. But any scheme will do: you could use 6 data segments + 2 parity segments (similar to RAID-6) with only 8 hosts. I think you get the idea.
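
    As an illustration, this is roughly how you would define such an 8+4 scheme and create a pool that uses it; the profile and pool names are placeholders:

    ceph osd erasure-code-profile set ec-8-4 k=8 m=4 crush-failure-domain=host
    ceph osd pool create ecpool 256 256 erasure ec-8-4

    The crush-failure-domain=host part tells Ceph that no two segments of the same object may end up on the same storage node.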

    Ceph failure domains

    Ceph is datacenter-aware. What do I mean by that? Well, the CRUSH map can represent your physical datacenter topology, consisting of racks, rows, rooms, floors, datacenters and so on. You can fully customise your topology.

    This allows you to create very clear data storage policies that Ceph will use to assure the cluster can tolerate failures across certain boundaries.

    An example of a topology:

    topology

    If you want, you could lose a whole rack, or even a whole row of racks, and the cluster would still be fully operational, although with reduced performance and capacity.

    That much redundancy may cost so much storage that you may not want to employ it for all of your data. That's no problem. You can create multiple storage pools that each have their own protection level and thus cost.
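
    A hedged sketch of how such a topology is expressed: you add buckets (racks, rows and so on) to the CRUSH map, place hosts in them, and create a rule that spreads replicas across the chosen failure domain. The bucket, host and rule names below are examples:

    ceph osd crush add-bucket rack1 rack                              # define a rack bucket
    ceph osd crush move rack1 root=default                            # place the rack in the tree
    ceph osd crush move node1 rack=rack1                              # place a storage node in the rack
    ceph osd crush rule create-replicated rack-spread default rack    # spread replicas across racks
    ceph osd pool set mypool crush_rule rack-spread                   # apply the rule to a pool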

    How do you use Ceph?

    Ceph at its core is an object storage solution. Librados is the library you can include within your software project to access Ceph storage natively. There are librados bindings for the following programming languages:

    • C(++)
    • Java
    • Python
    • PHP
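
    Librados itself is a programming library, but Ceph also ships the rados command line tool, which talks to the cluster through the same native object interface. A minimal hedged example; the pool and object names are placeholders:

    rados -p mypool put hello ./hello.txt   # store a local file as an object
    rados -p mypool get hello ./copy.txt    # read the object back into a file
    rados -p mypool ls                      # list all objects in the pool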

    Many people are looking for more traditional storage solutions, like block storage for storing virtual machines, a POSIX compliant shared file system or S3/OpenStack Swift compatible object storage.

    Ceph provides all those features in addition to its native object storage format.

    I myself am mostly interested in block storage (RADOS Block Device, or RBD) for the purpose of storing virtual machines. As Linux has native support for RBD, it makes total sense to use Ceph as a storage backend for OpenStack or plain KVM.
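
    For the curious, this is roughly what using RBD from a plain Linux host looks like; the pool and image names are placeholders and this assumes the kernel RBD client is available:

    rbd create mypool/vm-disk-1 --size 10240   # create a 10 GiB block device image
    rbd map mypool/vm-disk-1                   # map it, exposing a device such as /dev/rbd0
    mkfs.ext4 /dev/rbd0                        # then use it like any other block device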

    With very recent versions of Ceph, native support for iSCSI has been added to expose block storage to non-native clients like VMware or Windows. For the record, I have no personal experience with this feature (yet).

    The Object Storage Daemon (OSD)

    In this section we zoom in a little bit more on the technical details of Ceph.

    If you read about Ceph, you read a lot about the OSD, or object storage daemon. This is a service (daemon) that runs on the storage node. The OSD is the actual workhorse of Ceph; it serves data from the hard drive, or ingests it and stores it on the drive. The OSD also assures storage redundancy by replicating data to other OSDs based on the CRUSH map.

    To be precise: for every hard drive or solid state drive in the storage node, an OSD will be active. Does your storage node have 24 hard drives? Then it runs 24 OSDs.

    When a drive fails, the OSD will go down too, and the monitor nodes will distribute an updated CRUSH map so the clients are aware and know where to get the data. The OSDs also respond to this update: because redundancy is lost, they may start to replicate non-redundant data to make it redundant again (across fewer nodes).

    When the drive is replaced, the cluster will 'self-heal'. This means that the new drive will be filled with data once again to make sure data is spread evenly across all drives within the cluster.

    So maybe it's interesting to realise that storage clients effectively talk directly to the OSDs, which in turn talk to the individual hard drives. There aren't many components between the client and the data itself.
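
    A few commands to see this in practice (a hedged sketch): 'ceph osd tree' lists every OSD grouped per storage node according to the CRUSH map, and 'ceph -w' lets you follow the cluster while it rebalances after a drive failure or replacement:

    ceph -s          # overall cluster health and capacity
    ceph osd tree    # all OSDs, grouped per storage node
    ceph -w          # follow cluster events, e.g. recovery after a failed drive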

    cephdiagram01

    Closing words

    I hope that this blog post has helped you understand how Ceph works and why it is so interesting. If you have any questions or feedback please feel free to comment or email me.


    1. If you have a ton of high-volume sequential data storage traffic, you should realise that a single host with a ton of drives can easily saturate 10Gbit or theoretically even 40Gbit. I'm assuming 150 MB/s per hard drive. With 36 hard drives you would face 5.4 GB/s. Even if you would only run at half that speed, you would need to bond multiple 10Gbit interfaces to sustain this load. Imagine the requirements for your core network. But it really depends on your workload. You will never reach this kind of throughput with a ton of random I/O unless you are using SSDs, for instance. 

    2. Please note that in production setups, it's the default to have a total of 3 instances of a data block. So that means 'the original' plus two extra copies. See also this link. Thanks to sep76 from Reddit for pointing out that the default is 3 instances of your data. 

    Tagged as: Ceph
  2. Setup a VPN on Your iPhone With OpenVPN and Linux

    June 18, 2018

    [Update 2018] This article has been substantially updated since it was published in 2013.

    Introduction

    In this article, I will show you how to set up a Linux-based OpenVPN server. Once this server is up and running, I'll show you how to set up your iOS devices, such as your iPhone or iPad, so that they can connect to your new VPN server.

    The goal of this effort is to tunnel all internet traffic through your VPN connection so that no matter where you are, nobody can monitor which sites you visit and what you do. This is ideal if you have to access the internet through untrusted networks like public Wi-Fi.

    Some typical scenarios would be:

    • you run an OpenVPN service on your Linux-based home router directly
    • you run an OpenVPN service on a device behind your home router using port forwarding (like a Raspberry Pi)
    • you run an OpenVPN service on a VPS hosted by one of many cloud service providers

    Your iOS devices will be running OpenVPN Connect, a free application found in the App store.

    screenshot

    A note on other platforms: Although this tutorial is focused on iOS devices, your new OpenVPN-based VPN server will support any client OS, be it Windows, macOS, Android or Linux. Configuration of these other clients is out-of-scope for this article.

    This tutorial is based on OpenVPN, an open-source product. The company behind OpenVPN also offers VPN services for a price per month. If you find the effort of setting up your own server too much of a hassle, you could look into their service. Please note that I have never used this service and cannot vouch for it.

    This is a brief overview of all the steps you will need to take in order to have a fully functional setup, including configuration of the clients:

    1. Install a Linux server (out-of-scope)
    2. Install the OpenVPN software
    3. Setup the Certificate Authority
    4. Generate the server certificate
    5. Configure the OpenVPN server configuration
    6. Configure the firewall on your Linux server
    7. Generate certificates for every client (iPhone, iPad, and so on)
    8. Copy the client configuration to your devices
    9. Test your clients

    How It Works

    OpenVPN is an SSL-based VPN solution. SSL-based VPNs are very reliable because, if you set them up properly, you will never be blocked by any firewall as long as TCP port 443 is accessible. By default, OpenVPN uses UDP as transport on port 1194, but you can switch to TCP port 443 to increase the chance that your traffic will not be blocked, at the cost of a little more bandwidth usage.

    Authentication

    Authentication is based on public/private key cryptography. The OpenVPN server is similar to an HTTPS server. The biggest difference is that your device doesn't use a username/password combination for authentication, but a certificate. This certificate is stored within the client configuration file.

    So before you can configure and start your OpenVPN service, you need to set up a Certificate Authority (CA). With the CA you first create the server certificate for your OpenVPN server and, after that's done, generate all client certificates.

    OpenVPN installation

    OpenVPN is available on most common Linux distros by default. Running apt-get install openvpn on any Debian or Ubuntu version is all you need to install OpenVPN.

    Alternatively, you could take a look at an OpenVPN install script (I have never tried it out myself). This script seems to automate a lot of steps, like firewall configuration, certificate generation, and so on.

    Tip

    It's out-of-scope for this tutorial, but you should make sure that you keep your OpenVPN software up-to-date, in case security vulnerabilities are discovered in OpenVPN in the future.

    Security

    I'm creating this tutorial on an older system, with less secure default configuration settings for both the Certificate Authority and the OpenVPN server itself. The settings I use in this tutorial are based on the steps in this blog.

    Notable improvements:

    • AES-256 for encryption
    • 2048-bit keys instead of 1024-bit keys
    • SHA256 instead of SHA1/MD5

    Performance

    I did some performance tests and got around 40-50 Mbit/s per iOS client. I believe that the bottleneck lies with my old HP Microserver N40L with its relatively weak CPU.

    Traffic Shaping

    If you want to limit how much bandwidth a client is allowed to use, I recommend this tutorial. I have tried it out and it works perfectly.

    Creating a Certificate Authority

    For Ubuntu: install the package "easy-rsa" and use the 'make-cadir' command instead of the setup instructions below.
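
    That Ubuntu variant would look roughly like this (a hedged sketch; the target directory is simply the one I use further on):

    apt-get install easy-rsa
    make-cadir /etc/openvpn/easy-rsa
    cd /etc/openvpn/easy-rsa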

    I assume that you will set up your OpenVPN configuration in /etc/openvpn. Before you can set up the server configuration, you need to create a certificate authority. I used the folder /etc/openvpn/easy-rsa as the location for my CA.

    mkdir /etc/openvpn/easy-rsa
    

    We start with copying all these files to this new directory:

    cp -R /usr/share/doc/openvpn/examples/easy-rsa/2.0* /etc/openvpn/easy-rsa
    

    Please note that depending on your Linux flavour, these files may be found at some other path.

    Next, we cd into the destination directory.

    cd /etc/openvpn/easy-rsa
    

    Now, open the 'vars' file with your favorite text editor. The following instructions are straight from the OpenVPN howto.

    You should change all the values to ones that apply to you (obviously).

    export KEY_COUNTRY="US"
    export KEY_PROVINCE="California"
    export KEY_CITY="San Francisco"
    export KEY_ORG="My Company"
    export KEY_EMAIL="my@mail.com"
    export KEY_CN=server
    export KEY_NAME=server
    export KEY_OU=home
    

    Change the KEY_SIZE parameter:

    export KEY_SIZE=2048
    

    How long would you like your certificates to be valid (10 years?)

    export CA_EXPIRE=3650
    export KEY_EXPIRE=3650
    

    Then I had to copy openssl-1.0.0.cnf to openssl.cnf because the 'vars' script complained that it couldn't find the latter file.

    cp openssl-1.0.0.cnf openssl.cnf
    

    Note that I went through these steps on an older Linux installation. I had to edit the file /etc/openvpn/easy-rsa/pkitool and change all occurrences of 'sha1' to 'sha256'.

    Now we 'source' vars and run three additional commands that clean the key directory, build the certificate authority and generate the Diffie-Hellman parameters. Notice the dot before ./vars.

    . ./vars
    ./clean-all
    ./build-ca
    ./build-dh
    

    You will have to confirm the values or change them if necessary.

    Now we have a certificate authority and we can create new certificates that will be signed by this authority.

    WARNING: be extremely careful with all key files, they should be kept private.

    I would recommend performing these commands:

    chown -R root:root /etc/openvpn 
    chmod -R 700 /etc/openvpn
    

    By default, OpenVPN runs as root. With these commands, only the root user will be able to access the keys. If you don't run OpenVPN as root, you must select the appropriate user for the first command. See also this article.
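
    If you prefer not to run OpenVPN as root after startup, these standard directives (not used in the example configuration later in this article, so treat this as an optional variation) drop privileges once the tunnel is up:

    user nobody
    group nogroup
    persist-key      # keep keys in memory, since the unprivileged user cannot re-read them
    persist-tun      # keep the tun device open across tunnel restarts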

    Creating the Server Certificate

    We create the server certificate:

    ./build-key-server server
    

    It's up to you to come up with an alternative for 'server'. This is the file name under which the key files and certificates are stored.

    All files that are generated can be found in the '/etc/openvpn/easy-rsa/keys' directory. This is just a flat folder with both the server and client keys.

    Creating the optional TLS-AUTH Certificate

    This step is optional but it doesn't take much effort and it seems to add an additional security layer at no significant cost. In this step we create an additional secret key that is shared with both the server and the clients.

    The following steps are based on this article (use of -tls-auth).

    cd /etc/openvpn/easy-rsa/keys
    openvpn --genkey --secret ta.key
    

    When we are going to create the server configuration, we will reference this key file.

    Creating the Client Certificate

    Now that we have a server certificate, we are going to create a certificate for our iPhone (or any other iOS device).

    ./build-key iphone
    

    Answer the questions with the defaults. Don't forget to answer 'y' to these two questions:

    Sign the certificate? [y/n]:y
    1 out of 1 certificate requests certified, commit? [y/n]y
    

    So now we have everything in place to start creating an OpenVPN configuration. We must create a configuration for the server and the client. Those configurations are based on the examples that can be found in /usr/share/doc/openvpn/examples/.

    Example Server configuration

    This is my server configuration, which is operational. It is stored in /etc/openvpn/openvpn.conf:

    dev tun2
    tls-server
    cipher AES-256-CBC
    auth SHA256
    remote-cert-tls client
    dh easy-rsa/keys/dh2048.pem
    ca easy-rsa/keys/ca.crt
    cert easy-rsa/keys/server.crt
    key easy-rsa/keys/server.key
    tls-auth easy-rsa/keys/ta.key
    server 10.0.0.0 255.255.255.0
    log /var/log/openvpn.log
    script-security 2
    route-up "/sbin/ifconfig tun2 up"
    port 443
    proto tcp-server
    push "redirect-gateway def1 bypass-dhcp"
    push "dhcp-option DNS 8.8.8.8"
    

    I believe you should be able to use this configuration as-is. Depending on your local IP-addresses within your own network, you may have to change the server section.

    I use TCP port 443 because this destination port is almost never blocked; blocking it would break most internet connectivity. (The downside is that I can no longer host a secure website on this IP-address.)

    The OpenVPN service will provide your client with an IP-address within the address range configured in the 'server' section.

    Change any parameters if required and then start or restart the OpenVPN service:

    /etc/init.d/openvpn restart
    

    Make sure that the server started properly by checking /var/log/openvpn.log.
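
    A few quick checks (a hedged sketch; adjust the interface and port if you changed them in the configuration):

    tail -f /var/log/openvpn.log   # watch the log while the service starts
    ip addr show tun2              # the tunnel interface from the config should exist
    ss -tlnp | grep 443            # OpenVPN should be listening on TCP port 443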

    If you want to use your VPN to browse the internet, you still need to configure a basic firewall setup.

    I'm assuming that you already have some kind of IPtables-based firewall running. Configuring a Linux firewall is out-of-scope for this article. I will only discuss the changes you may need to make for the OpenVPN service to operate properly.

    You will need to accept traffic to TCP port 443 on the interface connected to the internet.

    iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
    

    If your OpenVPN server is behind a router/firewall, you need to configure port-forwarding on that router/firewall. How to do so is out-of-scope for this article, as it is different for different devices.

    Assuming that you will - for example - use the 10.0.0.0/24 network for VPN clients such as your iPhone, you must also create a NAT rule so VPN clients can use the IP-address of the Linux server to access Internet.

    iptables -t nat -A POSTROUTING -s "10.0.0.0/24" -o "eth0" -j MASQUERADE
    

    Please note that you must replace eth0 with the name of the appropriate interface that connects to the internet. Change the IP-address range according to your own situation. It should not conflict with your existing network.

    iptables -A FORWARD -p tcp -s 10.0.0.0/24 -d 0.0.0.0/0 -j ACCEPT
    

    Please note that I haven't tested these rules, as I have a different setup. But this should be sufficient. And make sure that forwarding is enabled like this:

    echo 1 > /proc/sys/net/ipv4/ip_forward
    
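    That echo only lasts until the next reboot. To make forwarding permanent, something like this should work:

    echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
    sysctl -p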

    Example Client configuration

    Most OpenVPN clients can automatically import files with the .ovpn file extension. A typical configuration file is something like 'iphone.ovpn'.

    Warning: the .ovpn files will contain the certificate used by your iPhone/iPad to authenticate against your OpenVPN server. Be very careful where you store this file. Anyone who is able to obtain a copy of this file will be able to connect to your VPN server.

    This is an example configuration file, but we are not going to create it by hand; it's too much work.

    What you will notice from this example is that the .ovpn file contains both the client configuration and all the required certificates:

    1. the CA root certificate (used to validate the server)
    2. the client certificate
    3. the client private key
    4. the TLS-AUTH key (an optional extra security measure)
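
    For reference, here is a rough sketch of what such a generated file looks like; the hostname is a placeholder and the script introduced below produces the real thing, so there is no need to type this yourself:

    client
    dev tun
    proto tcp
    remote vpn.example.com 443
    remote-cert-tls server
    cipher AES-256-CBC
    auth SHA256
    <ca>
    ...
    </ca>
    <cert>
    ...
    </cert>
    <key>
    ...
    </key>
    <tls-auth>
    ...
    </tls-auth>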

    Create a client configuration file (.ovpn) with a script

    You can create your client configuration file manually, but that is a lot of work, because you need to append all the certificates to a single file that also contains the configuration settings.

    So we will use a script to setup the client configuration.

    First we are going to create a folder where our client configuration files will be stored.

    mkdir /etc/openvpn/clientconfig
    chmod 700 /etc/openvpn/clientconfig
    

    Now we will download the script and the accompanying configuration template file. Notice that the links may wrap.

    cd /etc/openvpn
    wget https://raw.githubusercontent.com/louwrentius/openvpntutorial/master/create-client-config.sh
    wget https://raw.githubusercontent.com/louwrentius/openvpntutorial/master/client-config-template
    chmod +x create-client-config.sh
    

    Please note that you first need to create the certificates for your devices before you can generate a configuration file. So please go back to that step if you need to.

    Also take note of the name you have used for your devices. You can always take a look in /etc/openvpn/easy-rsa/keys to see what your devices are called.

    Now, edit this client-config-template and change the appropriate values where required. You probably only need to change the first line:

    remote <your server DNS address or IP address>
    

    Now you are ready to run the script and generate the config file for your device.

    When you run this script, a configuration file is generated and placed into the folder /etc/openvpn/clientconfig.

    The script just puts the client configuration template and all required certificates in one file. This is how you use it:

    ./create-client-config.sh iphone
    

    Some output you will notice when running the script:

    user@server:/etc/openvpn# ./create-client-config.sh iphone
    Client's cert found: /etc/openvpn/easy-rsa/keys/iphone
    Client's Private Key found: /etc/openvpn/easy-rsa/keys/iphone.key
    CA public Key found: /etc/openvpn/easy-rsa/keys/ca.crt
    tls-auth Private Key found: /etc/openvpn/easy-rsa/keys/ta.key
    Done! /etc/openvpn/clientconfig/iphone.ovpn Successfully Created.
    

    You should now find a file called 'iphone.ovpn' in the directory /etc/openvpn/clientconfig.

    We are almost there. We just need to copy this file to your iOS device.

    You have three options:

    1. Use iCloud Drive
    2. Use iTunes
    3. Use email (obviously insecure and not discussed)

    Setting up your iPhone or iPad with iCloud Drive

    1. First install the OpenVPN Connect application if you haven't done so.
    2. Copy the .ovpn file from your OpenVPN server to your iCloud Drive.
    3. Take your device and use the Files app to navigate within your iCloud Drive to the .ovpn file you just copied.
    4. Tap on the file to download and open it.
    5. Now comes the tricky part: press the share symbol

    step 1

    Open the file with the OpenVPN application on your iOS device:

    step 2

    step 3

    When you get the question "OpenVPN would like to Add VPN Configurations", choose 'Allow'.

    Continue with the step 'Test your iOS device'.

    If the OpenVPN Connect client doesn't import the file, remove the application from the device and re-install it. (This is what I had to do on my iPad).

    Setting up your iPhone or iPad with iTunes

    You can skip this step if you used iCloud Drive to copy the .ovpn profile to your device.

    You need to get the following files on your iOS device:

    iphone.ovpn
    

    Copy this file from your OpenVPN server to the computer running iTunes. Then connect your device to iTunes with a cable.

    1. Open iTunes
    2. Select your device at the top right
    3. Go to the Apps tab
    4. Scroll to the file sharing section
    5. Select the OpenVPN application
    6. Add the iphone.ovpn
    7. Sync your device

    Test your iOS device

    Open the OpenVPN client. You will see a notice that a new configuration has been imported and you need to accept this configuration.

    If it doesn't work straight away, monitor /var/log/openvpn.log on the server to watch for any errors.

    Now try to connect and enjoy.

    Conclusion

    You should be able to keep your VPN enabled at all times because battery usage overhead should be minimal. If you are unable to connect to your VPN when you are at home behind your own firewall, you need to check your firewall settings.

    Updated 20130123 with keepalive option. Updated 20130801 with extra server push options for traffic redirection and DNS configuration. Updated 20180618 as a substantial rewrite of the original outdated article.

  3. HP Proliant Microserver Gen10 as Router or NAS

    September 14, 2017

    Introduction

    In the summer of 2017, HP released the Proliant Microserver Gen10. This machine replaces the older Gen8 model.

    gen10

    For hobbyists, the Microserver always has been an interesting device for a custom home NAS build or as a router.

    Let's find out if this is still the case.

    Price

    In The Netherlands, the price of the entry-level model is similar to the Gen8: around €220 including taxes.

    CPU

    The new AMD X3216 processor has slightly better single-threaded performance compared to the older G1610T in the Gen8. Overall, both devices seem to have similar CPU performance.

    The biggest difference is the TDP: 35 Watt for the Celeron vs 15 Watt for the AMD CPU.

    Memory

    By default, it has 8 GB of unbuffered ECC memory; that's 4 GB more than the old model. Only one of the two memory slots is occupied, so you can double that amount just by adding another 8 GB stick. It seems that 32 GB is the maximum.

    Storage

    This machine has retained the four 3.5" drive slots. There are no drive brackets anymore. Before inserting a hard drive, you need to remove a bunch of screws from the front of the chassis and put four of them in the mounting holes of each drive. These screws then guide the drive through grooves into the drive slot. This caddy-less design works perfectly and the drive is mounted rock-solid in its position.

    To pop a drive out, you have to press the appropriate blue lever, which latches on to one of the front screws mounted on your drive and pulls it out of the slot.

    There are two on-board SATA controllers:

    00:11.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 49)
    01:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller (rev 11)
    

    The Marvell controller is connected to the four drive bays. The AMD controller is probably connected to the fifth on-board SATA port.

    As with the Gen8, you need a floppy-power-connector-to-sata-power-connector cable if you want to use a SATA drive with the fifth onboard SATA port.

    Thanks to the internal SATA header and the internal USB 2.0 header, you could decide to run the OS from a separate drive without redundancy and use all four drive bays for storage. As solid state drives tend to be very reliable, you could use a small SSD to keep the cost and power usage down and still retain decent reliability (although not the level of reliability RAID-1 provides).

    Networking

    Just like the Gen8, the Gen10 has two Gigabit network interfaces. The brand and model is: Broadcom NetXtreme BCM5720.

    As tested with iperf3, I get full 1 Gbit network performance. No problems here (tested on CentOS 7).
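
    For anyone who wants to repeat the test, this is the basic iperf3 invocation I would use (a hedged sketch; replace the address with that of your Gen10):

    iperf3 -s                        # on the Gen10, start the server
    iperf3 -c 192.168.1.10 -t 30     # on another machine, run a 30 second test against it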

    PCIe slots

    This model has two half-height PCIe slots (1x and 8x, in a 4x and 8x physical slot), which is an improvement over the single PCIe slot in the Gen8.

    USB

    The USB configuration is similar to the Gen8, with both USB2 and USB3 ports and one internal USB2 header on the motherboard.

    Sidenote: the onboard micro SD card slot as found in the Gen8 is not present in the Gen10.

    Graphics

    The Gen10 also has a built-in GPU, but I have not looked into it as I have no use for it.

    The Gen10 differs in output options compared to the Gen8: it supports one VGA and two DisplayPort connections. Those DisplayPort connectors could make the Gen10 an interesting DIY HTPC build, but I have not looked into it.

    iLO

    The Gen10 has no support for iLO. So no remote management, unless you have an external KVM-over-IP solution.

    This is a downside, but for home users, this is probably not a big deal. My old Microserver N40L didn't have iLO and it never bothered me.

    And most of all: iLO is a small on-board mini-computer that increases idle power consumption. So the lack of iLO support should mean better idle power consumption.

    Boot

    Both Legacy and UEFI boot are supported. I have not tried UEFI booting.

    Booting from the 5th internal SATA header is supported and works fine (unlike on the Gen8).

    For those who care: booting is a lot quicker than on the Gen8, which took ages to get through the BIOS.

    Power Usage

    I have updated this section because the original article contained some incorrect information.

    The Gen10 seems to consume 14 Watt at idle, booted into CentOS 7 without any disk drives attached (I removed all drives after booting). This 14 Watt figure is reported by my external power meter.

    Adding a single old 7200 RPM 1 TB drive brings power usage up to 21 Watt (as expected).

    With four older 7200 RPM drives the entire system uses about 43 Watt according to the external power meter.

    As an experiment, I've put two old 60 GB 2.5" laptop drives in the first two slots, configured as RAID1. Then I added two 1 TB 7200 RPM drives to fill up the remaining slots. This resulted in a power usage of 32 Watt.

    Dimensions and exterior

    Exactly the same as the Gen8, they stack perfectly.

    The Gen8 had a front door protecting the drive bays, connected to the chassis with two hinges. HP has cut costs on the Gen10: when you open the door, it basically falls off, as there's no hinge. It's not a big issue; the overall build quality of the Gen10 is excellent.

    I have no objective measurements of noise levels, but the device seems almost silent to me.

    Evaluation and conclusion

    At first, I was a bit disappointed about the lack of iLO, but it turned out for the best. What makes the Gen10 so interesting is the idle power consumption. The lack of iLO support probably contributes to the improved idle power consumption.

    The Gen8 measures between 30 and 35 Watt idle power consumption, so the Gen10 does fare much better (~18 Watt).

    Firewall/Router

    At this level of power consumption, the Gen10 could be a formidable router/firewall solution. The only real downside is its size compared to purpose-built firewalls/routers. The two network interfaces may provide sufficient network connectivity, but if you need more ports and using VLANs is not enough, it's easy to add extra ports via the PCIe slots.

    If an ancient N40L with its piss-poor CPU can handle a 500 Mbit internet connection, this device will have no problems with it, I'd presume. Once I've taken this device into production as a replacement for my existing router/firewall, I will share my experience.

    Storage / NAS

    The Gen8 and Gen10 both have four SATA drive bays and a fifth internal SATA header. From this perspective, nothing has changed. The reduced idle power consumption could make the Gen10 an even more attractive option for a DIY home grown NAS.

    All things considered, I think the Gen10 is a great device and I have not really encountered any downsides. If you don't mind putting a bit of effort into a DIY solution, the Gen10 is a great platform for a NAS or router/firewall that can compete with most purpose-built devices.

    I may update this article as I gain more experience with this device.

    Tagged as: Storage Networking
