<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"><title>Louwrentius</title><link href="https://louwrentius.com/" rel="alternate"></link><link href="https://louwrentius.com/feed/all.atom.xml" rel="self"></link><id>https://louwrentius.com/</id><updated>2023-11-07T12:00:00+01:00</updated><entry><title>Tunneling Elixir cluster network traffic over Wireguard</title><link href="https://louwrentius.com/tunneling-elixir-cluster-network-traffic-over-wireguard.html" rel="alternate"></link><published>2023-11-07T12:00:00+01:00</published><updated>2023-11-07T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2023-11-07:/tunneling-elixir-cluster-network-traffic-over-wireguard.html</id><summary type="html">&lt;h3&gt;Introduction&lt;/h3&gt;
&lt;p&gt;The other day I was supporting a customer with an Elixir-based platform that would make use of Elixir &lt;a href="https://github.com/bitwalker/libcluster"&gt;libcluster&lt;/a&gt;, so messages on one host can be passed to other hosts. This can - for example - enable live updates for all users, even if they are not communicating with the same …&lt;/p&gt;</summary><content type="html">&lt;h3&gt;Introduction&lt;/h3&gt;
&lt;p&gt;The other day I was supporting a customer with an Elixir-based platform that makes use of Elixir &lt;a href="https://github.com/bitwalker/libcluster"&gt;libcluster&lt;/a&gt;, so that messages on one host can be passed to other hosts. This can - for example - enable live updates for all users, even if they are not communicating with the same application server.&lt;/p&gt;
&lt;h3&gt;Encryption&lt;/h3&gt;
&lt;p&gt;Elixir's libcluster does support encrypted communication using TLS certificates. However, even with the help of an application developer, I was struggling to make it work.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&amp;quot;severity&amp;quot;:&amp;quot;warn&amp;quot;,&amp;quot;message&amp;quot;:&amp;quot;[libcluster:example] unable to connect to :\&amp;quot;app@Host-B\&amp;quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I'm absolutely open to the idea that we did something wrong and that certificate-based encryption will work, but we were time-constrained and decided to opt for another solution that seemed simpler and easier to maintain.&lt;/p&gt;
&lt;h3&gt;Wireguard as the encrypted transport&lt;/h3&gt;
&lt;p&gt;I deployed a Wireguard mesh network between all application servers using Ansible, which was straightforward. We just provisioned all hosts into the /etc/hosts file on every server to keep things simple.&lt;/p&gt;
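&lt;p&gt;For illustration, the Wireguard configuration on Host-A could look something like the sketch below. The keys, port and interface name are placeholders, not our actual values:&lt;/p&gt;

```ini
# /etc/wireguard/wg0.conf on Host-A (sketch; keys and port are placeholders)
[Interface]
Address = 192.168.0.1/24
ListenPort = 51820
PrivateKey = REPLACE_WITH_HOST_A_PRIVATE_KEY

[Peer]
# Host-B: the Endpoint uses the external address, AllowedIPs the tunnel address
PublicKey = REPLACE_WITH_HOST_B_PUBLIC_KEY
Endpoint = 10.0.11.231:51820
AllowedIPs = 192.168.0.2/32
```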
&lt;p&gt;In the table below, we show a simplified example of the setup.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Hostname&lt;/th&gt;
&lt;th&gt;IP-address&lt;/th&gt;
&lt;th&gt;Wireguard Hostname&lt;/th&gt;
&lt;th&gt;Wireguard IP-address&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Host-A&lt;/td&gt;
&lt;td&gt;10.0.10.123&lt;/td&gt;
&lt;td&gt;Host-A-wg&lt;/td&gt;
&lt;td&gt;192.168.0.1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Host-B&lt;/td&gt;
&lt;td&gt;10.0.11.231&lt;/td&gt;
&lt;td&gt;Host-B-wg&lt;/td&gt;
&lt;td&gt;192.168.0.2&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The Elixir applications would only know about the Host-A-wg and Host-B-wg hostnames and thus communicate over the encrypted VPN tunnel.&lt;/p&gt;
&lt;h3&gt;The problem with wireguard and libcluster&lt;/h3&gt;
&lt;p&gt;The key issue with libcluster is that when Host-A connects to Host-B, it uses the DNS hostname Host-B-wg. But the actual hostname of Host-B is - you guessed it - 'Host-B'. This means there is a mismatch, and for reasons unknown to me, the libcluster connection will fail.&lt;/p&gt;
&lt;p&gt;So the target hostname as configured in libcluster must match the hostname of the actual host! Since libcluster seems to make the use of domain names mandatory, using IP addresses was not an option. &lt;/p&gt;
&lt;p&gt;If we pointed Host-B to its Wireguard IP address (192.168.0.2), the problem would be solved. However, in that case, Wireguard no longer knows about the external 10.0.11.231 IP address and also tries to connect to the 192.168.0.2 address, which only exists once the tunnel is up. So the Wireguard tunnel would never be created.&lt;/p&gt;
&lt;h3&gt;The solution&lt;/h3&gt;
&lt;p&gt;The solution is not that elegant, but it works. We still point the Host-B domain name to the Wireguard IP address 192.168.0.2, but we create an additional DNS record specifically for Wireguard (Host-B-wg), pointing to 10.0.11.231, so it can set up the VPN tunnel.&lt;/p&gt;
&lt;p&gt;This is what /etc/hosts looks like on Host-A:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="mf"&gt;10.0.10.123&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Host&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;A&lt;/span&gt;
&lt;span class="mf"&gt;192.168.0.2&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Host&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;B&lt;/span&gt;
&lt;span class="mf"&gt;10.0.11.231&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Host&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;B&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;wg&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;And this is what /etc/hosts looks like on Host-B:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="mf"&gt;10.0.11.231&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Host&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;B&lt;/span&gt;
&lt;span class="mf"&gt;192.168.0.1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Host&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;A&lt;/span&gt;
&lt;span class="mf"&gt;10.0.10.123&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Host&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;A&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;wg&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Evaluation&lt;/h3&gt;
&lt;p&gt;Although every choice is a tradeoff, for us the Wireguard-based solution makes the most sense, especially now that we have an encrypted tunnel between all hosts: any future communication between hosts can be encrypted without additional effort.&lt;/p&gt;</content><category term="Development"></category><category term="Uncategorized"></category></entry><entry><title>IKEA $50 VINDSTYRKA vs. $290 Dylos air quality monitor</title><link href="https://louwrentius.com/ikea-50-vindstyrka-vs-290-dylos-air-quality-monitor.html" rel="alternate"></link><published>2023-09-17T12:00:00+02:00</published><updated>2023-09-17T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2023-09-17:/ikea-50-vindstyrka-vs-290-dylos-air-quality-monitor.html</id><summary type="html">&lt;p&gt;This is a brief article in which I compare the IKEA VINDSTYRKA $50 air quality monitor (PM2.5) with a $290 air quality monitor made by Dylos to see if it's any good.&lt;/p&gt;
&lt;h3&gt;Context&lt;/h3&gt;
&lt;p&gt;If you care about indoor air quality, you may already own a CO2 monitor to determine if …&lt;/p&gt;</summary><content type="html">&lt;p&gt;This is a brief article in which I compare the IKEA VINDSTYRKA $50 air quality monitor (PM2.5) with a $290 air quality monitor made by Dylos to see if it's any good.&lt;/p&gt;
&lt;h3&gt;Context&lt;/h3&gt;
&lt;p&gt;If you care about indoor air quality, you may already own a CO2 monitor to determine if it's time to ventilate your space a bit&lt;sup id="fnref:auto"&gt;&lt;a class="footnote-ref" href="#fn:auto"&gt;1&lt;/a&gt;&lt;/sup&gt;. &lt;/p&gt;
&lt;p&gt;But a CO2 monitor doesn't tell you anything about the amount and size of &lt;a href="https://www.epa.gov/pm-pollution/particulate-matter-pm-basics"&gt;particulate matter&lt;/a&gt; in the air.&lt;/p&gt;
&lt;p&gt;Of particular interest are very fine particles, in the "&lt;a href="https://en.wikipedia.org/wiki/Particulates#Size,_shape,_and_solubility_matter"&gt;PM2.5&lt;/a&gt;" category. Those particles, 2.5 micrometers or smaller in diameter, can embed themselves deep inside the lungs and cause health issues.&lt;/p&gt;
&lt;p&gt;Both air quality monitoring devices are specifically measuring PM2.5 particulate matter, so that's what we will focus on in this test.&lt;/p&gt;
&lt;h3&gt;DYLOS DC1100 PRO&lt;/h3&gt;
&lt;p&gt;I bought a &lt;a href="http://www.dylosproducts.com/dcproairqumo.html"&gt;Dylos DC1100 Pro&lt;/a&gt; in 2014, as I was quite interested in the topic of air quality at that time. As I had to import the device, I believe I paid around 400 Euros for it, but it's now for sale in the US for around $290.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/airquality/dylos02.jpg"&gt;&lt;img alt="Dylos DC1100 Pro" src="https://louwrentius.com/static/images/airquality/dylos01.jpg" /&gt;&lt;/a&gt;
&lt;em&gt;click on the image for a picture of the back&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I specifically chose this model because it has a serial port, which allows me to log data and maybe spot some trends. I was thinking about using this data to control my air circulation system in my home, but I never got around to building this.&lt;/p&gt;
&lt;p&gt;This device (without serial port) is also explored &lt;a href="https://woodgears.ca/dust/dylos.html"&gt;in-depth&lt;/a&gt; by Matthias Wandel, who many of you probably know from his &lt;a href="https://www.youtube.com/@Matthiaswandel"&gt;Youtube channel&lt;/a&gt; with 1.7M subscribers. &lt;em&gt;Tip: he shows the inside of the device.&lt;/em&gt;&lt;/p&gt;
&lt;iframe width="560" height="315" src="https://www.youtube.com/embed/v25owuUboxI?si=E_-cRY4fp0boG29r&amp;amp;start=60" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen&gt;&lt;/iframe&gt;

&lt;p&gt;Note that this video is from 10 years ago, and I find it remarkable that the Dylos DC1100 Pro is still sold - seemingly unmodified - after all these years.&lt;/p&gt;
&lt;h3&gt;The IKEA VINDSTYRKA&lt;/h3&gt;
&lt;p&gt;Recently, I discovered that IKEA is now selling the VINDSTYRKA air quality monitor with support for Zigbee. The product is intended to be used with IKEA's range of air purifiers, to better fine-tune the behaviour of those devices.&lt;/p&gt;
&lt;p&gt;&lt;img alt="vindstyrka" src="https://louwrentius.com/static/images/airquality/vindstyrka.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;The device measures PM2.5 particulate matter and also monitors temperature and humidity. All data is exposed over Zigbee. I've not tested this myself but I wonder how long it would last on a battery bank as it's USB powered. &lt;/p&gt;
&lt;p&gt;Due to the low price tag, I decided to compare this $50 device (€40) with my Dylos. I think it's quite an interesting device, because the Zigbee support allows you to integrate it into home automation and log data, if you have a need for that.&lt;/p&gt;
&lt;h3&gt;Data logging setup&lt;/h3&gt;
&lt;p&gt;The Dylos device is a bit of a pain, because the measurement values are in particles per cubic foot, so I had to find a proper conversion formula, which I found in &lt;a href="https://www.scapeler.com/wp-content/uploads/2019/12/Project%20Report%20VISIBILIS_Final%20Version%201.2-compact.pdf"&gt;this paper (page 17)&lt;/a&gt;. The formula is: &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;PM2.5 Dylos DC1100 (μg/m3) = (particles &amp;gt; 0.5 μm minus particles &amp;gt; 2.5 μm)/250.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
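&lt;p&gt;As a sketch of how this conversion looks in code (the function name is mine, not taken from the actual script):&lt;/p&gt;

```python
def dylos_to_pm25(small_particles: float, large_particles: float) -> float:
    """Approximate PM2.5 in ug/m3 from Dylos DC1100 particle counts.

    small_particles: count of particles larger than 0.5 um
    large_particles: count of particles larger than 2.5 um
    The /250 conversion factor comes from the paper referenced above.
    """
    return (small_particles - large_particles) / 250


# Example: 1500 particles above 0.5 um, 250 above 2.5 um
print(dylos_to_pm25(1500, 250))  # 5.0 ug/m3
```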

&lt;p&gt;A Raspberry Pi 3B+ is running a &lt;a href="https://github.com/louwrentius/airquality/tree/master"&gt;Python script&lt;/a&gt; that reads the data from the serial port, converts it to PM2.5 values using the previously mentioned formula, and submits it to an InfluxDB + Grafana server.&lt;/p&gt;
&lt;p&gt;To log the VINDSTYRKA data, I used a Sonoff Zigbee receiver on a Raspberry Pi 4B+. I installed zigbee2mqtt as a Docker container, a Mosquitto MQTT server, and Telegraf with an MQTT client to submit the data into InfluxDB, which sounds more convoluted than it actually was.&lt;/p&gt;
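&lt;p&gt;For a rough idea of the Telegraf side, an MQTT consumer input along these lines subscribes to the zigbee2mqtt topic and forwards the JSON payload to InfluxDB (the topic, server addresses and database name are examples, not my actual config):&lt;/p&gt;

```toml
# telegraf.conf fragment (sketch; adjust servers, topic and database)
[[inputs.mqtt_consumer]]
  servers = ["tcp://localhost:1883"]
  topics = ["zigbee2mqtt/vindstyrka"]
  data_format = "json"

[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "airquality"
```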
&lt;h3&gt;Test method&lt;/h3&gt;
&lt;p&gt;I just let both devices run for a few days in close proximity to each other in my living room. I kept a balcony door open 24/7. I also created a bit of smoke at some point just to observe how the devices would respond and how much they would deviate from each other. Nothing too scientific, to be frank.&lt;/p&gt;
&lt;h3&gt;Test result&lt;/h3&gt;
&lt;p&gt;I've plotted the data from the Dylos and the Ikea device in the same graph and I think the results are quite straightforward. The peak in the middle was my 'smoke test'. &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Original&lt;/strong&gt;
&lt;a href="https://louwrentius.com/static/images/airquality/grafana02.png"&gt;&lt;img alt="grafana graph" src="https://louwrentius.com/static/images/airquality/grafana01.png" /&gt;&lt;/a&gt;
&lt;em&gt;click on the image for a larger version&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Updated&lt;/strong&gt;
&lt;a href="https://louwrentius.com/static/images/airquality/grafana04.png"&gt;&lt;img alt="grafana graph" src="https://louwrentius.com/static/images/airquality/grafana03.png" /&gt;&lt;/a&gt;
&lt;em&gt;click on the image for a larger version&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;After a few days I noticed a clear deviation between the VINDSTYRKA and the Dylos DC1100 Pro at certain time intervals. I have no real explanation for this deviation and I can't tell which device shows 'correct' data.&lt;/p&gt;
&lt;p&gt;If I try to follow the &lt;a href="https://www.aqi.in/dashboard/Netherlands/noord-holland/Haarlem"&gt;AQI PM2.5 values&lt;/a&gt; for my city, the VINDSTYRKA seems to under-report and the Dylos seems to over-report PM2.5 particulate matter. &lt;/p&gt;
&lt;h3&gt;Evaluation&lt;/h3&gt;
&lt;p&gt;Based on my test, I think the VINDSTYRKA is &lt;em&gt;good enough&lt;/em&gt;, looking at how closely the measurements track the results of the Dylos.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update September 19th 2023&lt;/strong&gt;
Based on the new graph data, it seems the Dylos and VINDSTYRKA are less in agreement over absolute PM2.5 values. I'm not sure what to make of it. &lt;/p&gt;
&lt;p&gt;As both devices still seem to agree on basic trend data, I would say that they still operate in the same ballpark.&lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:auto"&gt;
&lt;p&gt;Maybe you are a home automation enthusiast and you've managed to automate this process.&amp;#160;&lt;a class="footnote-backref" href="#fnref:auto" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Hardware"></category><category term="Solar"></category></entry><entry><title>My solar-powered blog is now on Lithium Iron Phosphate</title><link href="https://louwrentius.com/my-solar-powered-blog-is-now-on-lithium-iron-phosphate.html" rel="alternate"></link><published>2023-05-19T12:00:00+02:00</published><updated>2023-05-19T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2023-05-19:/my-solar-powered-blog-is-now-on-lithium-iron-phosphate.html</id><summary type="html">&lt;p&gt;In my &lt;a href="https://louwrentius.com/i-made-my-blog-solar-powered-then-things-escalated.html"&gt;last blog post&lt;/a&gt; I discussed how a small solar project - to power this blog on a Raspberry Pi - escalated into a full-blown off-grid solar setup, large enough to power the computer I use at the moment to write this update&lt;sup id="fnref:hn"&gt;&lt;a class="footnote-ref" href="#fn:hn"&gt;1&lt;/a&gt;&lt;/sup&gt;. In this update, I want to discuss …&lt;/p&gt;</summary><content type="html">&lt;p&gt;In my &lt;a href="https://louwrentius.com/i-made-my-blog-solar-powered-then-things-escalated.html"&gt;last blog post&lt;/a&gt; I discussed how a small solar project - to power this blog on a Raspberry Pi - escalated into a full-blown off-grid solar setup, large enough to power the computer I use at the moment to write this update&lt;sup id="fnref:hn"&gt;&lt;a class="footnote-ref" href="#fn:hn"&gt;1&lt;/a&gt;&lt;/sup&gt;. In this update, I want to discuss my battery upgrade.&lt;/p&gt;
&lt;p&gt;For me, the huge lead acid battery (as pictured below) was always a relatively cheap temporary solution. &lt;/p&gt;
&lt;p&gt;&lt;img alt="solar contraption" src="https://louwrentius.com/static/images/solarupdate/solarupdate04.jpg" /&gt;
&lt;em&gt;A 12 Volt 230 Ah lead-acid battery&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Lead-acid batteries are not ideal for solar setups &lt;a href="https://louwrentius.com/a-practical-understanding-of-lead-acid-batteries.html"&gt;for multiple reasons&lt;/a&gt;, but the most problematic issue is the slow charging speed. I only have a few hours of direct sunlight per day due to my particular situation, and the battery just could not absorb the available solar energy fast enough.&lt;/p&gt;
&lt;p&gt;For the last 5-7 years, the go-to battery chemistry for solar has been LiFePO4, or lithium iron phosphate, as a replacement for lead-acid batteries. This battery chemistry is not as energy-dense as Lithium-ion, but the upside is price and safety. In particular, LiFePO4 cells aren't as volatile as Lithium-ion cells. They may start outgassing, but they don't start a fire.&lt;/p&gt;
&lt;p&gt;More importantly for my situation: LiFePO4 batteries can charge and discharge at much higher rates than lead-acid batteries&lt;sup id="fnref:contextbat"&gt;&lt;a class="footnote-ref" href="#fn:contextbat"&gt;2&lt;/a&gt;&lt;/sup&gt;. It's possible to charge LiFePO4 cells with a C-rate of 1! This means that if a battery is rated for 100Ah (Ampere-hours), you can charge it with a current of 100 Ampere! My solar setup will never come even close to that number, but at least it's good to have some headroom.&lt;/p&gt;
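&lt;p&gt;The math behind that headroom is simple; a quick sketch (the 0.2C figure for lead-acid is a common rule of thumb, not a measurement of my battery):&lt;/p&gt;

```python
def max_charge_current(capacity_ah: float, c_rate: float) -> float:
    """Maximum charge current (A) for a given capacity (Ah) and C-rate."""
    return capacity_ah * c_rate


print(max_charge_current(230, 1.0))         # LiFePO4 at 1C: 230.0 A
print(round(max_charge_current(230, 0.2)))  # lead-acid at roughly 0.2C: 46 A
```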
&lt;p&gt;&lt;img alt="lithium cell" src="https://louwrentius.com/static/images/solarupdate/solarupdate08.jpg" /&gt;
&lt;em&gt;A single 3.2 volt 230Ah Lithium Iron Phosphate prismatic cell&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I did contemplate buying an off-the-shelf battery but I decided against it. You have no control over the brand and quality of the LiFePO4 cells they use and more importantly, what's the fun in that anyway?&lt;/p&gt;
&lt;p&gt;So I decided to order my own cells and build my own 12 Volt LiFePO4 battery consisting of four cells in series (4S) as my existing system is also based on 12 Volt. Other common configurations are 8S (24 Volt) and 16S (48 Volt&lt;sup id="fnref:volt"&gt;&lt;a class="footnote-ref" href="#fn:volt"&gt;3&lt;/a&gt;&lt;/sup&gt;). &lt;/p&gt;
&lt;p&gt;&lt;img alt="box with 4 cells" src="https://louwrentius.com/static/images/solarupdate/solarupdate09.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;It turned out that I could just buy my cells locally in The Netherlands (instead of China) because of &lt;a href="https://www.nkon.nl"&gt;a company&lt;/a&gt; that specializes in batteries (no affiliate). As the price was right, I bought effectively 3 kWh for just shy of 500 Euros.&lt;/p&gt;
&lt;p&gt;I decided to buy B-grade cells, as those are cheaper than A(utomotive)-grade cells. I might have gone for A-grade cells, so as not to risk anything, if I were building a more serious battery bank for my whole home. Yet a lot of people report no significant differences between A-grade and B-grade LiFePO4 cells for solar battery banks, so in the end, it's all about your particular appetite for risk.&lt;/p&gt;
&lt;p&gt;Just buying cells and putting them in series (in my case 4S) is not enough: a BMS, or battery management system, is needed, which you put in series with the battery on the negative terminal. I ordered a 100A Daly BMS from China, which works fine. I'm even able to use Python to talk with the Daly BMS over Bluetooth to extract data (voltages, current, State of Charge and so on).&lt;/p&gt;
&lt;p&gt;&lt;img alt="Daly BMS" src="https://louwrentius.com/static/images/solarupdate/dalybms.png" /&gt;&lt;/p&gt;
&lt;p&gt;The BMS is critical because it protects the cells against deep discharge and overcharging. In addition, the BMS tries to keep the voltage of the cells as equal as possible, which is called 'balancing'. Charging stops entirely when just one of the cells reaches its maximum voltage. If the other cells have a much lower voltage, they could still be charged, but the one cell with the high voltage blocks them from doing so. That's why cell balancing is critical if you want to use as much of the capacity as possible. &lt;/p&gt;
&lt;p&gt;The Daly BMS is quite bad at cell balancing so I've ordered a separate cell balancer for $18 to improve cell balancing (yet to be installed).&lt;/p&gt;
&lt;p&gt;&lt;img alt="my battery build" src="https://louwrentius.com/static/images/solarupdate/solarupdate10.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="ikea box" src="https://louwrentius.com/static/images/solarupdate/solarupdate11.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;IKEA sells a KUGGIS 32cm x 32cm x 32cm storage box that seems to be perfect for my small battery. As it has two holes on the sides, I just routed the positive and negative cables through them.&lt;/p&gt;
&lt;p&gt;Now that I've put this battery in place I've seen a huge improvement regarding solar charge performance.&lt;/p&gt;
&lt;p&gt;&lt;img alt="Grafana Chart" src="https://louwrentius.com/static/images/solarupdate/solarupdate12.png" /&gt;&lt;/p&gt;
&lt;p&gt;I've potentially created a new problem: my solar charge controller can only handle about 400 Watts of solar power at 12V, and my setup is quite close to reaching this output. I may have undersized my solar charge controller and it has come back to bite me. For now, I'm going to just observe: if that peak of 400 Watts is only reached for a brief time - as it is right now - I don't think I'm going to upgrade my solar charge controller, as that would not be worth it.&lt;/p&gt;
&lt;p&gt;As we are still in May, my best yield is 1.2 kWh per day. Although that's paltry compared to regular residential solar setups, that 1.2 kWh is more than a third of my battery capacity and can run my computer setup for 10 hours, so for me it's good enough.&lt;/p&gt;
&lt;p&gt;It's funny to me that all of this started out with just a 60 Watt solar panel, a 20 Euro solar charge controller (non MPPT) and a few 12V 7Ah lead acid gel batteries in parallel.&lt;/p&gt;
&lt;p&gt;I think it's beyond amazing that you can now build a 15 kWh battery bank complete with BMS for less than €3000. For that amount of money, lead-acid can't come even close to this kind of capacity.&lt;/p&gt;
&lt;p&gt;For context, it's also good to know that the longevity of LiFePO4 cells is amazing. A-grade cells are rated for 6000 cycles (16+ years at one cycle per day) and my vendor rated B-grade cells at 4000 cycles (~11 years).&lt;/p&gt;
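&lt;p&gt;The cycle-to-years arithmetic is trivial, but nice to sketch out:&lt;/p&gt;

```python
def cycle_life_years(rated_cycles: int, cycles_per_day: float = 1.0) -> float:
    """Rough battery lifetime in years at a given cycling frequency."""
    return rated_cycles / (cycles_per_day * 365)


print(round(cycle_life_years(6000), 1))  # A-grade: 16.4 years
print(round(cycle_life_years(4000), 1))  # B-grade: 11.0 years
```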
&lt;p&gt;Maybe my battery build will inspire you to explore building your own battery. LiFePO4 cells come in a whole range of capacities - I've seen small 22Ah cells and huge 304Ah cells - so you can select something that fits your needs and budget. &lt;/p&gt;
&lt;p&gt;If you're looking for more information: there are quite a few Youtubers that specialise in building large battery banks (48 Volt, 300Ah, ~15 kWh) to power their homes and garages.&lt;/p&gt;
&lt;p&gt;Although &lt;a href="https://www.youtube.com/@WillProwse"&gt;Will Prowse&lt;/a&gt; reviewed LiFePO4 cells in the past, he currently focuses mostly on off-the-shelf products, like "rack-mount" batteries and inverter/chargers. &lt;/p&gt;
&lt;p&gt;I also like the &lt;a href="https://www.youtube.com/@OffGridGarageAustralia"&gt;off-grid-garage&lt;/a&gt; channel a lot; it has tested and explored quite a few products.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.youtube.com/channel/UCx6q0LJh5DrnYb9a30RMsWQ"&gt;Harrold Halewijn&lt;/a&gt; (Dutch) also has quite a few videos about solar setups in general and solar battery setups. He's really into automation, in combination with flexible (next-day) energy prices.&lt;/p&gt;
&lt;p&gt;Also in Dutch: a &lt;a href="https://tweakers.net/reviews/11086/5/vijf-tweakers-over-hun-zelfbouwthuisaccu-motivaties-kosten-problemen-en-tips-tips-en-toekomst.html"&gt;cool article&lt;/a&gt; about some people building their own large-scale home storage batteries (15 kWh+).&lt;/p&gt;
&lt;p&gt;Another Dutch person built a &lt;a href="https://gathering.tweakers.net/forum/list_message/75351774#75351774"&gt;solar power factory&lt;/a&gt; with a battery capacity of 128 kWh for professional energy production. Truly amazing.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://news.ycombinator.com/item?id=36000824"&gt;Hacker News thread&lt;/a&gt; about this article.&lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:hn"&gt;
&lt;p&gt;https://news.ycombinator.com/item?id=35596959#35597492&amp;#160;&lt;a class="footnote-backref" href="#fnref:hn" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:contextbat"&gt;
&lt;p&gt;To fully charge a lead-acid battery, the charging process spends a lot of time in the constant-voltage phase: the voltage is kept constant, so as the battery charges further, the charging current goes down and the charge process slows down. More info can be found &lt;a href="https://batteryuniversity.com/article/bu-403-charging-lead-acid#:~:text=With%20the%20CCCV%20method%2C%20lead,and%20%5B3%5D%20float%20charge."&gt;here&lt;/a&gt;.&amp;#160;&lt;a class="footnote-backref" href="#fnref:contextbat" title="Jump back to footnote 2 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:volt"&gt;
&lt;p&gt;It seems to me that most batteries built for home energy storage systems are standardising on 48 volt.&amp;#160;&lt;a class="footnote-backref" href="#fnref:volt" title="Jump back to footnote 3 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Storage"></category><category term="Solar"></category></entry><entry><title>I made my blog solar-powered, then things escalated</title><link href="https://louwrentius.com/i-made-my-blog-solar-powered-then-things-escalated.html" rel="alternate"></link><published>2023-04-17T12:00:00+02:00</published><updated>2023-04-17T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2023-04-17:/i-made-my-blog-solar-powered-then-things-escalated.html</id><summary type="html">&lt;p&gt;In 2020 I wondered if &lt;a href="https://louwrentius.com/this-blog-is-now-running-on-solar-power.html"&gt;I could run my blog on solar power&lt;/a&gt;, being inspired by &lt;a href="https://solar.lowtechmagazine.com/power.html"&gt;Low-tech Magazine&lt;/a&gt;, doing the same thing (but better)&lt;sup id="fnref:concept"&gt;&lt;a class="footnote-ref" href="#fn:concept"&gt;1&lt;/a&gt;&lt;/sup&gt;. The answer was 'yes', but only through spring and summer. &lt;/p&gt;
&lt;p&gt;I live in an apartment complex in The Netherlands and my balcony is facing west …&lt;/p&gt;</summary><content type="html">&lt;p&gt;In 2020 I wondered if &lt;a href="https://louwrentius.com/this-blog-is-now-running-on-solar-power.html"&gt;I could run my blog on solar power&lt;/a&gt;, being inspired by &lt;a href="https://solar.lowtechmagazine.com/power.html"&gt;Low-tech Magazine&lt;/a&gt;, doing the same thing (but better)&lt;sup id="fnref:concept"&gt;&lt;a class="footnote-ref" href="#fn:concept"&gt;1&lt;/a&gt;&lt;/sup&gt;. The answer was 'yes', but only through spring and summer. &lt;/p&gt;
&lt;p&gt;I live in an apartment complex in The Netherlands and my balcony is facing west. This means it only receives direct sunlight from 16:00 onward during spring and summer. Most of the time, the panels only get &lt;em&gt;indirect&lt;/em&gt; sunlight and therefore generate just a tiny fraction of their rated performance. The key issue is not solar, but the west-facing balcony (it should ideally be facing south).&lt;/p&gt;
&lt;p&gt;&lt;img alt="solar panel" src="https://louwrentius.com/static/images/solarpanelbalcony-small.jpg" /&gt;
&lt;em&gt;original solar panel&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;It's fair to say that my experiment isn't rational because of the sub-optimal solar conditions. Yet, I'm unreasonably obsessed by solar power and I wanted to make it work, even if it didn't make sense from an economic or environmental perspective&lt;sup id="fnref:insane"&gt;&lt;a class="footnote-ref" href="#fn:insane"&gt;2&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;When I wrote my blog about my solar-powered setup, I was already on my second iteration: I started out with just a 60 Watt panel and a cheap $20 solar controller&lt;sup id="fnref:donotbuy"&gt;&lt;a class="footnote-ref" href="#fn:donotbuy"&gt;3&lt;/a&gt;&lt;/sup&gt;. That didn't even come close to being sufficient, so I upgraded the solar controller and bought a second panel rated for 150 Watt, which is pictured above. With the 60 Watt and 150 Watt panels in parallel, it was still not enough to keep the batteries charged in the fall and winter, due to the west-facing balcony.&lt;/p&gt;
&lt;p&gt;A Raspberry Pi 4B+ consumes around 3.5 Watts of power continuously. Although that sounds like a very light load, if you run it for 24 hours, it's equivalent to using 84 Watts continuously for one hour. That's like running two &lt;a href="https://www.blokker.nl/blokker-tafelventilator-bl-30002-30-cm---wit/2061834.html"&gt;40 Watt fans&lt;/a&gt; for one hour. It's not insignificant, and it doesn't even account for battery charging losses.&lt;/p&gt;
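&lt;p&gt;To make that comparison concrete, a tiny sketch of the arithmetic:&lt;/p&gt;

```python
def energy_wh(power_watt: float, hours: float) -> float:
    """Energy in watt-hours for a constant load over a period of time."""
    return power_watt * hours


print(energy_wh(3.5, 24))    # Raspberry Pi, one day: 84.0 Wh
print(energy_wh(80, 1.0))    # two 40 Watt fans, one hour: 80.0 Wh
```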
&lt;p&gt;So 210 Watt of solar (receiving mostly indirect sunlight) still could not power my Raspberry Pi through the winter under my circumstances. Yet, in the summer, I had plenty of power available and had no problems charging my iPad and other devices.&lt;/p&gt;
&lt;p&gt;As my solar setup could not keep the batteries charged from October onward, I decided to do something radical. I bought a 370 Watt&lt;sup id="fnref:panel"&gt;&lt;a class="footnote-ref" href="#fn:panel"&gt;4&lt;/a&gt;&lt;/sup&gt; solar panel (1690 x 1029 mm) and built a frame made of aluminium tubing&lt;sup id="fnref:noexperience"&gt;&lt;a class="footnote-ref" href="#fn:noexperience"&gt;5&lt;/a&gt;&lt;/sup&gt;. Solar panels have become so cheap that the aluminium frame is more expensive than the panel.&lt;/p&gt;
&lt;p&gt;&lt;img alt="solar contraption" src="https://louwrentius.com/static/images/solarupdate/solarupdate01.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;Even this 370 Watt panel was not enough during the gloomy, cloudy winter days. So I bought a second panel and built a second tube frame. Only with a 740 Watt rated solar panel setup was I able to power my Raspberry Pi through the winter&lt;sup id="fnref:cheat"&gt;&lt;a class="footnote-ref" href="#fn:cheat"&gt;6&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;I didn't create this over-powered setup just to power the Raspberry Pi during the winter. I knew that solar performed much better during spring and summer and I wanted to capture as much of that energy as possible. The real goal was to go beyond powering the Pi and power my computer desk, which includes an Intel Mac Mini, two 1440p 27" displays and some other components (using around 100 Watt on average)&lt;sup id="fnref:bg"&gt;&lt;a class="footnote-ref" href="#fn:bg"&gt;7&lt;/a&gt;&lt;/sup&gt;. &lt;/p&gt;
&lt;p&gt;I would not be able to power my desk 24/7, but I would be happy if I could work on solar power for a few hours every other day during spring and summer. I also wanted to light my house in the evening using this setup.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://louwrentius.com/this-blog-is-now-running-on-solar-power.html"&gt;original solar setup&lt;/a&gt; was enough to power the Raspberry Pi and charge an iPad in the spring/summer. The solar charge controller could not handle the increased solar capacity and needed replacement. So I decided to build a new setup inspired by &lt;a href="https://www.youtube.com/channel/UCoj6RxIAQq8kmJme-5dnN0Q"&gt;Will Prowse&lt;/a&gt;'s solar demo setups, which is pictured below:&lt;/p&gt;
&lt;p&gt;&lt;img alt="solar contraption" src="https://louwrentius.com/static/images/solarupdate/solarupdate02.jpg" /&gt;
&lt;em&gt;the latest iteration of my solar setup&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;First a brief disclaimer: I'm a hobbyist, not an expert (if you didn't notice already). I have no background in electrical systems. I've tried to make my setup safe, but I may have done things that are not recommended.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;My setup is a 12-volt system&lt;sup id="fnref:no12"&gt;&lt;a class="footnote-ref" href="#fn:no12"&gt;9&lt;/a&gt;&lt;/sup&gt;. The drawback of a 12-volt system is the relatively large currents required to charge the battery and power the inverter. This requires thicker, more expensive cabling to limit energy losses&lt;sup id="fnref:risk"&gt;&lt;a class="footnote-ref" href="#fn:risk"&gt;8&lt;/a&gt;&lt;/sup&gt;. &lt;/p&gt;
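&lt;p&gt;A quick back-of-the-envelope sketch (with hypothetical numbers, not measurements from my setup) shows why: cable loss grows with the square of the current, so a higher system voltage dramatically reduces losses in the same cable.&lt;/p&gt;

```python
# Cable loss grows with the square of the current (P = I^2 * R).
# Numbers below are hypothetical, purely for illustration.

def cable_loss_watts(load_watts, system_volts, cable_resistance_ohms):
    current = load_watts / system_volts  # I = P / V
    return current ** 2 * cable_resistance_ohms

# The same 500 W load through 0.01 ohm of cabling:
print(cable_loss_watts(500, 12, 0.01))  # ~17.4 W lost at 12 V
print(cable_loss_watts(500, 48, 0.01))  # ~1.1 W lost at 48 V
```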
&lt;p&gt;Most components are self-explanatory, except for the shunt. This device precisely measures battery voltage and how much current is going in and out of the battery. The solar charge controller and the shunt are linked together in a Bluetooth network, so the solar controller uses the precise voltage and current information from the shunt to regulate the battery charging process.&lt;/p&gt;
&lt;p&gt;The solar controller, inverter and shunt have VE.direct interfaces (Victron-specific) which I use to collect data. I'm using a Python VE.direct module to gather this data, which just works without any issues. My Python script dumps the data into InfluxDB and I use Grafana for graphs (see below). The script also updates the 'solar status' bar to the right (or bottom for mobile users).&lt;/p&gt;
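&lt;p&gt;For a sense of what that involves: the VE.direct text protocol emits frames of tab-separated label/value lines, with voltage reported in mV and current in mA. The parsing roughly boils down to something like this (a simplified illustration, not my actual script):&lt;/p&gt;

```python
# Simplified parser for VE.direct text-protocol frames: each frame is a
# series of "LABEL<TAB>VALUE" lines. Voltage (V) is in mV, current (I)
# in mA. Checksum verification is omitted for brevity.

def parse_vedirect_frame(raw: bytes) -> dict:
    fields = {}
    for line in raw.split(b"\r\n"):
        if b"\t" not in line:
            continue
        label, value = line.split(b"\t", 1)
        fields[label.decode("ascii", "ignore")] = value.decode("ascii", "ignore")
    return fields

frame = b"V\t12800\r\nI\t-1500\r\nChecksum\t\x12"
data = parse_vedirect_frame(frame)
print(int(data["V"]) / 1000, "V")  # 12.8 V
print(int(data["I"]) / 1000, "A")  # -1.5 A (discharging)
```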
&lt;p&gt;&lt;img alt="grafana" src="https://louwrentius.com/solar/solar.png" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;this image is updated periodically&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The LCD display is just for fun, and mostly to keep an eye on the battery charge state.&lt;/p&gt;
&lt;p&gt;&lt;img alt="solar contraption" src="https://louwrentius.com/static/images/solarupdate/solarupdate03.jpg" /&gt;
&lt;em&gt;the 20x4 LCD screen&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The LCD screen is managed by the same Python script that dumps the VE.direct data into InfluxDB. I focus on two metrics in particular. First, the daily solar yield as a percentage: 100% means the load has been compensated by solar and anything higher means an energy 'profit'. In the bottom right we see the charger status (Bulk): if it's on 'Float', the battery is full. I tend to wait for the battery to recharge to 'Float' status before I use the inverter again.&lt;/p&gt;
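&lt;p&gt;The yield percentage itself is just a ratio of solar yield to consumption (the numbers below are illustrative):&lt;/p&gt;

```python
# Daily solar yield as a percentage of consumption: 100% means solar
# fully compensated the load, anything higher is an energy 'profit'.
# Numbers are illustrative.

def yield_percentage(solar_yield_wh, load_wh):
    return solar_yield_wh / load_wh * 100

print(yield_percentage(450, 300))  # 150.0 -> a surplus day
```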
&lt;p&gt;Let's talk about the battery. I've chosen to use a large &lt;em&gt;used&lt;/em&gt; &lt;a href="https://louwrentius.com/a-practical-understanding-of-lead-acid-batteries.html"&gt;lead-acid battery&lt;/a&gt; even though Lithium (LiFePO4) batteries &lt;a href="https://www.youtube.com/watch?v=Rp8Hspi4BC4"&gt;beat lead-acid&lt;/a&gt; in every metric.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;em&gt;Update May 2023:&lt;/em&gt; I have since upgraded to Lithium Iron Phosphate, see &lt;a href="https://louwrentius.com/my-solar-powered-blog-is-now-on-lithium-iron-phosphate.html"&gt;this blogpost&lt;/a&gt; for more information.&lt;/p&gt;
&lt;hr&gt;

&lt;p&gt;&lt;img alt="solar contraption" src="https://louwrentius.com/static/images/solarupdate/solarupdate04.jpg" /&gt;
&lt;em&gt;A 12 Volt 230 Ah lead-acid battery&lt;sup id="fnref:cover"&gt;&lt;a class="footnote-ref" href="#fn:cover"&gt;10&lt;/a&gt;&lt;/sup&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I bought the battery&lt;sup id="fnref:la"&gt;&lt;a class="footnote-ref" href="#fn:la"&gt;11&lt;/a&gt;&lt;/sup&gt; second-hand for €100, so it's not a significant investment for a battery. Although it's a bit worn down and its capacity is reduced, it is still good enough for me to run my computer setup for 10 hours after a full charge&lt;sup id="fnref:protection"&gt;&lt;a class="footnote-ref" href="#fn:protection"&gt;12&lt;/a&gt;&lt;/sup&gt;. In practice, I won't use the battery for more than four to five hours at a time, because recharging can take multiple days and lead-acid batteries should ideally be fully recharged within 24 hours or their aging is accelerated. &lt;/p&gt;
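&lt;p&gt;On paper, the runtime works out roughly like this (nominal capacity and the 50% depth-of-discharge guideline; wear and inverter losses shave hours off in practice):&lt;/p&gt;

```python
# Back-of-the-envelope runtime for a lead-acid battery, keeping to the
# rule of thumb of discharging at most 50% of capacity. Nominal numbers;
# a worn battery and inverter losses reduce this in practice.

def runtime_hours(capacity_ah, volts, load_watts, usable_fraction=0.5):
    usable_wh = capacity_ah * volts * usable_fraction
    return usable_wh / load_watts

print(runtime_hours(230, 12, 100))  # 13.8 hours on paper
```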
&lt;p&gt;The lead-acid battery also serves another purpose: it's a relatively cheap way for me to validate my setup. If it works as intended, I might opt to upgrade to lithium (LiFePO4) at some point.&lt;/p&gt;
&lt;p&gt;Until recently, switching between the grid and solar for my computer setup was quite cumbersome. I had to power down all equipment, connect to the inverter and power everything up again. That got old very quickly. Fortunately, I stumbled on an advertisement for a Victron Filax 2 and it turns out that it does exactly what I need. &lt;/p&gt;
&lt;p&gt;&lt;img alt="solar contraption" src="https://louwrentius.com/static/images/solarupdate/solarupdate06.png" /&gt;&lt;/p&gt;
&lt;p&gt;The Filax 2 switches between two 230 Volt input sources without any interruption, like a UPS (uninterruptible power supply). Now that I've installed this device, I can switch between solar and grid power seamlessly. Brand new, the Filax 2 costs €350, which was beyond what I wanted to spend, but the second-hand price was acceptable.&lt;/p&gt;
&lt;p&gt;My solar setup is not something that I can just turn on and forget about. I have to keep an eye on the battery, especially because lead-acid should ideally be recharged within 24 hours. &lt;/p&gt;
&lt;p&gt;&lt;img alt="happy case" src="https://louwrentius.com/static/images/solarupdate/solarupdate07.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Happy case: the battery is full and my computer desk is 100% solar-powered&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;It's now April 2023 and my setup seems promising. Peak output of the two 370 Watt solar panels facing west was 230 Watts. Only for a very short period, but it makes me confident for spring and summer. I could automate enabling and disabling the inverter, with a relay and some logic in Python, but for now I'm good with manually operating the inverter.&lt;/p&gt;
&lt;p&gt;You may have noticed that I've used a lot of Victron equipment&lt;sup id="fnref:sponsor"&gt;&lt;a class="footnote-ref" href="#fn:sponsor"&gt;13&lt;/a&gt;&lt;/sup&gt;. Mostly because it seems high-quality and the data interfaces are documented and easy-to-use. The inverter was also chosen because of the low parasitic load (self-consumption) of around 6 watt. Victron equipment is not cheap. Buying Victron gear second-hand can save a lot of money.&lt;/p&gt;
&lt;p&gt;Speaking of cost: if I include all the costs I've made, including previous solar projects and mistakes, I think I've spent around €2000.&lt;/p&gt;
&lt;p&gt;That's all I have to say about my hobby solar project for now.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://news.ycombinator.com/item?id=35596959#35597492"&gt;Link&lt;/a&gt; to Hacker News thread about this article.&lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:concept"&gt;
&lt;p&gt;Their attempt was quite serious and precise. They accounted for the energy used to produce the equipment. They went as far as dithering images to reduce bandwidth and thus energy usage.&amp;#160;&lt;a class="footnote-backref" href="#fnref:concept" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:insane"&gt;
&lt;p&gt;The cost can never be reclaimed by the electricity savings. Also, the energy produced to make all the components would never be recovered due to my west-facing setup.&amp;#160;&lt;a class="footnote-backref" href="#fnref:insane" title="Jump back to footnote 2 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:donotbuy"&gt;
&lt;p&gt;Never buy those cheap non-MPPT solar charge controllers unless you really know what you are doing. You are better off with an MPPT controller, which is far more effective at getting the most energy out of a solar panel.&amp;#160;&lt;a class="footnote-backref" href="#fnref:donotbuy" title="Jump back to footnote 3 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:panel"&gt;
&lt;p&gt;Jinko half cut 120 cell 370 WP JKM370N-6TL3-B&amp;#160;&lt;a class="footnote-backref" href="#fnref:panel" title="Jump back to footnote 4 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:noexperience"&gt;
&lt;p&gt;I have absolutely no experience with designing and building aluminium tube frames. After you're done laughing at this contraption, if you have a better, more efficient design, I'm still interested.&amp;#160;&lt;a class="footnote-backref" href="#fnref:noexperience" title="Jump back to footnote 5 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:cheat"&gt;
&lt;p&gt;I may have cheated once by recharging the battery from the grid just to protect it against accelerated aging due to being in a prolonged (partially) discharged state.&amp;#160;&lt;a class="footnote-backref" href="#fnref:cheat" title="Jump back to footnote 6 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:bg"&gt;
&lt;p&gt;Suddenly you realise that making the background on all monitors black saves ~20 Watt. My blog should be dark-themed to reduce energy usage 😅&amp;#160;&lt;a class="footnote-backref" href="#fnref:bg" title="Jump back to footnote 7 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:risk"&gt;
&lt;p&gt;I've actually oversized the battery cabling for safety reasons. If a length of cable isn't rated for the current flowing through it, it becomes a resistor, generating heat, which can cause a fire, so I want to be careful.&amp;#160;&lt;a class="footnote-backref" href="#fnref:risk" title="Jump back to footnote 8 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:no12"&gt;
&lt;p&gt;If you ever intend to build some kind of solar setup yourself, consider a 24 Volt or ideally a 48 Volt system to reduce currents and thus save on cabling cost.&amp;#160;&lt;a class="footnote-backref" href="#fnref:no12" title="Jump back to footnote 9 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:cover"&gt;
&lt;p&gt;The + and - poles are temporarily uncovered for this picture; normally they are covered to prevent a short circuit if anything falls on them.&amp;#160;&lt;a class="footnote-backref" href="#fnref:cover" title="Jump back to footnote 10 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:la"&gt;
&lt;p&gt;A sealed lead-acid battery like the one I'm using is safe and won't release any (explosive) gasses unless overcharged or abused.&amp;#160;&lt;a class="footnote-backref" href="#fnref:la" title="Jump back to footnote 11 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:protection"&gt;
&lt;p&gt;The inverter uses a dynamic load algorithm to prevent deep discharge of the battery, and it seems to work perfectly. Ideally a lead-acid battery should never be discharged beyond 50% of capacity. Dumb inverters just discharge until 10.5 volts under load, which means the battery is almost depleted, causing rapid aging and a significantly reduced lifespan.&amp;#160;&lt;a class="footnote-backref" href="#fnref:protection" title="Jump back to footnote 12 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:sponsor"&gt;
&lt;p&gt;No, I'm not sponsored by Victron, I wish 😅💸&amp;#160;&lt;a class="footnote-backref" href="#fnref:sponsor" title="Jump back to footnote 13 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Storage"></category><category term="Solar"></category></entry><entry><title>Benchmarking cheap SSDs for fun, no profit (be warned)</title><link href="https://louwrentius.com/benchmarking-cheap-ssds-for-fun-no-profit-be-warned.html" rel="alternate"></link><published>2023-03-26T12:00:00+02:00</published><updated>2023-03-26T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2023-03-26:/benchmarking-cheap-ssds-for-fun-no-profit-be-warned.html</id><summary type="html">&lt;p&gt;The price of Solid-state drives (SSDs) has dropped significantly over the last few years. It's now possible to buy a 1TB solid-state drive for less than €60. However, at such low price points, there is a catch.&lt;/p&gt;
&lt;p&gt;Although cheap SSDs do perform fine regarding reads, sustained write performance can be …&lt;/p&gt;</summary><content type="html">&lt;p&gt;The price of Solid-state drives (SSDs) has dropped significantly over the last few years. It's now possible to buy a 1TB solid-state drive for less than €60. However, at such low price points, there is a catch.&lt;/p&gt;
&lt;p&gt;Although cheap SSDs do perform fine regarding reads, sustained write performance can be really atrocious. To demonstrate this concept, I bought a bunch of the cheapest SATA SSDs I could find - as listed below - and benchmarked them with &lt;a href="https://fio.readthedocs.io/en/latest/fio_doc.html"&gt;Fio&lt;/a&gt;. &lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Capacity&lt;/th&gt;
&lt;th&gt;Price&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ADATA Ultimate SU650&lt;/td&gt;
&lt;td&gt;240 GB&lt;/td&gt;
&lt;td&gt;€ 15,99&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PNY CS900&lt;/td&gt;
&lt;td&gt;120 GB&lt;/td&gt;
&lt;td&gt;€ 14,56&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Kingston A400&lt;/td&gt;
&lt;td&gt;120 GB&lt;/td&gt;
&lt;td&gt;€ 20,85&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Verbatim Vi550 S3&lt;/td&gt;
&lt;td&gt;128 GB&lt;/td&gt;
&lt;td&gt;€ 14,99&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;I didn't have the budget to buy a bunch of 1TB or 2TB SSDs, so these ultra-cheap, low-capacity SSDs are a bit of a stand-in. I've also added a Crucial MX500 1TB (CT1000MX500SSD1) SATA&lt;sup id="fnref:sata"&gt;&lt;a class="footnote-ref" href="#fn:sata"&gt;1&lt;/a&gt;&lt;/sup&gt; SSD - which I already owned - to the benchmarks to see how well those small-capacity SSDs stack up to a cheap SSD with a much larger capacity.&lt;/p&gt;
&lt;h3&gt;Understanding SSD write performance&lt;/h3&gt;
&lt;p&gt;To understand the benchmark results a bit better, we discuss some SSD concepts in this section. Feel free to skip to the actual benchmarks if you're already familiar with them.&lt;/p&gt;
&lt;h4&gt;SLC Cache&lt;/h4&gt;
&lt;p&gt;SSDs originally used single-level cell (SLC) flash memory, which can hold a single bit and is the fastest and most reliable flash memory available. Unfortunately, it's also the most expensive. To reduce cost, multi-level cell (MLC) flash was invented, which can hold two bits instead of one, at the cost of speed and longevity&lt;sup id="fnref:longevity"&gt;&lt;a class="footnote-ref" href="#fn:longevity"&gt;2&lt;/a&gt;&lt;/sup&gt;. This is even more so for triple-level cell (TLC) and quad-level cell (QLC) flash memory. All 'cheap' SSDs I benchmark use 3D v-nand&lt;sup id="fnref:vnand"&gt;&lt;a class="footnote-ref" href="#fn:vnand"&gt;3&lt;/a&gt;&lt;/sup&gt; TLC flash memory.&lt;/p&gt;
&lt;p&gt;One technique to temporarily boost SSD performance is to use a (small) portion of (in our case) TLC flash memory as if it was SLC memory. This SLC memory then acts as a fast write cache&lt;sup id="fnref:dynamic"&gt;&lt;a class="footnote-ref" href="#fn:dynamic"&gt;4&lt;/a&gt;&lt;/sup&gt;. When the SSD is idle, data is moved from the SLC cache to the TLC flash memory in the background. However, this process is limited by the speed of the 'slower' TLC flash memory and can take a while to complete.&lt;/p&gt;
&lt;p&gt;While this trick with SLC memory works well for brief, intermittent write loads, sustained write loads will fill up the SLC cache and cause a significant drop in performance as the SSD is forced to write data into slower TLC memory.&lt;/p&gt;
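&lt;p&gt;A toy model makes it obvious why sustained writes fall off a cliff once the cache is exhausted (all numbers are made up for illustration):&lt;/p&gt;

```python
# Toy model of SLC caching: writes proceed at the fast cache rate until
# the cache fills, then drop to the raw TLC rate. All numbers invented.

def time_to_write_seconds(total_gb, cache_gb, cache_mb_s, tlc_mb_s):
    fast_gb = min(total_gb, cache_gb)
    slow_gb = total_gb - fast_gb
    return fast_gb * 1024 / cache_mb_s + slow_gb * 1024 / tlc_mb_s

# Writing 100 GB to a drive with a 20 GB SLC cache:
print(time_to_write_seconds(100, 20, 450, 60))   # ~1411 seconds
print(time_to_write_seconds(100, 100, 450, 60))  # ~228 if it all fit in cache
```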
&lt;h4&gt;DRAM cache&lt;/h4&gt;
&lt;p&gt;As flash memory has a limited lifespan and can only take a limited number of writes, a wear-leveling mechanism is used to distribute writes evenly over all cells, regardless of where data is written logically. Keeping track of this mapping between logical and physical 'locations' can be sped up with a DRAM cache (chip), as DRAM tends to be faster than flash memory. In addition, the DRAM can also be used to cache writes, improving performance. To reduce cost, cheap SSDs omit the DRAM cache chip, so they have to update their data mapping tables in flash memory, which is slower. This can also impact (sustained) write performance. To be frank, I'm not sure how much the lack of DRAM impacts our benchmarks.&lt;/p&gt;
&lt;h3&gt;Benchmark method&lt;/h3&gt;
&lt;p&gt;Before I started benchmarking, I issued a TRIM command to clear each drive.
Next, I performed a sequential write benchmark of the entire SSD with a block size of 1 megabyte and a queue depth of 32. The benchmark is performed on the 'raw' device, no filesystem is used. I used &lt;a href="https://fio.readthedocs.io/en/latest/fio_doc.html"&gt;Fio&lt;/a&gt; for these benchmarks.&lt;/p&gt;
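&lt;p&gt;For reference, the Fio job boils down to roughly the following job file (the device name is a placeholder; running this destroys all data on the target device):&lt;/p&gt;

```ini
; Sequential write over the raw device, 1 MiB blocks, queue depth 32.
; /dev/sdX is a placeholder for the SSD under test -- this is destructive.
[seq-write]
filename=/dev/sdX
rw=write
bs=1M
iodepth=32
ioengine=libaio
direct=1
```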
&lt;h3&gt;Benchmark results&lt;/h3&gt;
&lt;p&gt;The chart below shows write bandwidth over time for all tested SSDs. Each drive has been benchmarked in full, but the data is truncated to the first 400 seconds for readability (performance didn't change). The raw &lt;a href="https://fio.readthedocs.io/en/latest/fio_doc.html"&gt;Fio&lt;/a&gt; benchmark data can be found &lt;a href="https://louwrentius.com/files/benchmarkdata.tgz"&gt;here&lt;/a&gt; (.tgz).&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/cheapssd01.png"&gt;&lt;img alt="chart" src="https://louwrentius.com/static/images/cheapssd01.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;click for a larger image&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;It's funny to me that some cheap SSDs initially perform way better than the more expensive Crucial 1TB SSD&lt;sup id="fnref:crucial"&gt;&lt;a class="footnote-ref" href="#fn:crucial"&gt;5&lt;/a&gt;&lt;/sup&gt;. As soon as their SLC cache runs out, the Crucial 1TB has the last laugh: it shows the best sustained throughput, beating all cheaper drives, though the Kingston A400 comes close. &lt;/p&gt;
&lt;p&gt;Of the cheap SSDs, only the Kingston shows decent sustained write speed, at around 100 MB/s, with no intermittent drops in performance. The ADATA, PNY and Verbatim SSDs show flaky behaviour and basically terrible sustained write performance. But make no mistake: I would not call the performance of the Kingston SSD, nor the Crucial SSD - added as a reference - 'good' by any definition of that word. Even the Kingston can't saturate gigabit Ethernet.&lt;/p&gt;
&lt;p&gt;The bandwidth alone doesn't tell the whole story. The latency or responsiveness of the SSDs is also significantly impacted:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/cheapssd03.png"&gt;&lt;img alt="chart" src="https://louwrentius.com/static/images/cheapssd03.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;click for a larger image&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The Crucial 1TB SSD shows the best latency overall, followed by the Kingston SSD. The rest of the cheap SSDs show quite high latency spikes and very high latency overall, even when some of the spikes settle, as for the ADATA SSD. When latency is measured in seconds, things are bad.&lt;/p&gt;
&lt;p&gt;To put things a bit in perspective, let's compare these results to a Toshiba 8 TB 7200 RPM hard drive I had lying around.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/cheapssd02.png"&gt;&lt;img alt="chart" src="https://louwrentius.com/static/images/cheapssd02.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;click for a larger image&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The hard drive shows better write throughput and latency&lt;sup id="fnref:funny"&gt;&lt;a class="footnote-ref" href="#fn:funny"&gt;6&lt;/a&gt;&lt;/sup&gt; than most of the tested SSDs. Granted, during the first few minutes the cheap SSDs tend to be faster (the Kingston and Crucial aside), but how much does that matter?&lt;/p&gt;
&lt;p&gt;As we've shown the performance of a hard drive to contrast the terrible write performance of the cheap SSDs, it's time to also compare them to a more expensive, higher-tier SSD. &lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/cheapssd05.png"&gt;&lt;img alt="chart" src="https://louwrentius.com/static/images/cheapssd05.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;click for a larger image&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I bought this Samsung SSD in 2019 for €137, so that's quite a different price point. I think the graph speaks for itself, especially considering that this graph is not truncated: it shows the full drive write.&lt;/p&gt;
&lt;h3&gt;Evaluation &amp;amp; conclusion&lt;/h3&gt;
&lt;p&gt;One of the funnier conclusions to draw is that it's better to use a hard drive than cheap SSDs if you need to ingest a lot of data. Even the Crucial 1TB SSD could not keep up with the HDD.&lt;/p&gt;
&lt;p&gt;A more interesting conclusion is that the 1TB SSD didn't perform that much better than the small, cheaper SSDs. Or to put it differently: although the performance of the small, cheap SSDs is not representative of the larger SSD, it is still in the same ballpark. I don't think it's a coincidence that the Kingston SSD came very close to the performance of the Crucial SSD, as it's the most 'expensive' of the cheap drives.&lt;/p&gt;
&lt;p&gt;In the end, my intent was to demonstrate with actual benchmarks how cheap SSDs show bad sustained write performance, and I think I succeeded. I hope it helps people understand that good SSD write performance is not a given, especially for cheaper drives.&lt;/p&gt;
&lt;p&gt;The Hacker News discussion of this blog post can be &lt;a href="https://news.ycombinator.com/item?id=35325883"&gt;found here&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Disclaimer&lt;/h3&gt;
&lt;p&gt;I'm not sponsored in any way. All mentioned products have been bought with my own money.&lt;/p&gt;
&lt;p&gt;The graphs are created with &lt;a href="https://github.com/louwrentius/fio-plot"&gt;fio-plot&lt;/a&gt;, a tool I've made and maintain.
The benchmarks have been performed with bench-fio, a tool included with fio-plot, to automate benchmarking with Fio.&lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:sata"&gt;
&lt;p&gt;As I don't have a test system with NVMe, I had to use SATA-based SSDs. The fact that the SATA interface was not the limiting factor in any of the tests, is foreboding.&amp;#160;&lt;a class="footnote-backref" href="#fnref:sata" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:longevity"&gt;
&lt;p&gt;As a general note, I think the vast majority of users should not worry about SSD longevity in general. Only people with high-volume write workloads should keep an eye on write endurance of SSD and buy a suitable product.&amp;#160;&lt;a class="footnote-backref" href="#fnref:longevity" title="Jump back to footnote 2 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:vnand"&gt;
&lt;p&gt;Instead of packing the bits really densely together in a cell horizontally, the bits are stacked vertically, saving horizontal space. This allows for higher data densities in the same footprint.&amp;#160;&lt;a class="footnote-backref" href="#fnref:vnand" title="Jump back to footnote 3 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:dynamic"&gt;
&lt;p&gt;Some SSDs have a static SLC cache, but others size the SLC cache according to how full the SSD is. When the SSD starts to fill up, the SLC cache size is reduced.&amp;#160;&lt;a class="footnote-backref" href="#fnref:dynamic" title="Jump back to footnote 4 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:crucial"&gt;
&lt;p&gt;After around 45-50 minutes of testing, performance of the Crucial MX 500 also started to drop to around 40 MB/s and fluctuate up and down. &lt;a href="https://louwrentius.com/static/images/cheapssd06.png"&gt;Evidence&lt;/a&gt;.&amp;#160;&lt;a class="footnote-backref" href="#fnref:crucial" title="Jump back to footnote 5 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:funny"&gt;
&lt;p&gt;it's so funny to me that a hard drive beats an SSD on latency.&amp;#160;&lt;a class="footnote-backref" href="#fnref:funny" title="Jump back to footnote 6 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Storage"></category><category term="storage"></category></entry><entry><title>How to setup a local or private Ubuntu Mirror</title><link href="https://louwrentius.com/how-to-setup-a-local-or-private-ubuntu-mirror.html" rel="alternate"></link><published>2023-01-18T12:00:00+01:00</published><updated>2023-01-18T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2023-01-18:/how-to-setup-a-local-or-private-ubuntu-mirror.html</id><summary type="html">&lt;h2&gt;Preface&lt;/h2&gt;
&lt;p&gt;In this article I provide instructions on how to set up a local Ubuntu mirror using &lt;a href="https://help.ubuntu.com/community/Debmirror"&gt;&lt;em&gt;debmirror&lt;/em&gt;&lt;/a&gt;. To set expectations: the mirror will work as intended and distribute packages and updates, but a &lt;em&gt;do-release&lt;/em&gt; upgrade from one major version of Ubuntu to the next won't work.&lt;/p&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;By default, Ubuntu …&lt;/p&gt;</summary><content type="html">&lt;h2&gt;Preface&lt;/h2&gt;
&lt;p&gt;In this article I provide instructions on how to set up a local Ubuntu mirror using &lt;a href="https://help.ubuntu.com/community/Debmirror"&gt;&lt;em&gt;debmirror&lt;/em&gt;&lt;/a&gt;. To set expectations: the mirror will work as intended and distribute packages and updates, but a &lt;em&gt;do-release&lt;/em&gt; upgrade from one major version of Ubuntu to the next won't work.&lt;/p&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;By default, Ubuntu systems get their updates straight from the internet at &lt;a href="https://archive.ubuntu.com"&gt;archive.ubuntu.com&lt;/a&gt;. In an environment with lots of Ubuntu systems (servers and/or desktops) this can cause a lot of internet traffic as each system needs to download the same updates.&lt;/p&gt;
&lt;p&gt;In an environment like this, it would be more efficient if one system downloaded all Ubuntu updates just &lt;em&gt;once&lt;/em&gt; and distributed them to the clients. In this case, updates are distributed over the local network, removing any strain on the internet link&lt;sup id="fnref:02"&gt;&lt;a class="footnote-ref" href="#fn:02"&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img alt="diagram" src="https://louwrentius.com/static/images/mirror02.png" /&gt;&lt;/p&gt;
&lt;p&gt;We call such a system a local mirror and it's just a web server with sufficient storage to hold the Ubuntu archive (or part of it). A local mirror is especially relevant for sites with limited internet bandwidth, but there are some extra benefits.&lt;/p&gt;
&lt;p&gt;To sum up the main benefits:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Reduced internet bandwidth usage&lt;/li&gt;
&lt;li&gt;Faster update process using the local network (often faster than the internet link)&lt;/li&gt;
&lt;li&gt;Update or install systems even during internet or upstream outage&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The main drawbacks of a local mirror are:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;An extra service to maintain and monitor&lt;/li&gt;
&lt;li&gt;Storage requirement: starts at 1TB&lt;/li&gt;
&lt;li&gt;Initial sync can take a long time depending on internet speed&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Mirror solutions&lt;/h2&gt;
&lt;h3&gt;Ubuntu mirror script&lt;/h3&gt;
&lt;p&gt;This solution is geared towards ISPs or companies who like to run their own regional mirror. It is meant to mirror the entire, unfiltered Ubuntu package archive.&lt;/p&gt;
&lt;p&gt;As of 2023 you should expect 2.5TB for archive.ubuntu.com and also around 2.5 TB for ports.ubuntu.com (ARM/RISCV and others).&lt;/p&gt;
&lt;p&gt;This is a lot of storage and likely not what most environments need. Even so, if this is what you want to run you can consult &lt;a href="https://wiki.ubuntu.com/Mirrors/Scripts"&gt;this web page&lt;/a&gt; and use the script mentioned here. &lt;/p&gt;
&lt;h3&gt;debmirror&lt;/h3&gt;
&lt;p&gt;Based on my own research, the tool Debmirror seems the simplest and most straightforward way to create a local Ubuntu mirror, with a reasonable data footprint of about 480 GB (2023) for both Jammy AMD64 (22.04) and Focal AMD64 (20.04).&lt;/p&gt;
&lt;p&gt;Based on your needs, you can further fine-tune Debmirror to only download the packages that you need for your environment. &lt;/p&gt;
&lt;h3&gt;apt-cacher-ng&lt;/h3&gt;
&lt;p&gt;The tool &lt;a href="https://help.ubuntu.com/community/Apt-Cacher%20NG"&gt;apt-cacher-ng&lt;/a&gt; acts as a caching proxy and only stores updates that are requested by clients. Missing or new updates are only downloaded once the first client requests them, although there seems to be an option to pre-download updates.&lt;/p&gt;
&lt;p&gt;Although I expect a significantly smaller footprint than debmirror, I could not find any information about actual real-life disk usage. &lt;/p&gt;
&lt;h2&gt;Creating an Ubuntu mirror with debmirror&lt;/h2&gt;
&lt;p&gt;Although apt-cacher-ng is quite a capable solution with many additional features, I feel that a simple mirror solution like debmirror is easier to set up and maintain. This article will therefore focus on debmirror.&lt;/p&gt;
&lt;h2&gt;Preparation&lt;/h2&gt;
&lt;h3&gt;1 - Computer&lt;/h3&gt;
&lt;p&gt;First of all we need a computer - which can be either physical or virtual - that can act as the local mirror. I've used a Raspberry Pi 4B+ as a mirror with an external USB hard drive and it can saturate a local 1 Gbit network with ease.&lt;/p&gt;
&lt;h3&gt;2 - 1TB storage capacity (minimum)&lt;/h3&gt;
&lt;p&gt;I'm mirroring Ubuntu 22.04 and 20.04 for AMD64 architecture and that uses around 480 GB (2023). For ARM64, you should expect a similar storage footprint. There should be some space available for future growth so that's why I recommend to have at least 1 TB of space available.&lt;/p&gt;
&lt;p&gt;Aside from capacity, you should also think about the importance of redundancy: what if the mirror storage device dies and you have to redownload all data? Would this impact be worth the investment in redundancy / RAID?&lt;/p&gt;
&lt;p&gt;It might even be interesting to use a filesystem (layer) like ZFS or LVM that support snapshots to quickly restore the mirror to a known good state if there has been an issue with a recent sync.&lt;/p&gt;
&lt;h3&gt;3 - Select a local public Ubuntu archive&lt;/h3&gt;
&lt;p&gt;It's best to sync your local mirror with a &lt;a href="https://launchpad.net/ubuntu/+archivemirrors"&gt;public Ubuntu archive&lt;/a&gt; close to your physical location. This provides the best internet performance and you also reduce the strain on the global archive. Use the linked mirror list to pick the best mirror for your location.&lt;/p&gt;
&lt;p&gt;In my case, I used nl.archive.ubuntu.com as I'm based in The Netherlands. &lt;/p&gt;
&lt;h2&gt;Ubuntu Mirror configuration&lt;/h2&gt;
&lt;h3&gt;01 - Add the storage device / volume to the fstab&lt;/h3&gt;
&lt;p&gt;If you haven't done so already, make sure you create a directory as a mountpoint for the storage we will use for the mirror.&lt;/p&gt;
&lt;p&gt;In my case I've created the /mirror directory...&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mkdir /mirror
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;... and updated the fstab like this (example!):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;/dev/disk/by-uuid/154d28fb-83d0-4848-ac1d-da1420252422 /mirror xfs noatime 0 0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I recommend using the by-uuid or by-id path for mounting the storage device, as it's the most stable, and don't forget to use the correct filesystem type (xfs/ext4).&lt;/p&gt;
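To find the UUID for the fstab entry, blkid or lsblk will show it. A minimal sketch - the device name is a placeholder and the UUID below is just the example value from the fstab line above:

```shell
# Show candidate devices and their UUIDs (run on the mirror host):
#   blkid /dev/sda1
#   lsblk -o NAME,FSTYPE,UUID
# Then build the fstab line from the UUID you found:
UUID=154d28fb-83d0-4848-ac1d-da1420252422   # example value from above
FSTAB_LINE="/dev/disk/by-uuid/$UUID /mirror xfs noatime 0 0"
echo "$FSTAB_LINE"
```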
&lt;p&gt;Now we can issue:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mount /mirror
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;02 - Install required software&lt;/h3&gt;
&lt;p&gt;We need a webserver installed on the mirror to serve the deb packages to the clients. Installation is straightforward and no further configuration is required. In this example I'm using Apache2 but you can use any webserver you're comfortable with.&lt;/p&gt;
&lt;p&gt;If you want to synchronise with the upstream mirror using regular HTTP, you don't need additional software.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;apt-get update
apt install apache2 debmirror gnupg xz-utils
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I think using rsync for synchronisation is more efficient and faster, but you have to configure your firewall to allow outbound traffic to TCP port 873 (which is outside the scope of this tutorial).&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;apt install rsync
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;: make sure you run debmirror on a 20.04 or 22.04 system, as older versions don't support current Ubuntu mirrors and some required files won't be downloaded.&lt;/p&gt;
&lt;h3&gt;03 - Creating file paths&lt;/h3&gt;
&lt;p&gt;I've created this directory structure to host my local mirror repos.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;/mirror/
├── debmirror
│   ├── amd64
│   │   ├── dists
│   │   ├── pool
│   │   └── project
│   └── mirrorkeyring
└── scripts


mkdir /mirror/debmirror
mkdir /mirror/debmirror/amd64
mkdir /mirror/debmirror/mirrorkeyring
mkdir /mirror/scripts
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The folders within the amd64 directory will be created by debmirror so they don't have to be created in advance.&lt;/p&gt;
&lt;h3&gt;04 - Install GPG keyring&lt;/h3&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;gpg&lt;span class="w"&gt; &lt;/span&gt;--no-default-keyring&lt;span class="w"&gt; &lt;/span&gt;--keyring&lt;span class="w"&gt; &lt;/span&gt;/mirror/debmirror/mirrorkeyring/trustedkeys.gpg&lt;span class="w"&gt; &lt;/span&gt;--import&lt;span class="w"&gt; &lt;/span&gt;/usr/share/keyrings/ubuntu-archive-keyring.gpg
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;05 - Create symlinks&lt;/h3&gt;
&lt;p&gt;We need to create symlinks in the apache2 /var/www/html directory that point to our mirror like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;/var/www/html
ln&lt;span class="w"&gt; &lt;/span&gt;-s&lt;span class="w"&gt; &lt;/span&gt;/mirror/debmirror/amd64&lt;span class="w"&gt; &lt;/span&gt;ubuntu
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;06 - Configure debmirror&lt;/h3&gt;
&lt;p&gt;Debmirror is just a command-line tool that takes a lot of arguments. If we want to run this tool daily to keep our local mirror in sync, it's best to use a wrapper script that can be called by cron.&lt;/p&gt;
&lt;p&gt;Such a wrapper script is provided by &lt;a href="https://help.ubuntu.com/community/Debmirror"&gt;this page&lt;/a&gt; and I have included my own customised version &lt;a href="https://louwrentius.com/files/debmirroramd64.sh.txt"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You can download this script and place it in /mirror/scripts like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;/mirror/scripts
wget&lt;span class="w"&gt; &lt;/span&gt;https://louwrentius.com/files/debmirroramd64.sh.txt&lt;span class="w"&gt; &lt;/span&gt;-O&lt;span class="w"&gt; &lt;/span&gt;debmirroramd64.sh&lt;span class="w"&gt; &lt;/span&gt;
chmod&lt;span class="w"&gt; &lt;/span&gt;+x&lt;span class="w"&gt; &lt;/span&gt;debmirroramd64.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Now we need to edit this script and change some parameters to fit your specific requirements. The changes I've made compared to the example are:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nb"&gt;export&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;GNUPGHOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/mirror/debmirror/mirrorkeyring
&lt;span class="nv"&gt;release&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;focal,focal-security,focal-updates,focal-backports,jammy,jammy-security,jammy-updates,jammy-backports
&lt;span class="nv"&gt;server&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nl.archive.ubuntu.com
&lt;span class="nv"&gt;proto&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rsync
&lt;span class="nv"&gt;outPath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/mirror/debmirror/amd64
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The Ubuntu installer ISOs for 20.04 and 22.04 seem to require the -backports releases too, so those are included.&lt;/p&gt;
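For reference, the parameters above roughly correspond to the debmirror invocation sketched below. This is not a copy of the wrapper script: the exact option set and the :ubuntu rsync module are assumptions, so verify them against your own copy of the script. The command is only printed here, not executed.

```shell
#!/bin/sh
# Build and print (not run) the approximate debmirror command line.
DISTS=focal,focal-security,focal-updates,focal-backports
DISTS=$DISTS,jammy,jammy-security,jammy-updates,jammy-backports
DEBMIRROR_CMD="debmirror /mirror/debmirror/amd64 \
  --arch=amd64 --dist=$DISTS \
  --section=main,restricted,universe,multiverse \
  --method=rsync --host=nl.archive.ubuntu.com --root=:ubuntu \
  --keyring=/mirror/debmirror/mirrorkeyring/trustedkeys.gpg \
  --nosource --progress"
echo "$DEBMIRROR_CMD"
```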
&lt;p&gt;&lt;strong&gt;Limitations&lt;/strong&gt;
I've not been able (yet) to make the do-release-upgrade process work to upgrade a system from focal to jammy. I found this &lt;a href="https://makandracards.com/makandra/12439-setup-an-ubuntu-mirror-that-enables-local-release-upgrades"&gt;old resource&lt;/a&gt; but those instructions don't seem to work for me.&lt;/p&gt;
&lt;h3&gt;07 - Limiting bandwidth&lt;/h3&gt;
&lt;p&gt;The script by default doesn't provide a way to limit rsync bandwidth usage. In my script, I've added some lines to make bandwidth limiting work as an option. &lt;/p&gt;
&lt;p&gt;A new variable is added that must be uncommented and can be set to the desired limit. In this case, 1000 means 1000 kilobytes per second.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;bwlimit=1000
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You also need to uncomment this line:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;--rsync-options &amp;quot;-aIL --partial --bwlimit=$bwlimit&amp;quot; \
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;08 - Initial sync&lt;/h3&gt;
&lt;p&gt;Run the initial sync first, before configuring the periodic cron job for the daily sync. The first sync can take a long time and may interfere with the cron job, so only enable the cron job once the initial sync has completed.&lt;/p&gt;
&lt;p&gt;As the initial sync can take a while, I like to run this job with screen. If you accidentally close the terminal, the rsync process isn't interrupted (although that's not a big deal if it happens; it just continues where it left off).&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;apt&lt;span class="w"&gt; &lt;/span&gt;install&lt;span class="w"&gt; &lt;/span&gt;screen
screen&lt;span class="w"&gt; &lt;/span&gt;/mirror/scripts/debmirroramd64.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;09 - Setup cron job&lt;/h3&gt;
&lt;p&gt;When the initial sync is completed we can configure the cron job to sync periodically.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;*&lt;span class="w"&gt; &lt;/span&gt;*&lt;span class="w"&gt; &lt;/span&gt;*&lt;span class="w"&gt; &lt;/span&gt;/mirror/scripts/debmirroramd64.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;In this case the sync runs daily at 1 AM.&lt;/p&gt;
&lt;p&gt;The mirror includes all security updates so depending on your environment, it's recommended to synchronise the mirror at least daily.&lt;/p&gt;
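If a sync ever takes longer than a day, the next cron invocation would start while the previous one is still running. One way to guard against that - assuming flock from util-linux is available - is to wrap the script in the crontab entry; the line is printed below rather than installed:

```shell
# Crontab line: flock -n skips the run if the previous sync still holds the lock.
CRON_LINE='0 1 * * * flock -n /var/lock/debmirror.lock /mirror/scripts/debmirroramd64.sh'
echo "$CRON_LINE"
```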
&lt;h3&gt;10 - Client configuration&lt;/h3&gt;
&lt;p&gt;All clients should point to your local mirror in their /etc/apt/sources.list file. You can use the IP address of your mirror, but if you run a local DNS, it's not much effort to set up a DNS record like mirror.your.domain and have all clients connect to that domain name.&lt;/p&gt;
&lt;p&gt;This is the /etc/apt/sources.list for the clients:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;deb&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;http://mirror.your.domain/ubuntu&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kp"&gt;RELEASE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kp"&gt;main&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kp"&gt;restricted&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kp"&gt;universe&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kp"&gt;multiverse&lt;/span&gt;
&lt;span class="k"&gt;deb&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;http://mirror.your.domain/ubuntu&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kp"&gt;RELEASE-security&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kp"&gt;main&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kp"&gt;restricted&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kp"&gt;universe&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kp"&gt;multiverse&lt;/span&gt;
&lt;span class="k"&gt;deb&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;http://mirror.your.domain/ubuntu&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kp"&gt;RELEASE-updates&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kp"&gt;main&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kp"&gt;restricted&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kp"&gt;universe&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kp"&gt;multiverse&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The RELEASE value should be changed to the appropriate Ubuntu release, like bionic, focal or jammy.&lt;/p&gt;
&lt;p&gt;If you have an environment with a lot of Ubuntu systems, this configuration is likely provisioned with tools like Ansible.&lt;/p&gt;
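To avoid editing the RELEASE placeholder by hand on every host, the three lines can also be generated. A small sketch, with mirror.your.domain as the placeholder hostname from the listing above:

```shell
# Generate sources.list entries for one release; set REL per host,
# or derive it on the client with: REL=$(lsb_release -sc)
REL=jammy
MIRROR="http://mirror.your.domain/ubuntu"
SOURCES=$(for suite in "$REL" "$REL-security" "$REL-updates"; do
    echo "deb $MIRROR $suite main restricted universe multiverse"
done)
echo "$SOURCES"
```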
&lt;h3&gt;11 - Monitoring&lt;/h3&gt;
&lt;p&gt;Although system monitoring is out of scope for this blog post, there are two topics to monitor:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;disk space usage (alert if space is running out)&lt;/li&gt;
&lt;li&gt;successful synchronisation script execution (alert if the script fails)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If you don't monitor the synchronisation process, the mirror will become outdated and will lack the latest security updates.&lt;/p&gt;
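As a sketch of the second check: a freshness test on the timestamp of a file that a successful sync updates, such as a dists Release file. The path and threshold are assumptions to adapt; the demo below runs against a temporary file so it can be tried anywhere.

```shell
#!/bin/sh
# Warn when the given file is older than the maximum age in seconds.
# On the mirror you would point this at e.g.
# /mirror/debmirror/amd64/dists/jammy/Release
mirror_fresh() {
    file=$1; max_age=$2
    [ -f "$file" ] || { echo "CRITICAL: $file missing"; return 2; }
    age=$(( $(date +%s) - $(stat -c %Y "$file") ))
    if [ "$age" -le "$max_age" ]; then
        echo "OK: last sync ${age}s ago"
    else
        echo "WARNING: last sync ${age}s ago"
    fi
}
touch /tmp/Release.demo
STATUS=$(mirror_fresh /tmp/Release.demo 172800)   # 48-hour threshold
echo "$STATUS"
```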
&lt;h2&gt;Closing words&lt;/h2&gt;
&lt;p&gt;As many environments are either cloud-native or moving towards a cloud environment, running a local mirror seems less and less relevant. Yet there may still be environments that could benefit from a local mirror setup. Maybe this guide is helpful.&lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:02"&gt;
&lt;p&gt;You may notice that cloud providers actually also run their own Ubuntu archive mirror to reduce the load on their upstream and peering links. When you deploy a standard virtual machine based on Ubuntu, it is by default configured to use that local mirror.&amp;#160;&lt;a class="footnote-backref" href="#fnref:02" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Linux"></category><category term="Linux"></category></entry><entry><title>I resurrected my Dutch movie review site from 2003</title><link href="https://louwrentius.com/i-resurrected-my-dutch-movie-review-site-from-2003.html" rel="alternate"></link><published>2022-06-09T12:00:00+02:00</published><updated>2022-06-09T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2022-06-09:/i-resurrected-my-dutch-movie-review-site-from-2003.html</id><summary type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Between 2003 and 2006, I ran a &lt;em&gt;Dutch&lt;/em&gt; movie review site called &lt;a href="https://moevie.nl"&gt;moevie.nl&lt;/a&gt;.&lt;sup id="fnref:name"&gt;&lt;a class="footnote-ref" href="#fn:name"&gt;1&lt;/a&gt;&lt;/sup&gt;
I built the site and wrote the reviews. It never made any money. It cost me money to host, and it cost me a lot of time writing reviews, but I remember enjoying writing …&lt;/p&gt;</summary><content type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Between 2003 and 2006, I ran a &lt;em&gt;Dutch&lt;/em&gt; movie review site called &lt;a href="https://moevie.nl"&gt;moevie.nl&lt;/a&gt;.&lt;sup id="fnref:name"&gt;&lt;a class="footnote-ref" href="#fn:name"&gt;1&lt;/a&gt;&lt;/sup&gt;
I built the site and wrote the reviews. It never made any money. It cost me money to host, and it cost me a lot of time writing reviews, but I remember enjoying writing reviews about films I liked.&lt;/p&gt;
&lt;p&gt;The gimmick of the site was that the reviews had two parts. The first part was spoiler-free, just giving a recommendation with some context so you could make up your own mind. The second part contained a reflection on the movie, which included spoilers.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/moevieoldgroot.webp"&gt;&lt;img alt="moevie" src="https://louwrentius.com/static/images/moeviecropped.webp" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Even back then, the site didn't win any design awards (from archive.org - click to enlarge)&lt;sup id="fnref:sorry"&gt;&lt;a class="footnote-ref" href="#fn:sorry"&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I started building the site a few months after finishing college (IT) in 2002 as I felt inept and had little confidence. Building something tangible felt like a good way to build up and demonstrate skills. And I had something to say about movies.&lt;/p&gt;
&lt;p&gt;Although moevie.nl did not help me gain employment as far as I know, it was fun while it lasted. At some point, I didn't get much joy out of writing movie reviews and I let the site die. &lt;/p&gt;
&lt;p&gt;I did keep backups of the database, the code and the pictures though. And now, after 18+ years, I decided to &lt;a href="https://moevie.nl"&gt;resurrect the site&lt;/a&gt;, including all (old) reviews.&lt;/p&gt;
&lt;h2&gt;Why resurrect a dead website gone for 16+ years?&lt;/h2&gt;
&lt;p&gt;Rebuilding the site was just a way to spend time, a small hobby project. Something to be busy with. The second reason is some kind of misplaced nostalgia: I sometimes regret shutting down the site, wondering what could have been if I had persevered.&lt;/p&gt;
&lt;h2&gt;Losing and regaining the domain&lt;/h2&gt;
&lt;p&gt;Back in 2006, my hosting provider (non-profit with just a few servers) abruptly stopped operating due to hardware failure &lt;sup id="fnref:neglect"&gt;&lt;a class="footnote-ref" href="#fn:neglect"&gt;3&lt;/a&gt;&lt;/sup&gt; and I was forced to move my domain to another company. At that time, private citizens in The Netherlands could not register an .nl domain, only businesses could, so that was a bit of a hassle.&lt;/p&gt;
&lt;p&gt;Not long thereafter however, I decided to let the domain expire. It was quickly scooped up by 'domain resellers'. Years later I decided that I wanted moevie.nl back, but the sellers always asked insane amounts of money.&lt;/p&gt;
&lt;p&gt;In 2019, I visited moevie.nl on a whim. To my surprise it didn't resolve anymore, the domain was available! I quickly scooped it up, but I didn't do much with it for a long time, until now.&lt;/p&gt;
&lt;h2&gt;Rebuilding the site&lt;/h2&gt;
&lt;p&gt;I really wanted to preserve the aesthetic of moevie.nl as it was back then. Especially in the context of modern web design, it does stand out - like a sore thumb - but still, I had a goal.&lt;/p&gt;
&lt;p&gt;Having the code and database dump is one thing, but it doesn't tell you what it actually looked like in 2003-2006. I could have tried to get the old (PHP4) code working, but I just didn't feel like it.&lt;/p&gt;
&lt;p&gt;Instead, I chose to visit &lt;a href="https://archive.org"&gt;Archive.org&lt;/a&gt; and indeed, it captured old snapshots of my site back in 2006. So those were of great help. The screenshots at the top of this blog post are lifted from this &lt;a href="https://web.archive.org/web/20060102013434/http://www.moevie.nl/"&gt;page&lt;/a&gt; on archive.org. This snapshot was taken just before I decided to close the site. &lt;/p&gt;
&lt;h2&gt;The challenge of mobile device screens&lt;/h2&gt;
&lt;p&gt;To set the stage a bit: the rise and fall of moevie.nl happened a year before the iPhone was first announced. Smartphones from Blackberry were popular. I had a &lt;a href="https://en.wikipedia.org/wiki/Palm_Vx"&gt;Palm VX&lt;/a&gt; PDA and later a &lt;a href="https://nl.wikipedia.org/wiki/IPAQ"&gt;HP Compaq PDA&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Most people didn't have mobile data connections so as far as I know, the mobile web really wasn't a thing yet.&lt;/p&gt;
&lt;p&gt;So moevie.nl was primarily developed for the desktop. When I thought I was finished rebuilding the site, I quickly discovered that the site was &lt;em&gt;unusable&lt;/em&gt; on my iPhone and way too small and finicky to use on my iPad.&lt;/p&gt;
&lt;p&gt;For somebody with no experience with modern web development, it was quite a steep learning curve discovering how to deal with the various screen sizes in CSS&lt;sup id="fnref:issues"&gt;&lt;a class="footnote-ref" href="#fn:issues"&gt;4&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;A very large part of the entire effort of rebuilding the site was spent on making the site workable on all the different device sizes. Fortunately, iOS device simulators were of great help on that front.&lt;/p&gt;
&lt;h2&gt;Technology&lt;/h2&gt;
&lt;p&gt;I've recreated moevie.nl with Python and Django. For the database, I chose PostgreSQL, although that is total overkill; I could have used SQLite without any issues.&lt;/p&gt;
&lt;p&gt;I chose Django because I'm quite familiar with Python, so that was a straightforward choice. I selected PostgreSQL mostly just to regain some knowledge about it.&lt;/p&gt;
&lt;h2&gt;Hosting&lt;/h2&gt;
&lt;p&gt;I'm self-hosting moevie.nl on the same Raspberry Pi 4 that is hosting this blog.
This Raspberry Pi is powered by &lt;a href="https://louwrentius.com/this-blog-is-now-running-on-solar-power.html"&gt;the sun&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;So moevie.nl is solar-powered during the day and battery-powered during the night. &lt;/p&gt;
&lt;h2&gt;Closing words&lt;/h2&gt;
&lt;p&gt;I'm not sure if I really want to start writing movie reviews again, knowing full well how much effort it takes. Also I'm not sure I have anything to say about movies anymore, but we'll see.&lt;/p&gt;
&lt;p&gt;The overall experience of rebuilding the site was frustrating at times due to my severe lack of experience and knowledge. Now that the site is done and working, even on mobile devices, that feels good.&lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:name"&gt;
&lt;p&gt;The name is based on the phonetic pronunciation in Dutch of the English word 'movie'.&amp;#160;&lt;a class="footnote-backref" href="#fnref:name" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:sorry"&gt;
&lt;p&gt;sorry for the language but I could not find a better screenshot.&amp;#160;&lt;a class="footnote-backref" href="#fnref:sorry" title="Jump back to footnote 2 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:neglect"&gt;
&lt;p&gt;I was neglecting the site at that time due to losing motivation.&amp;#160;&lt;a class="footnote-backref" href="#fnref:neglect" title="Jump back to footnote 3 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:issues"&gt;
&lt;p&gt;I admit I only tested with iOS devices so Android-based smartphones could experience issues.&amp;#160;&lt;a class="footnote-backref" href="#fnref:issues" title="Jump back to footnote 4 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Uncategorized"></category><category term="web"></category></entry><entry><title>An ode to the 10,000 RPM Western Digital (Veloci)Raptor</title><link href="https://louwrentius.com/an-ode-to-the-10000-rpm-western-digital-velociraptor.html" rel="alternate"></link><published>2021-10-30T12:00:00+02:00</published><updated>2021-10-30T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2021-10-30:/an-ode-to-the-10000-rpm-western-digital-velociraptor.html</id><summary type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Back in 2004, I visited a now bankrupt Dutch computer store called MyCom&lt;sup id="fnref:bankrupt"&gt;&lt;a class="footnote-ref" href="#fn:bankrupt"&gt;1&lt;/a&gt;&lt;/sup&gt;, located at the Kinkerstraat in Amsterdam. I was there to buy a Western Digital Raptor model WD740, with 74 GB of capacity, running at 10,000 RPM.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/10k/mywd10k_l.jpg"&gt;&lt;img alt="mywd" src="https://louwrentius.com/static/images/10k/mywd10k_s.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;When I bought this drive, we were still …&lt;/p&gt;</summary><content type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Back in 2004, I visited a now bankrupt Dutch computer store called MyCom&lt;sup id="fnref:bankrupt"&gt;&lt;a class="footnote-ref" href="#fn:bankrupt"&gt;1&lt;/a&gt;&lt;/sup&gt;, located at the Kinkerstraat in Amsterdam. I was there to buy a Western Digital Raptor model WD740, with 74 GB of capacity, running at 10,000 RPM.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/10k/mywd10k_l.jpg"&gt;&lt;img alt="mywd" src="https://louwrentius.com/static/images/10k/mywd10k_s.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;When I bought this drive, we were still in the middle of the transition from the &lt;a href="https://en.wikipedia.org/wiki/Parallel_ATA"&gt;PATA interface&lt;/a&gt; to SATA&lt;sup id="fnref:sata"&gt;&lt;a class="footnote-ref" href="#fn:sata"&gt;2&lt;/a&gt;&lt;/sup&gt;. My raptor hard drive still had a molex connector because older computer power supplies didn't have SATA power connectors.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/10k/mywd10kconnector_l.jpg"&gt;&lt;img alt="olds" src="https://louwrentius.com/static/images/10k/mywd10kconnector_s.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;You may notice that I eventually managed to break off the plastic tab of the SATA power connector. Fortunately, I could still power the drive through the Molex connector. &lt;/p&gt;
&lt;p&gt;A later version of the same drive came with the Molex connector disabled, as you can see below.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/10k/wd10knew_l.jpg"&gt;&lt;img alt="news" src="https://louwrentius.com/static/images/10k/wd10knew_s.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Why did the Raptor matter so much?&lt;/h2&gt;
&lt;p&gt;I was very eager to get this drive as it was quite a bit faster than any consumer drive on the market at that time. &lt;/p&gt;
&lt;p&gt;This drive not only made your computer start up faster, but it made it much more responsive. At least, it really felt like that to me at the time.&lt;/p&gt;
&lt;p&gt;The faster spinning drive wasn't so much about more throughput in MB/s - although that improved too - it was all about reduced &lt;em&gt;latency&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;A drive that spins faster&lt;sup id="fnref:hdperf"&gt;&lt;a class="footnote-ref" href="#fn:hdperf"&gt;3&lt;/a&gt;&lt;/sup&gt; can complete more I/O operations per second, or IOPS&lt;sup id="fnref:understanding"&gt;&lt;a class="footnote-ref" href="#fn:understanding"&gt;4&lt;/a&gt;&lt;/sup&gt;. It can do more work in the same amount of time, because each operation takes less time compared to slower-spinning drives.&lt;/p&gt;
&lt;p&gt;The Raptor - mostly focussed on desktop applications&lt;sup id="fnref:purpose"&gt;&lt;a class="footnote-ref" href="#fn:purpose"&gt;5&lt;/a&gt;&lt;/sup&gt; - brought a lot of relief for professionals and consumer enthusiasts alike. Hard disk performance, notably &lt;em&gt;latency&lt;/em&gt;, was one of the big performance bottlenecks at the time.&lt;/p&gt;
&lt;p&gt;For the vast majority of consumers or employees this bottleneck would start to be alleviated only well after 2010 when SSDs slowly started to become standard in new computers.&lt;/p&gt;
&lt;p&gt;And that's mostly also the point of SSDs: their I/O operations are measured in microseconds instead of milliseconds. It's not that throughput (MB/s) doesn't matter, but for most interactive applications, you care about latency. That's what makes an old computer feel like new when you swap out the hard drive for an SSD.&lt;/p&gt;
&lt;h2&gt;The Raptor as a boot drive&lt;/h2&gt;
&lt;p&gt;For consumers and enthusiasts, the Raptor was an amazing boot drive. The 74 GB model was large enough to hold the operating system and applications. The bulk of the data would still be stored on a second hard drive, connected either through SATA or even still through PATA.&lt;/p&gt;
&lt;p&gt;Running your computer with a Raptor as the boot drive resulted in lower boot times and application load times. But most of all, the system &lt;em&gt;felt&lt;/em&gt; more responsive.&lt;/p&gt;
&lt;p&gt;And despite the 10,000 RPM speed of the platters, it wasn't that much louder than regular drives at the time&lt;sup id="fnref:loud"&gt;&lt;a class="footnote-ref" href="#fn:loud"&gt;7&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/M2kAAV_kDH8" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen&gt;&lt;/iframe&gt;

&lt;p&gt;In the video above, a Raspberry Pi4 boots from a 74 GB Raptor hard drive. &lt;/p&gt;
&lt;h2&gt;Alternatives to the raptor at that time&lt;/h2&gt;
&lt;p&gt;To put things into perspective, 10,000 RPM drives were quite common even in 2003/2004 for usage in servers. The server-oriented drives used the &lt;a href="https://en.wikipedia.org/wiki/Parallel_SCSI"&gt;SCSI&lt;/a&gt; interface/protocol which was incompatible with the on-board IDE/SATA controllers.&lt;/p&gt;
&lt;p&gt;Some enthusiasts - who had the means to do so - bought both the controller&lt;sup id="fnref:SCSI"&gt;&lt;a class="footnote-ref" href="#fn:SCSI"&gt;8&lt;/a&gt;&lt;/sup&gt; and one or more SCSI 'server' drives to increase the performance of their computer. They could even get 15,000 RPM hard drives! These drives, however, were extremely loud and had even less capacity.&lt;/p&gt;
&lt;p&gt;The Raptor did perform remarkably well in almost all circumstances, especially those that mattered to consumers and consumer enthusiasts alike. Suddenly you could get SCSI/server performance for consumer prices.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://techreport.com/review/6390/western-digitals-raptor-wd740gd-sata-hard-drive/"&gt;in-depth review&lt;/a&gt; of the WD740 by Techreport really shows how significant the Raptor was.&lt;/p&gt;
&lt;h2&gt;The Velociraptor&lt;/h2&gt;
&lt;p&gt;The Raptor eventually got replaced with the Velociraptor. The Velociraptor had a 2.5" form factor, but it was much thicker than a regular 2.5" laptop drive. Because it spun at 10,000 RPM, the drive would get hot, so it was mounted in an 'icepack' to dissipate the generated heat. This gave the Velociraptor a 3.5" form factor, just like the older Raptor drives.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/10k/velociraptor_l.jpeg"&gt;&lt;img alt="velociraptor" src="https://louwrentius.com/static/images/10k/velociraptor_s.jpeg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;In the video below, a Raspberry Pi4 boots from a 500 GB Velociraptor hard drive.&lt;/p&gt;
&lt;iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/t6DkOhMr6MY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen&gt;&lt;/iframe&gt;

&lt;h2&gt;Benchmarking the (Veloci)raptor&lt;/h2&gt;
&lt;p&gt;Hard drives do well with sequential read/write patterns, but their performance implodes when the data access pattern becomes random. This is due to the mechanical nature of the device. That random access pattern is where 10,000 RPM drives outperform their slower-spinning siblings.&lt;/p&gt;
&lt;p&gt;The chart below shows random 4K read performance, with both IOPS and latency. This is kind of a worst-case benchmark to understand the raw I/O and latency performance of a drive.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/10k/compare01_l.png"&gt;&lt;img alt="fios" src="https://louwrentius.com/static/images/10k/compare01_s.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left;"&gt;Drive ID&lt;/th&gt;
&lt;th&gt;Form Factor&lt;/th&gt;
&lt;th style="text-align: right;"&gt;RPM&lt;/th&gt;
&lt;th style="text-align: right;"&gt;Size (GB)&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;ST9500423AS&lt;/td&gt;
&lt;td&gt;2.5"&lt;/td&gt;
&lt;td style="text-align: right;"&gt;7200&lt;/td&gt;
&lt;td style="text-align: right;"&gt;500&lt;/td&gt;
&lt;td&gt;Seagate laptop hard drive&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;WD740GD-75FLA1&lt;/td&gt;
&lt;td&gt;3.5"&lt;/td&gt;
&lt;td style="text-align: right;"&gt;10,000&lt;/td&gt;
&lt;td style="text-align: right;"&gt;74&lt;/td&gt;
&lt;td&gt;Western Digital Raptor WD740&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;SAMSUNG HD103UJ&lt;/td&gt;
&lt;td&gt;3.5"&lt;/td&gt;
&lt;td style="text-align: right;"&gt;7200&lt;/td&gt;
&lt;td style="text-align: right;"&gt;1000&lt;/td&gt;
&lt;td&gt;Samsung Spinpoint F1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;WDC WD5000HHTZ&lt;/td&gt;
&lt;td&gt;2.5" in 3.5"&lt;/td&gt;
&lt;td style="text-align: right;"&gt;10,000&lt;/td&gt;
&lt;td style="text-align: right;"&gt;500&lt;/td&gt;
&lt;td&gt;Western Digital Velociraptor&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;ST2000DM008&lt;/td&gt;
&lt;td&gt;3.5"&lt;/td&gt;
&lt;td style="text-align: right;"&gt;7200&lt;/td&gt;
&lt;td style="text-align: right;"&gt;2000&lt;/td&gt;
&lt;td&gt;Seagate 3.5" 2TB drive&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;MB1000GCWCV&lt;/td&gt;
&lt;td&gt;3.5"&lt;/td&gt;
&lt;td style="text-align: right;"&gt;7200&lt;/td&gt;
&lt;td style="text-align: right;"&gt;1000&lt;/td&gt;
&lt;td&gt;HP Branded Seagate 1 TB drive&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;I've tested the drives on an IBM M1015 SATA RAID card flashed to IT mode (HBA mode, no RAID firmware). The image is generated with &lt;a href="https://github.com/louwrentius/fio-plot"&gt;fio-plot&lt;/a&gt;, which also comes with a tool to run the &lt;a href="https://github.com/axboe/fio"&gt;fio&lt;/a&gt; benchmarks.&lt;/p&gt;
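For readers who want to reproduce the worst-case test, it boils down to an fio invocation along these lines. The exact job parameters behind the chart aren't listed in this post, so treat this as an approximation; /dev/sdX is a placeholder, and the command is printed rather than executed:

```shell
# Approximate random 4K read benchmark; fill in the device first.
# --readonly guards against accidental writes to the drive under test.
FIO_CMD="fio --name=randread4k --filename=/dev/sdX --rw=randread --bs=4k \
  --iodepth=1 --direct=1 --runtime=60 --time_based --readonly"
echo "$FIO_CMD"
```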
&lt;p&gt;It is quite clear that both 10,000 RPM drives outperform all 7,200 RPM drives, as expected.&lt;/p&gt;
&lt;p&gt;If we compare the original 3.5" Raptor to the 2.5" Velociraptor, the performance increase is significant: 22% more IOPS and 18% lower latency. I think that performance increase is due to a combination of higher data density, the smaller platter size (the read/write head reaches the right spot faster) and maybe better firmware.&lt;/p&gt;
&lt;p&gt;Both the laptop and desktop Seagate drives seem to be a bit slower than theory predicts. The opposite is true for the HP (rebranded Seagate) drive, which seems to perform better than expected for its capacity and rotational speed. I have no idea why that is. I can only speculate that because the HP drive came out of a server, its firmware was tuned for server usage patterns.&lt;/p&gt;
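&lt;p&gt;The gap between the 10,000 RPM drives and the rest follows largely from simple geometry. As a back-of-the-envelope sketch (my own numbers, not part of the benchmark): the average rotational latency is the time of half a revolution, so it depends only on spindle speed.&lt;/p&gt;

```python
# Back-of-the-envelope: on average the platter must spin half a revolution
# before the target sector passes under the read/write head.
def avg_rotational_latency_ms(rpm):
    # 60,000 ms per minute, half a turn on average
    return 0.5 * 60_000 / rpm

for rpm in (5400, 7200, 10_000, 15_000):
    print(f"{rpm:6d} RPM: {avg_rotational_latency_ms(rpm):.2f} ms average rotational latency")
```

&lt;p&gt;Going from 7,200 to 10,000 RPM alone shaves roughly 1.2 ms off every random access, before seek time or data density even enter the picture.&lt;/p&gt;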
&lt;h2&gt;Closing words&lt;/h2&gt;
&lt;p&gt;Although the performance increase of the (veloci)raptor was quite significant, it never gained widespread adoption. Especially when the Raptor first came to market, its primary role was that of a boot drive because of its small capacity.
You still needed a second drive for your data. So the increase in performance came at a significant extra cost.&lt;/p&gt;
&lt;p&gt;The Raptor and Velociraptor are now obsolete. You can get a solid state drive for $20 to $40 and even those budget-oriented SSDs will outperform a (Veloci)raptor many times over.&lt;/p&gt;
&lt;p&gt;If you are interested in more pictures and details, take a look at &lt;a href="https://goughlui.com/2017/12/23/tech-flashback-western-digital-raptor-velociraptor-hard-drives/"&gt;this article&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;This article was discussed on Hacker News &lt;a href="https://news.ycombinator.com/item?id=29049423"&gt;here&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;A Reddit thread about this article can be found &lt;a href="https://www.reddit.com/r/hardware/comments/qjpe0o/an_ode_to_the_10000_rpm_western_digital/"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:bankrupt"&gt;
&lt;p&gt;Mycom, a chain store with quite a few shops in all major cities in The Netherlands, went bankrupt &lt;em&gt;twice&lt;/em&gt;, once in 2015 and finally in 2019.&amp;#160;&lt;a class="footnote-backref" href="#fnref:bankrupt" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:sata"&gt;
&lt;p&gt;We are talking about the first SATA version, with a maximum bandwidth capacity of 150 MB/s. Plenty enough for hard drives at that time.&amp;#160;&lt;a class="footnote-backref" href="#fnref:sata" title="Jump back to footnote 2 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:hdperf"&gt;
&lt;p&gt;https://en.wikipedia.org/wiki/Hard_disk_drive_performance_characteristics&amp;#160;&lt;a class="footnote-backref" href="#fnref:hdperf" title="Jump back to footnote 3 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:understanding"&gt;
&lt;p&gt;https://louwrentius.com/understanding-storage-performance-iops-and-latency.html&amp;#160;&lt;a class="footnote-backref" href="#fnref:understanding" title="Jump back to footnote 4 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:purpose"&gt;
&lt;p&gt;&lt;a href="https://www.extremetech.com/computing/55810-review-western-digital-wd740-raptor"&gt;I read&lt;/a&gt; that WD intended the first Raptor (34 GB version) to be used in low-end servers as a cheaper alternative to SCSI drives . After the adoption of the Raptor by computer enthusiasts and professionals, it seems that Western Digital pivoted, so the next version - the 74 GB I have - was geared more towards desktop usage. That also meant that this 74 GB model got fluid bearings, making it quieter&lt;sup id="fnref:quiet"&gt;&lt;a class="footnote-ref" href="#fn:quiet"&gt;6&lt;/a&gt;&lt;/sup&gt;.&amp;#160;&lt;a class="footnote-backref" href="#fnref:purpose" title="Jump back to footnote 5 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:quiet"&gt;
&lt;p&gt;The 74 GB model is actually a rather quiet drive at idle. Drive activity sounds rather smooth and pleasant, with no rattling.&amp;#160;&lt;a class="footnote-backref" href="#fnref:quiet" title="Jump back to footnote 6 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:loud"&gt;
&lt;p&gt;Please note that the first model, the 37 GB version, used ball bearings instead of fluid bearings, and was reported to be significantly louder.&amp;#160;&lt;a class="footnote-backref" href="#fnref:loud" title="Jump back to footnote 7 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:SCSI"&gt;
&lt;p&gt;Low-end SCSI cards were often used to drive flatbed scanners, Iomega ZIP drives, tape drives and other peripherals, but to benefit from the performance of those server hard drives, you needed a SCSI controller supporting higher bandwidth, and those were more expensive.&amp;#160;&lt;a class="footnote-backref" href="#fnref:SCSI" title="Jump back to footnote 8 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Storage"></category><category term="Storage"></category></entry><entry><title>Raspberry Pi as a router using a single network interface</title><link href="https://louwrentius.com/raspberry-pi-as-a-router-using-a-single-network-interface.html" rel="alternate"></link><published>2021-09-29T12:00:00+02:00</published><updated>2021-09-29T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2021-09-29:/raspberry-pi-as-a-router-using-a-single-network-interface.html</id><summary type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Disclaimer: this article is intended for consumers and hobbyists.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;If you want to run your own router at home, the Raspberry Pi 4 Model B&lt;sup id="fnref:older"&gt;&lt;a class="footnote-ref" href="#fn:older"&gt;1&lt;/a&gt;&lt;/sup&gt; can be an excellent hardware choice:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;it's fairly cheap&lt;/li&gt;
&lt;li&gt;it's fast enough&lt;/li&gt;
&lt;li&gt;it can saturate its gigabit network port&lt;/li&gt;
&lt;li&gt;it is power-efficient&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The …&lt;/p&gt;</summary><content type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Disclaimer: this article is intended for consumers and hobbyists.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;If you want to run your own router at home, the Raspberry Pi 4 Model B&lt;sup id="fnref:older"&gt;&lt;a class="footnote-ref" href="#fn:older"&gt;1&lt;/a&gt;&lt;/sup&gt; can be an excellent hardware choice:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;it's fairly cheap&lt;/li&gt;
&lt;li&gt;it's fast enough&lt;/li&gt;
&lt;li&gt;it can saturate its gigabit network port&lt;/li&gt;
&lt;li&gt;it is power-efficient&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The key problem, it seems, is that it has only &lt;em&gt;one&lt;/em&gt; network interface. If you build a router, you need at least two:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The first interface connected to your internet modem/router (ideally in bridge mode)&lt;/li&gt;
&lt;li&gt;The second interface connected to your home network (probably a switch)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;So if you were to use the Raspberry Pi, you would probably buy a gigabit USB3 NIC for around $20 and be done with it.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/pirouter/pirouter01.png"&gt;&lt;img alt="routersetup" src="https://louwrentius.com/static/images/pirouter/pirouter01s.png" /&gt;&lt;/a&gt;
&lt;em&gt;click on the image for a larger version&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Now, what if I told you that you can build exactly the same setup by using &lt;em&gt;only&lt;/em&gt; the single on-board network interface of the Raspberry Pi 4? &lt;/p&gt;
&lt;p&gt;How is that possible? &lt;/p&gt;
&lt;h2&gt;Introducing VLANs&lt;/h2&gt;
&lt;p&gt;Yes, I'm introducing the reader to a technology that &lt;a href="https://en.wikipedia.org/wiki/Virtual_LAN"&gt;has existed since the '90s&lt;/a&gt;. It is widely used within businesses and other organisations.&lt;/p&gt;
&lt;p&gt;Because I have a sense that this technology is less well-known in circles outside IT operations, I think it may be an interesting topic to discuss.&lt;/p&gt;
&lt;h2&gt;Understanding VLANs&lt;/h2&gt;
&lt;p&gt;VLAN technology allows you to run different, separate networks over the same, single physical wire and on the same, single switch. This saves a lot on network cabling and the number of physical switches required if you want to operate networks that are separate from each other.&lt;/p&gt;
&lt;p&gt;If you want to run traffic from different networks over the same physical wire or switch, how can you identify those different traffic flows? &lt;/p&gt;
&lt;p&gt;With VLAN technology enabled, such network 'packets' are labeled with a &lt;em&gt;tag&lt;/em&gt;. As VLAN technology operates at the Ethernet level, we should strictly not talk about 'packets' but about 'Ethernet frames'. The terminology is not important for understanding the concept, I think.&lt;/p&gt;
&lt;p&gt;It suffices to understand that a tag is put in front of the Ethernet frame that tells any device supporting VLANs to which network a &lt;em&gt;frame&lt;/em&gt; - and thus a packet - belongs.&lt;/p&gt;
&lt;p&gt;This way, network traffic flows can be distinguished from each other. And those tags are nothing fancy: the tag is called a VLAN ID, and it is just a number between 1 and 4094 (IDs 0 and 4095 are reserved)&lt;sup id="fnref:more"&gt;&lt;a class="footnote-ref" href="#fn:more"&gt;2&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
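&lt;p&gt;To make that tag concrete, here is a small sketch (my own illustration; the field layout follows IEEE 802.1Q) of the four extra bytes a switch inserts into a tagged Ethernet frame:&lt;/p&gt;

```python
import struct

# Hypothetical helper (mine, for illustration): build the 4-byte 802.1Q tag
# that sits between the source MAC address and the EtherType of a frame.
def dot1q_tag(vlan_id, priority=0, dei=0):
    # TPID 0x8100 marks the frame as VLAN-tagged; the TCI packs the 3-bit
    # priority, the 1-bit drop-eligible flag and the 12-bit VLAN ID.
    assert vlan_id in range(1, 4095), "IDs 0 and 4095 are reserved"
    tci = priority * 8192 + dei * 4096 + vlan_id  # priority is the top 3 bits
    return struct.pack("!HH", 0x8100, tci)

print(dot1q_tag(10).hex())  # tag for VLAN 10
print(dot1q_tag(20).hex())  # tag for VLAN 20
```

&lt;p&gt;The VLAN ID is a 12-bit field, which is where the limit on the number of distinct VLANs comes from.&lt;/p&gt;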
&lt;h2&gt;Managed switch&lt;/h2&gt;
&lt;p&gt;Now that we understand the concept of VLANs, how do we use it? &lt;/p&gt;
&lt;p&gt;First of all, you need a &lt;em&gt;managed&lt;/em&gt; network switch that supports VLANs.&lt;/p&gt;
&lt;p&gt;The cheapest switch with VLAN support I could find is the TP-LINK TL-SG105E, for around 25 euros or dollars. This is a 5-port switch, but the 8-port version is often only a few euros/dollars more.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Juan Pedro Paredes points out in the comments that this TP-LINK switch may not be able to handle the large number of ARP requests that may arrive at the port connected to the internet modem. Others are quite negative about this switch in the Hacker News discussion (linked below). I'm not sure if Netgear switches, which cost about the same, fare any better.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;A switch like this has a web-based management interface that allows you to configure VLANs on the device.&lt;/p&gt;
&lt;h2&gt;Tagged vs untagged&lt;/h2&gt;
&lt;p&gt;In the context of VLANs, a network switch port can be in one of two states:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Member of a particular network (VLAN)  (untagged)&lt;/li&gt;
&lt;li&gt;Transporting multiple networks (VLANs) (tagged)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If a port is an untagged member of a VLAN, it behaves like any other switch port. In this mode, it can &lt;em&gt;obviously&lt;/em&gt; only be a member of one network/VLAN. The VLAN tags are stripped from all network traffic leaving this port.&lt;/p&gt;
&lt;p&gt;However, a port that carries 'tagged' VLAN traffic forwards that traffic as-is, including the VLAN tag.&lt;/p&gt;
&lt;p&gt;This is the trick we use to send network packets from different networks (VLANs) to our Raspberry Pi router over a single port/wire.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/pirouter/pirouter02.png"&gt;&lt;img alt="routersetup" src="https://louwrentius.com/static/images/pirouter/pirouter02s.png" /&gt;&lt;/a&gt;
&lt;em&gt;click on the image for a larger version&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;So let's unpack this picture together, step by step.&lt;/p&gt;
&lt;p&gt;Let's imagine a (return) packet from the Internet arrives at the modem and is sent into switchport 1. &lt;/p&gt;
&lt;p&gt;The switch knows that any traffic on that switch port belongs to VLAN 10. Since this traffic needs to be sent towards the Pi router, the switch puts a tag on the packet and forwards it, tag included, to the Pi on switch port 2.&lt;/p&gt;
&lt;p&gt;The Pi - in turn - is configured to work with VLANs, just like the switch. The tag on the packet tells the Pi to which &lt;em&gt;virtual&lt;/em&gt; interface the packet must be sent.&lt;/p&gt;
&lt;p&gt;A netplan configuration example to illustrate this setup:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;network&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="n"&gt;version&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="n"&gt;ethernets&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;enp2s0f0&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="n"&gt;dhcp4&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;no&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="n"&gt;vlans&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;enp2s0f0&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;       &lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;
&lt;span class="w"&gt;       &lt;/span&gt;&lt;span class="n"&gt;link&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;enp2s0f0&lt;/span&gt;
&lt;span class="w"&gt;       &lt;/span&gt;&lt;span class="n"&gt;addresses&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;         &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;68.69&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;70.71&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fake&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;internet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;address&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;       &lt;/span&gt;&lt;span class="n"&gt;gateway4&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;68.69&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;70.1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fake&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;upstream&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ISP&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;router&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;enp2s0f0&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;       &lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;
&lt;span class="w"&gt;       &lt;/span&gt;&lt;span class="n"&gt;link&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;enp2s0f0&lt;/span&gt;
&lt;span class="w"&gt;       &lt;/span&gt;&lt;span class="n"&gt;addresses&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;         &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;192.168&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;internal&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;network&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;address&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;acting&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;gateway&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;As you can see, the VLAN packets that arrive tagged are sent (without their tags) to a virtual network interface belonging to that particular network. Those virtual network interfaces all share the same physical interface (enp2s0f0). The virtual network interface names are just the physical interface name with ".(VLAN ID)" appended.&lt;/p&gt;
&lt;p&gt;From here on out, you probably understand where this is going: those two virtual network interfaces behave just like a setup with two physical network interfaces. So all the routing and NAT that needs to happen simply happens on those two virtual interfaces instead.&lt;/p&gt;
&lt;h2&gt;How to work with VLANs&lt;/h2&gt;
&lt;p&gt;To work with VLANs, you need a &lt;em&gt;managed&lt;/em&gt; switch that supports VLANs. A managed switch has a management interface, often a web-based management interface.&lt;/p&gt;
&lt;p&gt;I'm using the TP-LINK TL-SG105E switch as an example. 
To get to the page shown below, go to VLAN --&amp;gt; 802.1Q VLAN in the web interface.&lt;/p&gt;
&lt;p&gt;&lt;img alt="vlanconfig" src="https://louwrentius.com/static/images/pirouter/vlanconfig.png" /&gt;&lt;/p&gt;
&lt;p&gt;So from this table we can derive that: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Port 1 is an untagged member of VLAN 10&lt;/li&gt;
&lt;li&gt;Port 2 is a tagged member of VLAN 10 and VLAN 20&lt;/li&gt;
&lt;li&gt;Port 3 is an untagged member of VLAN 20&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Please note that it is also recommended to remove ports from VLANs they don't need to carry. So I removed ports 1, 2 and 3 from the default VLAN 1.&lt;/p&gt;
&lt;p&gt;Now, if you have more devices to connect to the internal LAN on this switch, you need to configure the ports to be an untagged member of VLAN 20.&lt;/p&gt;
&lt;h2&gt;Caveats&lt;/h2&gt;
&lt;h3&gt;Bandwidth impact&lt;/h3&gt;
&lt;p&gt;Obviously, if you use a single interface, you only get to use the bandwidth of that single interface. In most cases, this is not an issue, as gigabit Ethernet is full-duplex: there is dedicated physical wiring for upstream traffic and downstream traffic.&lt;/p&gt;
&lt;p&gt;So you might say that full-duplex gigabit Ethernet has a raw throughput capacity of two gigabit/s, although we mostly don't talk about it that way.&lt;/p&gt;
&lt;p&gt;So when you download at 200 Mbit/s, that traffic is ingested over VLAN 10 on the incoming path. It is then sent out towards your computer over VLAN 20 using the outgoing path. No problem there.&lt;/p&gt;
&lt;p&gt;If you would also use the Raspberry Pi as a backup server (with an attached external hard drive), the backup traffic and the internet traffic could both 'fight' for bandwidth on the same gigabit link. &lt;/p&gt;
&lt;h3&gt;Impact on gigabit internet&lt;/h3&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Update June 2022&lt;/strong&gt; I was actually able to use full Gigabit internet speed over VLANs, at around 111 MB/s. I made some mistakes during earlier testing.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;You will never get the full gigabit internet network speed if you would build this setup. It will probably max out at ~900 Mbit. (I'm assuming here that you would use x86 hardware as the Pi would not be able to handle firewalling this traffic anyway.)&lt;/em&gt;&lt;/p&gt;
&lt;hr&gt;

&lt;p&gt;This is because most traffic is based on TCP connections, and when you download, there is traffic both ways! The download traffic is the bulk of it, but there is also a substantial, steady stream of return packets that acknowledge to the sender that traffic has been received (if not, a retransmission would be triggered).&lt;/p&gt;
&lt;p&gt;Remember that in this single-port setup, the Pi uses the same gigabit port to send the return traffic to the internet over VLAN 10 and the download data towards your home computer over VLAN 20. So the size of the upstream traffic will limit your maximum download performance.&lt;/p&gt;
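&lt;p&gt;How big is that return stream? A rough estimate (my own assumptions about frame sizes and delayed ACKs, not a measurement on the Pi):&lt;/p&gt;

```python
# Rough estimate of the upstream ACK stream a TCP download generates.
DATA_FRAME = 1514       # bytes on the wire for a full-size Ethernet frame
ACK_FRAME = 66          # a minimal TCP ACK with timestamp options
ACKS_PER_SEGMENT = 0.5  # delayed ACKs: roughly one ACK per two segments

def ack_bandwidth_mbit(download_mbit):
    frames_per_second = download_mbit * 1e6 / 8 / DATA_FRAME
    return frames_per_second * ACKS_PER_SEGMENT * ACK_FRAME * 8 / 1e6

print(f"{ack_bandwidth_mbit(1000):.1f} Mbit/s of ACKs for a 1000 Mbit/s download")
```

&lt;p&gt;So the ACK stream is only a few percent of the download rate, which is why near-gigabit downloads over the single shared port remain possible in practice.&lt;/p&gt;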
&lt;h2&gt;The Raspberry Pi 4 Model B as a router&lt;/h2&gt;
&lt;p&gt;The biggest limitation - which becomes an issue for more and more people - is performance. If you use iptables on Linux for firewalling, in my experience network throughput drops to a maximum of 650 Mbit/s.&lt;/p&gt;
&lt;p&gt;That's only an issue (first world problems) if you have gigabit internet or an internet speed beyond what the Pi can handle. &lt;/p&gt;
&lt;p&gt;If your internet speed doesn't even come close, this is not an issue at all. &lt;/p&gt;
&lt;p&gt;Maybe the Raspberry Pi 400 or the compute module performs better in this regard, as their CPUs are clocked higher.&lt;/p&gt;
&lt;h2&gt;Closing words&lt;/h2&gt;
&lt;p&gt;Whether it makes sense for you to implement this setup is only for you to decide. I've been running this kind of setup (using an x86 server) for 10 years, as I can't run a second cable from my modem to the room where my router lives. For a more detailed picture of my home network setup, &lt;a href="https://louwrentius.com/my-home-network-setup-based-on-managed-switches-and-vlans.html"&gt;take a look here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Feel free to leave any questions or comments below.&lt;/p&gt;
&lt;p&gt;The Hacker News discussion about this article &lt;a href="https://news.ycombinator.com/item?id=28696845"&gt;can be found here&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Router on a stick&lt;/h2&gt;
&lt;p&gt;I learned from the Hacker News discussion that a router with just one network interface is called a &lt;a href="https://en.wikipedia.org/wiki/Router_on_a_stick"&gt;router on a stick&lt;/a&gt;.&lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:older"&gt;
&lt;p&gt;Older models of the Raspberry Pi are significantly network bandwidth constrained. So much so, that they would not be suitable as Internet routers if your internet speed is above 100Mbit.&amp;#160;&lt;a class="footnote-backref" href="#fnref:older" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:more"&gt;
&lt;p&gt;Most cheap switches can't operate more than 32 to 64 VLANs at the same time. Only more expensive, enterprise gear can work with the full VLAN ID range simultaneously. However, this is probably not relevant for consumers.&amp;#160;&lt;a class="footnote-backref" href="#fnref:more" title="Jump back to footnote 2 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Networking"></category><category term="networking"></category></entry><entry><title>A practical understanding of lead acid batteries</title><link href="https://louwrentius.com/a-practical-understanding-of-lead-acid-batteries.html" rel="alternate"></link><published>2021-08-29T12:00:00+02:00</published><updated>2021-08-29T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2021-08-29:/a-practical-understanding-of-lead-acid-batteries.html</id><summary type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;The goal of this article is to give you a &lt;em&gt;practical&lt;/em&gt; understanding of lead acid batteries. We won't address the underlying chemistry; we'll treat them as a black box and discover their characteristics and how to keep them healthy.&lt;/p&gt;
&lt;p&gt;&lt;a title="Cjp24, CC BY-SA 4.0 &amp;lt;https://creativecommons.org/licenses/by-sa/4.0&amp;gt;, via Wikimedia Commons" href="https://commons.wikimedia.org/wiki/File:Lead-acid_automotive_battery,_55_Ah.jpg"&gt;&lt;img width="512" alt="Lead-acid automotive battery, 55 Ah" src="https://upload.wikimedia.org/wikipedia/commons/thumb/a/a2/Lead-acid_automotive_battery%2C_55_Ah.jpg/512px-Lead-acid_automotive_battery%2C_55_Ah.jpg"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Disclaimer&lt;/h2&gt;
&lt;p&gt;I'm an amateur. I have absolutely zero relevant background …&lt;/p&gt;</summary><content type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;The goal of this article is to give you a &lt;em&gt;practical&lt;/em&gt; understanding of lead acid batteries. We won't address the underlying chemistry; we'll treat them as a black box and discover their characteristics and how to keep them healthy.&lt;/p&gt;
&lt;p&gt;&lt;a title="Cjp24, CC BY-SA 4.0 &amp;lt;https://creativecommons.org/licenses/by-sa/4.0&amp;gt;, via Wikimedia Commons" href="https://commons.wikimedia.org/wiki/File:Lead-acid_automotive_battery,_55_Ah.jpg"&gt;&lt;img width="512" alt="Lead-acid automotive battery, 55 Ah" src="https://upload.wikimedia.org/wikipedia/commons/thumb/a/a2/Lead-acid_automotive_battery%2C_55_Ah.jpg/512px-Lead-acid_automotive_battery%2C_55_Ah.jpg"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Disclaimer&lt;/h2&gt;
&lt;p&gt;I'm an amateur. I have absolutely zero relevant background in battery technology or electronics. I just scraped some information together in a hopefully useful manner. &lt;/p&gt;
&lt;h2&gt;A high-level overview of the lead acid battery&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;It can provide a &lt;em&gt;ton&lt;/em&gt; of current / power&lt;/li&gt;
&lt;li&gt;It &lt;em&gt;hates&lt;/em&gt; to be &lt;em&gt;deep&lt;/em&gt;-discharged and will die quickly if done repeatedly&lt;/li&gt;
&lt;li&gt;It &lt;em&gt;hates&lt;/em&gt; being in a discharged state&lt;/li&gt;
&lt;li&gt;Only use 50% of total capacity if longevity matters (ideally only 30%)&lt;/li&gt;
&lt;li&gt;Its usable capacity depends on the load&lt;/li&gt;
&lt;li&gt;They are &lt;em&gt;slow&lt;/em&gt; to charge (8-12 hours)&lt;/li&gt;
&lt;li&gt;They don't perform as well in cold weather&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Lead acid batteries can provide a lot of current&lt;/h2&gt;
&lt;p&gt;Lead acid batteries can put out so much current that you can use them to weld&lt;sup id="fnref:should"&gt;&lt;a class="footnote-ref" href="#fn:should"&gt;2&lt;/a&gt;&lt;/sup&gt;. They are widely used in ICE cars to power the starter motor, which needs hundreds of amps at 12 volt to turn over the engine. &lt;/p&gt;
&lt;p&gt;They are also used to power mobility scooters, golf carts, trolling motors, small toy cars for children to ride in, or to provide electricity on boats, in caravans and in RVs. You can also find them in more stationary applications such as &lt;a href="https://en.wikipedia.org/wiki/Uninterruptible_power_supply"&gt;UPS systems&lt;/a&gt;&lt;sup id="fnref:upsnote"&gt;&lt;a class="footnote-ref" href="#fn:upsnote"&gt;1&lt;/a&gt;&lt;/sup&gt; or - of course - solar battery banks.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Danger&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Lead acid batteries typically don't have any kind of short-circuit protection built-in. This means that if you (accidentally) short-circuit a lead acid battery, it can explode or cause a fire. Whatever object caused the short-circuit will probably be destroyed.&lt;/p&gt;
&lt;p&gt;Because lead acid batteries can supply such high currents, it's important to use the right wire thickness / diameter. If the wire is too thin, it has too much resistance and may overheat, causing the insulation to catch fire.&lt;/p&gt;
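&lt;p&gt;To get a feel for the numbers, here is a small sketch (my own illustrative figures; consult a proper ampacity table for real wiring) of the power dissipated in the cable itself:&lt;/p&gt;

```python
# Why thin wire overheats: the cable itself dissipates P = I^2 * R,
# and R grows as the copper cross-section shrinks.
RESISTIVITY_COPPER = 1.68e-8  # ohm metre, copper at room temperature

def wire_loss_watts(current_a, length_m, cross_section_mm2):
    resistance = RESISTIVITY_COPPER * length_m / (cross_section_mm2 * 1e-6)
    return current_a ** 2 * resistance

# 100 A through 2 m of cable (1 m to the load and 1 m back):
for mm2 in (1.5, 6.0, 25.0):
    print(f"{mm2:4.1f} mm2 wire: {wire_loss_watts(100, 2, mm2):6.1f} W dissipated in the cable")
```

&lt;p&gt;At 100 A, a thin 1.5 mm2 cable turns over 200 W into heat, while a thick 25 mm2 cable loses only around 13 W.&lt;/p&gt;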
&lt;p&gt;Lead acid batteries can be very dangerous, so you have to be very careful with them. Personally, I always make sure that anything connected to a lead acid battery is properly fused.&lt;/p&gt;
&lt;iframe width="560" height="315" src="https://www.youtube.com/embed/DpQeDcEpEn0?start=173" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen&gt;&lt;/iframe&gt;

&lt;p&gt;&lt;br&gt;&lt;/p&gt;
&lt;h2&gt;Lead acid batteries &lt;em&gt;hate&lt;/em&gt; being deep discharged&lt;/h2&gt;
&lt;p&gt;The common rule of thumb is that a lead acid battery should not be discharged below 50% of its capacity, and ideally it should stay above 70% (so you only use 30%). This is because lead acid batteries age / wear out faster if you deep-discharge them.&lt;/p&gt;
&lt;p&gt;The most important lesson here is this: &lt;/p&gt;
&lt;p&gt;&lt;em&gt;Although a lead acid battery may have a stated capacity of 100Ah, its practical usable capacity is only 50Ah or even just 30Ah&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;If you buy a lead acid battery for a particular application, you probably expect a certain lifetime from it, probably in years. If the battery won't last this long, it may not be an economically viable solution.&lt;/p&gt;
&lt;p&gt;&lt;img alt="imagedod" src="https://louwrentius.com/static/images/depthofdischargeus.png" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;a href="https://www.cedgreentech.com/article/how-does-depth-discharge-factor-grid-connected-battery-systems"&gt;image source&lt;/a&gt; - Please note that this chart is based on a heavy-duty lead acid battery and doesn't reflect the lifecycle of a regular consumer lead acid battery. It is advised to look up the relevant chart for the particular battery model you may be interested in buying.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;If you cycle a battery (with the characteristics depicted in the chart) every day as part of some kind of off-grid solar setup and you use 80% of its capacity, you'll probably have to replace it after about two years.&lt;/p&gt;
&lt;p&gt;If you add a few extra batteries in parallel, individual batteries may only be used 20% to 30% of capacity, and those same batteries may last 6 - 9 years. So by spending 2 or 3 times the money on batteries, you get 3 to 4 times the lifetime out of your setup.&lt;/p&gt;
&lt;p&gt;So, for example, if you really need 100Ah of battery capacity, you may need &lt;em&gt;two&lt;/em&gt; 100Ah batteries in &lt;em&gt;parallel&lt;/em&gt; to assure longevity. You even may decide to buy &lt;em&gt;three&lt;/em&gt; 100Ah batteries just to assure that they will last for the desired number of cycles.&lt;/p&gt;
&lt;p&gt;However, if the battery setup is only meant for &lt;em&gt;emergency power&lt;/em&gt; and thus only expected to operate a few times a year, discharging a lead acid battery to 80% depth of discharge is not a big deal. There is no need to add extra battery capacity, because the number of charge/discharge cycles is so low that there isn't that much wear on the battery.&lt;/p&gt;
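&lt;p&gt;The trade-off in the paragraphs above can be sketched with some illustrative numbers (mine, not taken from the chart; check the datasheet and price of an actual battery before buying):&lt;/p&gt;

```python
# Total energy a battery bank delivers over its life is roughly
# capacity x depth of discharge x number of cycles at that depth.
# The cycle counts below are made up for illustration only.
def lifetime_usable_kwh(capacity_kwh, depth_of_discharge, cycles):
    return capacity_kwh * depth_of_discharge * cycles

one_battery = lifetime_usable_kwh(1.2, 0.80, 700)       # one 12 V 100 Ah battery, cycled hard
three_batteries = lifetime_usable_kwh(3.6, 0.27, 3000)  # three in parallel, cycled gently

print(f"one battery at 80% DoD:     {one_battery:.0f} kWh delivered over its life")
print(f"three batteries at 27% DoD: {three_batteries:.0f} kWh delivered over their life")
```

&lt;p&gt;In this made-up example, tripling the battery bank roughly quadruples the total energy delivered before the bank wears out, matching the "2 or 3 times the money, 3 to 4 times the lifetime" rule of thumb above.&lt;/p&gt;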
&lt;h2&gt;Lead acid batteries eventually die from old age&lt;/h2&gt;
&lt;p&gt;A lead acid battery deteriorates just by aging. So even if it's kept fully charged most of the time, it will wear out and will need to be replaced after a few years. It doesn't matter how well you treat them; even with the best care, they need to be replaced eventually.&lt;/p&gt;
&lt;h2&gt;Lead acid batteries &lt;em&gt;hate&lt;/em&gt; being in a discharged state&lt;/h2&gt;
&lt;p&gt;Lead acid batteries should &lt;em&gt;never&lt;/em&gt; stay discharged for a long time, ideally not longer than a &lt;em&gt;day&lt;/em&gt;. It's best to &lt;em&gt;immediately&lt;/em&gt; charge a lead acid battery after a (partial) discharge to keep them from quickly deteriorating. &lt;/p&gt;
&lt;p&gt;A battery that stays in a discharged state for a long time (many months) will probably never recover or be usable again, even if it was new and/or hasn't been used much.&lt;/p&gt;
&lt;h2&gt;Usable capacity depends on the load&lt;/h2&gt;
&lt;p&gt;A typical 12-volt battery has a rating stated in &lt;a href="https://en.wikipedia.org/wiki/Ampere_hour"&gt;ampere hour&lt;/a&gt; that tells you the capacity. For example, a battery can be rated as 70Ah. &lt;/p&gt;
&lt;p&gt;So this could mean that the battery can sustain a load of 7A for 10 hours or 70A for one hour, right? &lt;/p&gt;
&lt;p&gt;Unfortunately, &lt;em&gt;no&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;It turns out that the usable capacity of a lead acid battery &lt;em&gt;depends on the applied load&lt;/em&gt;. Therefore, the stated capacity is actually the capacity at a certain load that would deplete the battery in 20 hours. &lt;/p&gt;
&lt;p&gt;This is the concept of the &lt;a href="https://en.wikipedia.org/wiki/Electric_battery#C_rate"&gt;C-rate&lt;/a&gt;. 1C is the theoretical one-hour discharge rate based on the capacity. Batteries are mostly sold with a capacity based on a 0.05C discharge rate, i.e. a 20-hour discharge.&lt;/p&gt;
&lt;p&gt;The C-rate is important because the C-rate is related to the &lt;em&gt;usable&lt;/em&gt; capacity of a battery. That 70Ah capacity rating is based on a &lt;em&gt;0.05 C-rate or 20-hour discharge rate.&lt;/em&gt; That would be 70Ah / 20 = 3.5A.&lt;/p&gt;
&lt;p&gt;This is important to understand: if you would put a higher load on this battery, the usable capacity will be &lt;strong&gt;less&lt;/strong&gt; than 70Ah. For example, with a 7A load, the usable capacity may only be 64Ah (fake number for illustration purposes).&lt;/p&gt;
&lt;p&gt;It also works in your favor: if the load is &lt;em&gt;less&lt;/em&gt; than the 0.05 C-rate, the actual usable capacity will be higher!&lt;/p&gt;
&lt;p&gt;So &lt;em&gt;why&lt;/em&gt; is this? &lt;/p&gt;
&lt;p&gt;When you put a load on a battery, the &lt;em&gt;voltage drops&lt;/em&gt; a bit. Higher loads cause larger voltage drops, or to put it differently: the battery 'struggles' to maintain voltage.&lt;/p&gt;
&lt;p&gt;&lt;img alt="socunderload" src="https://louwrentius.com/static/images/socdischarge.png" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;a href="https://www.scubaengineer.com/documents/lead_acid_battery_charging_graphs.pdf"&gt;Image source&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;So if a load exceeds the standard 0.05C rate (C/20), you may have to select a higher capacity battery or accept a shorter run-time than you might expect based on the rated capacity on the label. &lt;/p&gt;
&lt;p&gt;You may even consider putting multiple batteries in parallel to reach the desired &lt;em&gt;usable&lt;/em&gt; capacity / runtime.&lt;/p&gt;
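&lt;p&gt;The reduction in run-time under heavier loads is commonly approximated with Peukert's law. The sketch below is only an illustration: the exponent &lt;em&gt;k&lt;/em&gt; is an assumed typical value for lead acid, not a number from any datasheet.&lt;/p&gt;

```python
def runtime_hours(capacity_ah, rated_hours, load_a, k=1.2):
    """Estimate runtime under load using Peukert's law.

    capacity_ah: rated capacity (e.g. 70 Ah at the 20-hour rate)
    rated_hours: discharge time the rating is based on (usually 20)
    load_a:      actual discharge current in amperes
    k:           Peukert exponent; roughly 1.1-1.3 for lead acid
                 (1.2 is assumed here for illustration)
    """
    rated_current = capacity_ah / rated_hours  # the 0.05C current
    return rated_hours * (rated_current / load_a) ** k

# A 70 Ah battery at its rated 3.5 A load: 20 hours by definition.
print(runtime_hours(70, 20, 3.5))      # 20.0
# The same battery at a 14 A (C/5) load runs well under 5 hours,
# i.e. it delivers noticeably fewer than 70 Ah.
print(runtime_hours(70, 20, 14) * 14)
```

&lt;p&gt;For a real battery, take the exponent from the manufacturer's discharge data rather than this assumed value.&lt;/p&gt;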
&lt;p&gt;&lt;em&gt;WARNING&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The chart about the state-of-charge under load shows that you should keep an eye on the actual load and voltage. With a C/20 load, the battery is at 50% at 12.30 volts&lt;sup id="fnref:highernumbers"&gt;&lt;a class="footnote-ref" href="#fn:highernumbers"&gt;3&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;A C/5 load on a 70Ah battery would be 14A. At that load, the battery is at 50% capacity at ~11.55 volts under load. Only the load in combination with the voltage may give an indication of the actual state-of-charge.&lt;/p&gt;
&lt;p&gt;Predicting state-of-charge under load is doable with a static, constant load, but becomes more difficult when the load fluctuates, so take this into account.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;ANOTHER WARNING&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Different manufacturers produce different batteries that may have different discharge characteristics. This means that you should look up the battery specifications and hopefully find a discharge rate chart that will help you gauge actual capacity under load for this particular model.&lt;/p&gt;
&lt;h2&gt;How do you know the state of charge of a lead acid battery?&lt;/h2&gt;
&lt;p&gt;The state of charge is measured at rest: when the battery is not connected to any load or charger for 24 hours. The voltage will reflect the state of charge (SoC). &lt;/p&gt;
&lt;p&gt;&lt;em&gt;WARNING&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;There are many different, conflicting tables to be found on the internet that correlate voltage with a particular state of charge. Be sure you pick the right one; consult the footnote&lt;sup id="fnref:warning"&gt;&lt;a class="footnote-ref" href="#fn:warning"&gt;4&lt;/a&gt;&lt;/sup&gt; for more information.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;State of Charge (SoC)&lt;/th&gt;
&lt;th&gt;Voltage at rest (24h)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;td&gt;12.70+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;75%&lt;/td&gt;
&lt;td&gt;12.40&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;50%&lt;/td&gt;
&lt;td&gt;12.20&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;25%&lt;/td&gt;
&lt;td&gt;12.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;0%&lt;/td&gt;
&lt;td&gt;11.80&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Please note that this table is only valid at an ambient temperature of 25C / 77F. If the temperature is lower, usable capacity diminishes and the voltages at which a certain SoC is reached will be higher.&lt;/p&gt;
&lt;p&gt;Furthermore, these numbers can deviate a little bit depending on the kind of lead acid battery.&lt;/p&gt;
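&lt;p&gt;As a practical illustration, the table above can be turned into a small lookup with linear interpolation between the listed points. This is a rough approximation: the real voltage/SoC curve is not perfectly linear, and as noted the table itself varies per battery type and temperature.&lt;/p&gt;

```python
# Voltage-to-SoC pairs from the table above (12 V battery, at rest, 25C)
SOC_TABLE = [(11.80, 0), (12.00, 25), (12.20, 50), (12.40, 75), (12.70, 100)]

def soc_from_resting_voltage(volts):
    """Linearly interpolate state of charge (%) from resting voltage."""
    if volts <= SOC_TABLE[0][0]:
        return 0
    if volts >= SOC_TABLE[-1][0]:
        return 100
    for (v_lo, soc_lo), (v_hi, soc_hi) in zip(SOC_TABLE, SOC_TABLE[1:]):
        if v_lo <= volts <= v_hi:
            return soc_lo + (volts - v_lo) / (v_hi - v_lo) * (soc_hi - soc_lo)

print(soc_from_resting_voltage(12.30))  # roughly 62.5, between 50% and 75%
```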
&lt;p&gt;If you measure the voltage under load - for example, when you power some lights - the voltage does not reflect the actual state of charge. &lt;/p&gt;
&lt;p&gt;It is quite difficult to determine the state of charge under load. Sometimes, battery manufacturers provide a discharge chart that allows you to determine the state-of-charge based on the current load. &lt;/p&gt;
&lt;p&gt;But often it is something you have to measure or figure out yourself. A constant load makes estimating battery capacity under load more predictable, but if the load varies, it is more difficult to accurately gauge the state of charge. &lt;/p&gt;
&lt;h2&gt;The positive impact on capacity of connecting batteries in parallel&lt;/h2&gt;
&lt;p&gt;By using multiple batteries in parallel, the load is also shared across all batteries. Each individual battery only has to supply a fraction of the total load. This means that in addition to the extra usable capacity of the added batteries, there is also added usable capacity because of the reduced load on each individual battery.&lt;/p&gt;
&lt;p&gt;For example, a 100Ah battery has a 0.05C discharge rate of 5A. If it has to provide 10A, the usable capacity is lower than the advertised 100Ah, as explained earlier. If we add a second 100Ah battery in parallel, each battery now needs to supply only half of the load and will thus be able to provide the stated capacity, as 5A is precisely the 0.05C discharge rate.&lt;/p&gt;
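&lt;p&gt;This load-sharing benefit can be sketched with Peukert's law. As before, the exponent is an assumed typical value for illustration only, not a datasheet figure.&lt;/p&gt;

```python
def usable_ah(capacity_ah, rated_hours, load_a, k=1.2):
    """Approximate usable capacity under a given load via Peukert's law.

    k is the Peukert exponent; 1.2 is an assumed typical value for
    lead acid, used here purely for illustration.
    """
    rated_current = capacity_ah / rated_hours       # the 0.05C current
    hours = rated_hours * (rated_current / load_a) ** k
    return hours * load_a

# One 100 Ah battery carrying the full 10 A load delivers less than 100 Ah:
print(round(usable_ah(100, 20, 10)))     # 87
# Two 100 Ah batteries in parallel carry 5 A each, exactly the 0.05C rate,
# so together they deliver the full rated capacity:
print(round(2 * usable_ah(100, 20, 5)))  # 200
```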
&lt;h2&gt;Lead acid batteries need deep discharge protection&lt;/h2&gt;
&lt;p&gt;It is highly recommended to use lead acid batteries in combination with a low-voltage cut-off solution that protects the battery against deep discharge&lt;sup id="fnref:noteprotect"&gt;&lt;a class="footnote-ref" href="#fn:noteprotect"&gt;5&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img alt="batteryprotect" src="https://louwrentius.com/static/images/batteryprotect.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;this article is not sponsored by victron&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Ideally, you can configure the cut-off voltage, such as with the depicted unit.&lt;/p&gt;
&lt;p&gt;Many lead acid batteries are 'murdered' because they are (accidentally) left connected to a power 'drain'. &lt;/p&gt;
&lt;h2&gt;Charging a lead acid battery&lt;/h2&gt;
&lt;p&gt;No matter the size, lead acid batteries are relatively slow to charge. It may take around 8 - 12 hours to fully charge a battery from fully depleted. It's not possible to just dump a lot of current into them and charge them quickly. That would just overload and destroy the battery&lt;sup id="fnref:destroy"&gt;&lt;a class="footnote-ref" href="#fn:destroy"&gt;8&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;Lead acid batteries need a specific &lt;a href="http://www.trojanbattery.com/pdf/U.S.%20Battery%20Charge%20Profile%20Full%20%2011-12-13.pdf"&gt;3-stage charge process&lt;/a&gt;&lt;sup id="fnref:wiki3stage"&gt;&lt;a class="footnote-ref" href="#fn:wiki3stage"&gt;6&lt;/a&gt;&lt;/sup&gt; in order to preserve their condition. &lt;/p&gt;
&lt;p&gt;In practice, if you don't discharge a battery beyond 50%, it takes less time to recharge the battery&lt;sup id="fnref:lithiumsidenote"&gt;&lt;a class="footnote-ref" href="#fn:lithiumsidenote"&gt;7&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;It can be a good idea to hook up unused batteries permanently to a 'trickle charger'. This is a charger that charges the battery with a maximum current of 0.8A.&lt;/p&gt;
&lt;p&gt;As it can take a very long time to charge a larger capacity battery with a trickle charger, you need a regular charger that can supply a decent current to charge a battery 'within a reasonable timeframe'.&lt;/p&gt;
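&lt;p&gt;A back-of-the-envelope estimate makes this concrete. The charge efficiency below is an assumed typical value, and because of the slow absorption stage at the end of a proper 3-stage charge, real charge times are longer than this lower bound.&lt;/p&gt;

```python
def charge_time_hours(ah_to_restore, charger_amps, efficiency=0.85):
    """Rough lower-bound charge-time estimate.

    efficiency: assumed charge efficiency for lead acid (illustrative);
    the absorption stage makes real charges take longer than this.
    """
    return ah_to_restore / (charger_amps * efficiency)

# Restoring 50 Ah with a 0.8 A trickle charger takes roughly three days:
print(round(charge_time_hours(50, 0.8)))  # 74 (hours)
# A regular 10 A charger brings that down to roughly six hours:
print(round(charge_time_hours(50, 10)))   # 6
```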
&lt;h2&gt;Lead acid battery types&lt;/h2&gt;
&lt;h3&gt;Flooded / FLA&lt;/h3&gt;
&lt;p&gt;This is the well-known older type of battery. It may be necessary to add distilled water from time to time, so they require maintenance. &lt;/p&gt;
&lt;p&gt;The key problem with batteries that require maintenance is that most people (consumers) don't know and if they know, they forget. These batteries basically don't match well with 'human nature'.&lt;/p&gt;
&lt;p&gt;It seems to me that these batteries are on their way out in the consumer space, but are still prevalent in commercial/industrial applications. It's probably easy for a business to just have a trained employee or service company periodically maintain the batteries.&lt;/p&gt;
&lt;h3&gt;EFB or Enhanced Flooded Battery&lt;/h3&gt;
&lt;p&gt;These batteries are improved versions of the regular flooded battery. They are more expensive, but will last more charge/discharge cycles, especially with deeper discharges. &lt;/p&gt;
&lt;p&gt;Although not as performant as AGM batteries (which will be discussed shortly), they provide a cheaper alternative to AGM batteries.&lt;/p&gt;
&lt;h3&gt;Sealed Lead Acid&lt;/h3&gt;
&lt;p&gt;This type of battery is fully sealed. SLA batteries are essentially the same as VRLA batteries, but this name is used for the smaller capacity batteries, as found in motorcycles, uninterruptible power supplies and such.&lt;/p&gt;
&lt;p&gt;These are maintenance-free batteries: they never require any maintenance, such as adding distilled water, during their lifetime.&lt;/p&gt;
&lt;h3&gt;Valve-Regulated Lead Acid&lt;/h3&gt;
&lt;p&gt;This name is used for batteries like the SLA battery, but with higher capacities. See also &lt;a href="https://en.wikipedia.org/wiki/VRLA_battery"&gt;wikipedia&lt;/a&gt;. They have liquid inside like the flooded battery, but they are sealed and don't need any maintenance. To be precise: they can't be maintained, only be replaced.&lt;/p&gt;
&lt;p&gt;The 'valve(s)' are only there in case of emergency, to release pressure due to gas buildup within the battery case if charged incorrectly.&lt;/p&gt;
&lt;h3&gt;AGM (Absorbent Glass Mat)&lt;/h3&gt;
&lt;p&gt;This is also a fully sealed SLA/VRLA battery, but it is even more advanced. 
They are better able to withstand deep discharges and can be recharged faster. This comes at a relatively steep price.&lt;/p&gt;
&lt;p&gt;The faster recharge cycle can be important if used within a solar power bank, because there are only a limited number of hours when the sun provides enough energy for charging. &lt;/p&gt;
&lt;h3&gt;Deep-Cycle&lt;/h3&gt;
&lt;p&gt;These batteries are built differently&lt;sup id="fnref:deepc"&gt;&lt;a class="footnote-ref" href="#fn:deepc"&gt;9&lt;/a&gt;&lt;/sup&gt; and are less suited for starting cars, but better suited to powering boats or RVs, or forming a solar power bank. &lt;/p&gt;
&lt;p&gt;They are often not a kind of battery in and of themselves: there are regular flooded deep-cycle batteries as well as AGM deep-cycle batteries. They are often specifically designed for solar power banks or similar applications.&lt;/p&gt;
&lt;h3&gt;Evaluation&lt;/h3&gt;
&lt;p&gt;Although regular flooded batteries will have the longest lifespan of all lead acid battery technologies, they require regular maintenance and that may not be practical. Therefore, AGM or other maintenance-free batteries are better suited for residential battery applications; the relatively lower life expectancy is just the price for practicality/convenience.&lt;/p&gt;
&lt;h2&gt;Low self-discharge rate and storing batteries&lt;/h2&gt;
&lt;p&gt;Lead acid batteries need to be stored fully charged. They should be recharged at least every six months due to self-discharge, although the self-discharge rate is rather low. &lt;/p&gt;
&lt;h2&gt;Buyer beware - ask for fresh batteries&lt;/h2&gt;
&lt;p&gt;I've ordered quite a few smaller SLA batteries from various brands to test their capacities. I noticed that the actual brand didn't matter much. The age of the battery seemed to matter. &lt;/p&gt;
&lt;p&gt;&lt;img alt="mybat" src="https://louwrentius.com/static/images/testedbatteries.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;some of the tested SLA batteries&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;While they are in storage at the vendor, they are probably never recharged, which deteriorates the battery. The batteries with a lower SoC correlated with a serial number that indicated that they were older than the other batteries.&lt;/p&gt;
&lt;p&gt;So it might be beneficial to specifically ask for a 'fresh' battery when you order a lead acid battery.&lt;/p&gt;
&lt;h2&gt;Q &amp;amp; A&lt;/h2&gt;
&lt;h3&gt;Can my lead acid battery be revived?&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;No.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If the voltage of a 12 volt battery at rest is close to zero, it is dead.&lt;/p&gt;
&lt;p&gt;There are tips like 'using epsom salts' or keeping them on a charger for weeks, but at best, you get only a small portion of usable capacity back, if any. A battery 'revived' like this should never power something you rely on. Personally, I don't think it's worth the cost of epsom salt or your time, but you have to decide that for yourself.&lt;/p&gt;
&lt;p&gt;If a battery is totally dead, I would recommend to accept the loss and get a new one.&lt;/p&gt;
&lt;h3&gt;The impact of cold weather on performance&lt;/h3&gt;
&lt;p&gt;If a lead acid battery is exposed to colder or even freezing temperatures, it will work fine, but it can output less current. This is relevant for older, more worn-down batteries. Such batteries can still work fine in the summer, but may no longer be able to start a car or provide another utility with sufficient power when temperatures drop significantly. &lt;/p&gt;
&lt;h3&gt;Does it make sense to use Lead acid batteries for an off-grid solar setup?&lt;/h3&gt;
&lt;p&gt;You can do a lead acid solar setup if you can get those batteries cheap, but otherwise it may be better to go for a LiFePO4-based setup. Although the initial investment is much higher, Lithium-based batteries will be cheaper long-term because their lifetime is so much longer than that of lead acid batteries.&lt;/p&gt;
&lt;iframe width="560" height="315" src="https://www.youtube.com/embed/Rp8Hspi4BC4" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen&gt;&lt;/iframe&gt;

&lt;p&gt;I think lead acid batteries are suited for climates with a lot of sunlight available all year round, to power a living space through the night.&lt;/p&gt;
&lt;p&gt;Since lead acid batteries don't 'like' to be in a discharged state for a long time (more than a day at most), I don't think they are suitable for a more temperate climate with lots of overcast days. &lt;/p&gt;
&lt;p&gt;So the first issue with lead acid batteries is that they don't take well to being in a discharged state for more than a day or so. It will make them deteriorate faster.&lt;/p&gt;
&lt;p&gt;I think the second issue with lead acid batteries as a solar power bank is their &lt;em&gt;slow charging speed&lt;/em&gt;. Lead acid batteries often can't use all available solar power to charge because they just can't charge any faster, no matter their capacity.&lt;/p&gt;
&lt;p&gt;This means that even though there would have been enough energy available to fully charge the batteries, it was not available &lt;em&gt;long enough&lt;/em&gt; to fully charge them. AGM batteries may help here, as they can be charged with higher currents, even though they may not last as long.&lt;/p&gt;
&lt;p&gt;Lithium-based batteries can be charged with very large currents and can - in some sense - capture every bit of sunlight that's available. This is &lt;em&gt;much better suited&lt;/em&gt; to climates with more intermittent sunny days or even sunny hours, I think. &lt;/p&gt;
&lt;p&gt;Another thing that comes to mind is that if you really want to go with lead acid batteries for a solar bank, flooded may be the longest lasting, but the regular maintenance they require may quickly become a chore / unmanageable. I have zero experience with this, but please verify this beforehand. All the more reason to consider at least maintenance-free lead acid batteries, even if they may not last as long.&lt;/p&gt;
&lt;p&gt;This is just my thought, I'm no expert on this. &lt;/p&gt;
&lt;p&gt;Just remember that regular car batteries are simply not suitable for this application. You need - more expensive - batteries that are built specifically for use in a power bank&lt;sup id="fnref:idea"&gt;&lt;a class="footnote-ref" href="#fn:idea"&gt;10&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;h3&gt;Why are lead acid batteries so widely used in cars?&lt;/h3&gt;
&lt;p&gt;Cars need a power source that can provide a lot of power to run the starter motor. Starter motors can use anywhere from 1.5 to 3 kilowatts when cranking the engine. That's about 125A to 250A of current at 12 volts.&lt;/p&gt;
&lt;p&gt;You may notice that batteries are often rated for much higher CCA or 'Cold Cranking Amps' values, but since they deteriorate over time, that extra margin will come in handy. Especially in colder weather.&lt;/p&gt;
&lt;p&gt;Lead acid batteries as used in cars can last many years because they are used under near-ideal conditions. They are always kept fully charged and are only briefly and slightly discharged. They are immediately recharged after the car is started.&lt;/p&gt;
&lt;h3&gt;How can I check if a battery is healthy?&lt;/h3&gt;
&lt;p&gt;You need a battery tester for this. They can be had for around 50 euros, which is not far off from just buying a new battery, which you might have to do anyway.&lt;/p&gt;
&lt;p&gt;Here is a &lt;a href="https://www.youtube.com/watch?v=4DxJId8O1HA"&gt;demonstration video&lt;/a&gt; of such a cheap tester.&lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:upsnote"&gt;
&lt;p&gt;A UPS can be quite small, to power just a single computer, running off a 'small' 12 volt 7Ah lead acid battery (depicted further down below in the article). A step up in size would be a 19-inch rackmounted UPS, which can often be expanded with multiple external battery packs. A datacenter-scale UPS is built using many large batteries, in series for higher voltages and in parallel for higher capacity. Lead acid batteries are well-suited for these types of applications because they are always kept fully charged and rarely (fully) discharged. In datacenter applications, they often only need to last until the diesel generators kick in.&amp;#160;&lt;a class="footnote-backref" href="#fnref:upsnote" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:should"&gt;
&lt;p&gt;Just because you can, doesn't mean you should. Don't do it.&amp;#160;&lt;a class="footnote-backref" href="#fnref:should" title="Jump back to footnote 2 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:highernumbers"&gt;
&lt;p&gt;Notice the voltages in the C/20 discharge rate - which should reflect the numbers in the table shown earlier - are actually a bit higher. If you want to be safe, using higher voltages is always safer for battery longevity, but at the cost of usable capacity.&amp;#160;&lt;a class="footnote-backref" href="#fnref:highernumbers" title="Jump back to footnote 3 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:warning"&gt;
&lt;p&gt;&lt;a href="https://marinehowto.com/under-load-battery-voltage-vs-soc/"&gt;This article&lt;/a&gt; goes into more detail about this. Be sure you look at a table that correlates resting voltage against SoC and not the voltage under load. If you see a table with 10.8 volts at 0%, you are looking at a table for under load voltages. A battery at 10.5 - 10.8 volts at rest is probably damaged. A lead acid battery should never be below 11.80 volt at rest.&amp;#160;&lt;a class="footnote-backref" href="#fnref:warning" title="Jump back to footnote 4 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:noteprotect"&gt;
&lt;p&gt;'bad' battery protection solutions will just start to oscillate as the battery voltage recovers (above the cut-off threshold) when the load is removed. I bought a cheap 20 Euro unit and it was effectively useless because of this problem.&amp;#160;&lt;a class="footnote-backref" href="#fnref:noteprotect" title="Jump back to footnote 5 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:wiki3stage"&gt;
&lt;p&gt;https://en.wikipedia.org/wiki/IUoU_battery_charging&amp;#160;&lt;a class="footnote-backref" href="#fnref:wiki3stage" title="Jump back to footnote 6 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:lithiumsidenote"&gt;
&lt;p&gt;If Lithium-based batteries have one big upside over lead acid batteries in energy storage applications, it might be this aspect: they can be charged much faster. It may make sense to oversize the solar power array just to charge the batteries as quickly as possible within the limited number of available 'sun-hours'.&amp;#160;&lt;a class="footnote-backref" href="#fnref:lithiumsidenote" title="Jump back to footnote 7 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:destroy"&gt;
&lt;p&gt;It is critical that a proper battery charger is used. You should never just apply a static current as overcharging the battery may lead to the buildup of flammable gasses like hydrogen. There are many documented cases of car batteries exploding in this way. Not only can you get hurt by debris, the internal liquid is acidic which can cause significant burns and is especially dangerous for the eyes.&amp;#160;&lt;a class="footnote-backref" href="#fnref:destroy" title="Jump back to footnote 8 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:deepc"&gt;
&lt;p&gt;They have thicker plates that are better able to withstand deep discharges at the cost of lower peak current.&amp;#160;&lt;a class="footnote-backref" href="#fnref:deepc" title="Jump back to footnote 9 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:idea"&gt;
&lt;p&gt;I myself do use regular car batteries as part of my solar-powered blog because I got them for free, and even if they are shot, they may last for quite a bit. I can also imagine that people would actually build a battery bank out of old car batteries and just add a whole lot of them, if they have the space. I'm not sure how reliable that kind of setup would be.&amp;#160;&lt;a class="footnote-backref" href="#fnref:idea" title="Jump back to footnote 10 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:mix"&gt;
&lt;p&gt;The car batteries are free, and I had no other use for the gel batteries so I hooked those up too (in parallel). The batteries have wildly different capacities and this is absolutely not recommended. If you hook up batteries in parallel, always use the same capacity.&amp;#160;&lt;a class="footnote-backref" href="#fnref:mix" title="Jump back to footnote 11 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Solar"></category><category term="Battery"></category></entry><entry><title>ZFS RAIDZ expansion is awesome but has a small caveat</title><link href="https://louwrentius.com/zfs-raidz-expansion-is-awesome-but-has-a-small-caveat.html" rel="alternate"></link><published>2021-06-22T12:00:00+02:00</published><updated>2021-06-22T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2021-06-22:/zfs-raidz-expansion-is-awesome-but-has-a-small-caveat.html</id><summary type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Update April 2023:&lt;/strong&gt;
It has been fairly quiet since the announcement of this feature.
The Github &lt;a href="https://github.com/openzfs/zfs/pull/12225"&gt;PR&lt;/a&gt; about this feature is rather stale and people are wondering what the status is and what the plans are. Meanwhile, FreeBSD has announced In February 2023 that they suspect to integrate RAIDZ …&lt;/p&gt;</summary><content type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Update April 2023:&lt;/strong&gt;
It has been fairly quiet since the announcement of this feature.
The Github &lt;a href="https://github.com/openzfs/zfs/pull/12225"&gt;PR&lt;/a&gt; about this feature is rather stale and people are wondering what the status is and what the plans are. Meanwhile, FreeBSD announced in February 2023 that they expect to integrate RAIDZ expansion by Q3. &lt;/p&gt;
&lt;hr&gt;

&lt;p&gt;One of my most popular blog articles is &lt;a href="https://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html"&gt;this article&lt;/a&gt; about the "Hidden Cost of using ZFS for your home NAS". To summarise the key argument of this article:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Expanding ZFS-based storage can be relatively expensive / inefficient.&lt;/em&gt; &lt;/p&gt;
&lt;p&gt;For example, if you run a ZFS pool based on a single 3-disk RAIDZ vdev (RAID5 equivalent&lt;sup id="fnref:raidz"&gt;&lt;a class="footnote-ref" href="#fn:raidz"&gt;2&lt;/a&gt;&lt;/sup&gt;), the only way to expand a pool is to add another 3-disk RAIDZ vdev&lt;sup id="fnref:rd"&gt;&lt;a class="footnote-ref" href="#fn:rd"&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;You can't just add a single disk to the existing 3-disk RAIDZ vdev to create a 4-disk RAIDZ vdev because vdevs can't be expanded.&lt;/p&gt;
&lt;p&gt;The impact of this limitation is that you have to buy all storage upfront even if you don't need the space for years to come.&lt;/p&gt;
&lt;p&gt;Otherwise, by expanding with additional vdevs you lose capacity to parity you may not really want/need, which also limits the maximum usable capacity of your NAS.&lt;/p&gt;
&lt;h2&gt;RAIDZ vdev expansion&lt;/h2&gt;
&lt;p&gt;Fortunately, this limitation of ZFS is being addressed! &lt;/p&gt;
&lt;p&gt;ZFS founder Matthew Ahrens created a &lt;a href="https://github.com/openzfs/zfs/pull/12225"&gt;pull request&lt;/a&gt; around June 11, 2021 detailing a new ZFS feature that would allow for RAIDZ vdev expansion.&lt;/p&gt;
&lt;p&gt;Finally, ZFS users will be able to expand their storage by adding just one single drive at a time. This feature will make it possible to expand storage as-you-go, which is especially of interest to budget conscious home users&lt;sup id="fnref:larger"&gt;&lt;a class="footnote-ref" href="#fn:larger"&gt;3&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://arstechnica.com/author/jimsalter/"&gt;Jim Salter&lt;/a&gt; has written a &lt;a href="https://arstechnica.com/gadgets/2021/06/raidz-expansion-code-lands-in-openzfs-master/"&gt;good article&lt;/a&gt; about this on Ars Technica.&lt;/p&gt;
&lt;h2&gt;There is still a caveat&lt;/h2&gt;
&lt;p&gt;Existing data will be &lt;em&gt;redistributed&lt;/em&gt; or &lt;em&gt;rebalanced&lt;/em&gt; over all drives, including the freshly added drive. However, the data that was already stored on the vdev will not be &lt;em&gt;restriped&lt;/em&gt; after the vdev is expanded. This means that this data is stored with the older, &lt;em&gt;less efficient&lt;/em&gt; parity-to-data ratio. &lt;/p&gt;
&lt;p&gt;I think Matthew Ahrens explains it best in his own words:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;After the expansion completes, old blocks remain with their old data-to-parity ratio (e.g. 5-wide RAIDZ2, has 3 data to 2 parity), but distributed among the larger set of disks. New blocks will be written with the new data-to-parity ratio (e.g. a 5-wide RAIDZ2 which has been expanded once to 6-wide, has 4 data to 2 parity). However, the RAIDZ vdev&amp;#39;s &amp;quot;assumed parity ratio&amp;quot; does not change, so slightly less space than is expected may be reported for newly-written blocks, according to zfs list, df, ls -s, and similar tools.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;So, if you add a new drive to a RAIDZ vdev, you'll notice that after expansion, you will have &lt;em&gt;less&lt;/em&gt; capacity available than you would theoretically expect.&lt;/p&gt;
&lt;p&gt;However, it is even more important to understand that this effect &lt;em&gt;accumulates&lt;/em&gt;. This is especially relevant for home users. &lt;/p&gt;
&lt;p&gt;I think that the whole concept of starting with a small number of disks and expanding as you go is very desirable and typical for home users. But this also means that every time a disk is added to the vdev, existing data remains stored at the old data-to-parity ratio. &lt;/p&gt;
&lt;p&gt;&lt;em&gt;Imagine that we have a 10-drive chassis and we start out with a 4-drive RAIDZ2.&lt;/em&gt; &lt;/p&gt;
&lt;p&gt;If we keep adding drives&lt;sup id="fnref:gofrom"&gt;&lt;a class="footnote-ref" href="#fn:gofrom"&gt;5&lt;/a&gt;&lt;/sup&gt; following this example, until the chassis is full at 10 drives, about 1.35 drives worth of capacity is 'lost' to parity overhead/efficiency loss&lt;sup id="fnref:full"&gt;&lt;a class="footnote-ref" href="#fn:full"&gt;4&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;That is quite a lot of overhead or loss of capacity, I think.&lt;/p&gt;
&lt;p&gt;How is this overhead calculated? If we would just buy 10 drives and create a 10-drive RAIDZ2 vdev, data-to-parity overhead is 20% meaning that 20% of the total raw capacity of the vdev is used for storing parity. This is the most efficient scenario in this case.&lt;/p&gt;
&lt;p&gt;When we start out with the four-drive RAIDZ2 vdev, the data-to-parity overhead is 50%. That's a 30% overhead difference compared to the 'ideal' 10-drive setup.&lt;/p&gt;
&lt;p&gt;As we keep adding drives, the relative overhead of the parity keeps dropping so we end up with 'multiple data sets' with different data-to-parity ratios, that are less efficient than the end-stage of 10 drives.&lt;/p&gt;
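&lt;p&gt;The per-block overhead is simply the number of parity drives divided by the vdev width at the time the block was written. The blended estimate below uses a deliberately simplified model - equal amounts of data written at every width - which is not the same fill model as the spreadsheet discussed next, but it illustrates how the overhead accumulates.&lt;/p&gt;

```python
def parity_fraction(width, parity=2):
    """Fraction of a block's raw space spent on parity for a RAIDZ vdev."""
    return parity / width

# Blocks written while the vdev was 4 wide carry 50% parity overhead...
print(parity_fraction(4))    # 0.5
# ...while blocks written after the final expansion to 10 wide carry 20%.
print(parity_fraction(10))   # 0.2

# Simplified model: equal amounts of data written at every width from
# 4 to 10 drives. The blended overhead lands well above the ideal 20%.
widths = range(4, 11)
blended = sum(parity_fraction(w) for w in widths) / len(widths)
print(round(blended, 3))     # ~0.31, versus 0.2 for a 10-wide vdev built at once
```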
&lt;p&gt;I created a google sheet to roughly estimate this overhead for each stage, but my math was totally off. Fortunately, &lt;a href="https://www.truenas.com/community/members/yorick.90628/"&gt;Yorick&lt;/a&gt; rewrote the sheet, which &lt;a href="https://docs.google.com/spreadsheets/d/1qiDPfLN-K88FMHMxcgtkxswY5Wtu7h9tBAOgJfnO7VE/edit?usp=sharing"&gt;can be found here&lt;/a&gt;. Thanks Yorick! Furthermore, TrueNAS user DayBlur shared &lt;a href="https://www.truenas.com/community/threads/raidz-expansion-its-happening.58575/post-649578"&gt;additional insights&lt;/a&gt; on the calculations if you are interested in that.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://docs.google.com/spreadsheets/d/1qiDPfLN-K88FMHMxcgtkxswY5Wtu7h9tBAOgJfnO7VE/edit?usp=sharing"&gt;google sheet&lt;/a&gt; allows you to play with various variables to estimate how much capacity is lost for a given scenario. Please note that any losses that may arise because a number of drives is used that requires data to be padded - as discussed in the Ars Technica article - are not part of the calculation.&lt;/p&gt;
&lt;p&gt;It is a bit unfortunate that this overhead manifests itself most in precisely the scenario of the home user who wants to start small and expand as they go. But there is good news!&lt;/p&gt;
&lt;h2&gt;Lost capacity can be recovered!&lt;/h2&gt;
&lt;p&gt;The overhead or 'lost capacity' can be &lt;em&gt;recovered&lt;/em&gt; by &lt;em&gt;rewriting&lt;/em&gt; existing data after the vdev has been expanded, because the data will then be written with the more efficient parity-to-data ratio of the larger vdev.&lt;/p&gt;
&lt;p&gt;Rewriting all data may take quite some time, and you may opt to postpone this step until the vdev has been expanded a couple of times, so that the parity-to-data ratio is 'good enough' and significant storage gains can be had by rewriting the data.&lt;/p&gt;
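&lt;p&gt;As a rough illustration of what 'rewriting' means in practice, here is a minimal, hypothetical Python sketch that copies each file to a temporary name and renames it back, forcing ZFS to reallocate the blocks at the current (wider) stripe ratio. It deliberately ignores hardlinks, open files, and anything else a production script would have to handle.&lt;/p&gt;

```python
import os
import shutil

def rewrite_file(path):
    """Copy a file aside and atomically swap it back, so ZFS writes new blocks."""
    tmp = path + ".rewrite-tmp"
    shutil.copy2(path, tmp)  # the copy is written with the current vdev layout
    os.replace(tmp, path)    # atomic rename over the original

def rewrite_tree(root):
    """Rewrite every file under root; returns the number of files rewritten."""
    count = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            rewrite_file(os.path.join(dirpath, name))
            count += 1
    return count
```

&lt;p&gt;Note that snapshots referencing the old blocks will keep the old copies alive, so the space is only actually freed once those snapshots are destroyed.&lt;/p&gt;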
&lt;p&gt;Because capacity lost to overhead can be fully recovered, I think that this caveat is relatively minor, especially compared to the old situation where we had to expand a pool with entire vdevs and there was no way to recover any overhead.&lt;/p&gt;
&lt;p&gt;There is currently no built-in mechanism in the native ZFS tools to trigger this data rewrite. It will remain a manual process unless somebody creates a script that automates it. According to Matthew Ahrens, restriping the data as part of the vdev expansion process would be an effort &lt;a href="https://github.com/openzfs/zfs/pull/12225#issuecomment-860075460"&gt;of similar scale&lt;/a&gt; as the RAIDZ expansion itself.&lt;/p&gt;
&lt;h2&gt;Evaluation&lt;/h2&gt;
&lt;p&gt;I think it cannot be stated enough how &lt;em&gt;awesome&lt;/em&gt; the RAIDZ vdev expansion feature is, especially for home users who want to start small and grow their storage over time. &lt;/p&gt;
&lt;p&gt;Although the expansion process can accumulate quite a bit of overhead, that overhead can be recovered by rewriting existing data, which is probably not a problem for most people.&lt;/p&gt;
&lt;p&gt;Despite all the awesome features and capabilities of ZFS, I think quite a few home users went with other storage solutions because of the relatively high expansion cost/overhead. Now that this barrier will be overcome, I think that ZFS will be more accessible to the home user DIY NAS crowd.&lt;/p&gt;
&lt;h2&gt;Release timeline&lt;/h2&gt;
&lt;p&gt;According to the Ars Technica article by Jim Salter, this feature will probably become available in August 2022, so we need to have some patience. Even so, you might already decide to build your new DIY NAS on ZFS: by the time you need to expand your storage, the feature may be available!&lt;/p&gt;
&lt;h2&gt;Update on some - in my opinion - bad advice&lt;/h2&gt;
&lt;p&gt;The podcast &lt;a href="https://2.5admins.com"&gt;2.5 admins&lt;/a&gt; (which I enjoy listening to) discussed the topic of RAIDZ expansion in &lt;a href="https://2.5admins.com/2-5-admins-45/"&gt;episode 45&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;There are two remarks made that I want to address, because I disagree with them.&lt;/p&gt;
&lt;h3&gt;Don't rewrite the data?&lt;/h3&gt;
&lt;p&gt;As in his Ars Technica article, Jim Salter keeps advocating not to bother rewriting the data after a vdev expansion, but I personally disagree with this advice. I hope I have demonstrated that if you keep adding drives, the parity overhead is significant enough for most home users to make it worthwhile to rewrite the data after a few drives have been added.&lt;/p&gt;
&lt;h3&gt;Just use mirrors!&lt;/h3&gt;
&lt;p&gt;I also disagree with the advice of &lt;a href="https://2.5admins.com"&gt;using mirrors&lt;/a&gt;, especially for home users&lt;sup id="fnref:whyilink"&gt;&lt;a class="footnote-ref" href="#fn:whyilink"&gt;6&lt;/a&gt;&lt;/sup&gt;. I personally think it is bad advice, because home users have different needs and desires than enterprise environments. &lt;/p&gt;
&lt;p&gt;If 'just use mirrors' is still the advice, why did Matthew Ahrens build the whole RAIDZ vdev expansion feature in the first place? I think the RAIDZ vdev expansion is really beneficial for home users.&lt;/p&gt;
&lt;p&gt;Maybe Jim and I have very different ideas about what a home user would want or need in a DIY NAS storage solution. I think that home users want this:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;As much storage as possible for as little money as possible with acceptable redundancy.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;In addition, I think that home users in general work with larger files (multiple megabytes at least). And if they sometimes work with smaller files, they accept some performance loss due to the lower random I/O performance of single RAIDZ vdevs&lt;sup id="fnref:ssds"&gt;&lt;a class="footnote-ref" href="#fn:ssds"&gt;7&lt;/a&gt;&lt;/sup&gt;. &lt;/p&gt;
&lt;p&gt;Frankly, to me it feels like the 'just use mirrors' advice is used to 'downplay' a significant limitation of ZFS&lt;sup id="fnref:ale"&gt;&lt;a class="footnote-ref" href="#fn:ale"&gt;8&lt;/a&gt;&lt;/sup&gt;. Jim is a prolific writer on Ars Technica with a large audience, so his advice matters. That's why I think it's sad that he sticks with 'just use mirrors' when that's clearly not in the best interest of most home users. &lt;/p&gt;
&lt;p&gt;However, that's just my opinion, you decide for yourself what's best.&lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:rd"&gt;
&lt;p&gt;The other method is to replace all existing drives one by one with larger ones. Only after you have replaced all drives will you gain extra capacity, so this method has a similar downside to just expanding with extra vdevs: you must buy multiple drives at once. In addition, I think this method is rather time-consuming and cumbersome, although people do use it to expand capacity. And to be fair: you can indeed add 4+ disk vdevs, vdevs with a higher RAIDZ level, or mirrors, but none of that makes sense in this context.&amp;#160;&lt;a class="footnote-backref" href="#fnref:rd" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:raidz"&gt;
&lt;p&gt;Just to illustrate the level of redundancy in terms of how many disks can be lost and still be operational.&amp;#160;&lt;a class="footnote-backref" href="#fnref:raidz" title="Jump back to footnote 2 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:larger"&gt;
&lt;p&gt;I personally think that it's even great for small and medium business owners. Only larger businesses want to keep adding relatively large vdevs consisting of multiple drives because if they keep expanding with just one drive at a time, they may have to expand capacity very frequently which may not be practical.&amp;#160;&lt;a class="footnote-backref" href="#fnref:larger" title="Jump back to footnote 3 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:full"&gt;
&lt;p&gt;If you would only upgrade once the pool is almost full - not recommended! - that overhead grows to 1.69 drives.&amp;#160;&lt;a class="footnote-backref" href="#fnref:full" title="Jump back to footnote 4 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:gofrom"&gt;
&lt;p&gt;So you go from four to five drives. Then from five to six drives, and so on.&amp;#160;&lt;a class="footnote-backref" href="#fnref:gofrom" title="Jump back to footnote 5 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:whyilink"&gt;
&lt;p&gt;I link to the original article by Jim Salter because I want to allow you to read the article and make up your own mind and not just listen to me.&amp;#160;&lt;a class="footnote-backref" href="#fnref:whyilink" title="Jump back to footnote 6 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:ssds"&gt;
&lt;p&gt;If random I/O performance is important, it is probably wise to go for SSD based storage anyway.&amp;#160;&lt;a class="footnote-backref" href="#fnref:ssds" title="Jump back to footnote 7 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:ale"&gt;
&lt;p&gt;Resolved by ZFS vdev expansion, obviously, when it lands in production.&amp;#160;&lt;a class="footnote-backref" href="#fnref:ale" title="Jump back to footnote 8 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="ZFS"></category><category term="Storage"></category></entry><entry><title>Recycle your old laptop display and turn it into a monitor</title><link href="https://louwrentius.com/recycle-your-old-laptop-display-and-turn-it-into-a-monitor.html" rel="alternate"></link><published>2021-03-13T12:00:00+01:00</published><updated>2021-03-13T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2021-03-13:/recycle-your-old-laptop-display-and-turn-it-into-a-monitor.html</id><summary type="html">&lt;p&gt;During a cleaning spree I decided that it was time to recycle two old laptops that were just collecting dust on a shelf for many years. Although I didn't have any purpose for them anymore, I realised that the displays were still perfectly fine.&lt;/p&gt;
&lt;p&gt;This is the display of my …&lt;/p&gt;</summary><content type="html">&lt;p&gt;During a cleaning spree I decided that it was time to recycle two old laptops that were just collecting dust on a shelf for many years. Although I didn't have any purpose for them anymore, I realised that the displays were still perfectly fine.&lt;/p&gt;
&lt;p&gt;This is the display of my old &lt;a href="https://en.wikipedia.org/wiki/MacBook_(2006–2012)"&gt;13" Intel-based MacBook&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img alt="screen" src="https://louwrentius.com/images/screen/screen01.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;Somehow it felt wasteful to just throw the laptops out, so I wondered: would it be possible to use these displays as regular monitors? &lt;/p&gt;
&lt;p&gt;It turns out: &lt;a href="https://www.slashdigit.com/convert-old-laptop-screen-external-monitor/"&gt;yes, this is possible and it is also quite simple&lt;/a&gt;. My blog post covers the same topic, with a few more pictures&lt;sup id="fnref:hn"&gt;&lt;a class="footnote-ref" href="#fn:hn"&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;There is a particular LCD driver board that can be found all over ebay and the well-known Chinese webshops.&lt;/p&gt;
&lt;p&gt;&lt;img alt="controller2" src="https://louwrentius.com/images/screen/controller02.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;These boards cost around $20 and come with most things you need to get the display operational. &lt;/p&gt;
&lt;p&gt;The board includes:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;A small controller print for the on-screen display (center)&lt;/li&gt;
&lt;li&gt;A high-voltage power supply for the backlight (left)&lt;/li&gt;
&lt;li&gt;The data cable to drive the actual display (right)&lt;/li&gt;
&lt;li&gt;Support for audio-passthrough + volume control (right corner)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The board doesn't include:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;A 12-volt power supply&lt;/li&gt;
&lt;li&gt;Some kind of frame to make the display and board one easy to handle unit&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img alt="controller3" src="https://louwrentius.com/images/screen/controller03.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;The board supports HDMI, VGA and DVI for signal input so it should work with almost any computer. &lt;/p&gt;
&lt;p&gt;This particular board (M.NT68676.2) is used to power many different panel models. Although the board itself may be the same, it's important to order the board that is specifically tailored to the particular LCD panel you have. The panels seem easy to identify. This is the MacBook LCD panel type (LP133WX1 TL A1):&lt;/p&gt;
&lt;p&gt;&lt;img alt="model" src="https://louwrentius.com/images/screen/model.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;That TL/A1 part of the model is critical to finding the appropriate controller. &lt;/p&gt;
&lt;p&gt;I also have an old Dell Vostro screen that uses the exact same driver board, but the cables are different. Also, the boards may be flashed with firmware specific to the particular panel. So I would recommend not gambling: get the driver board that exactly matches your panel's model number.&lt;/p&gt;
&lt;p&gt;To make everything work, we first connect the high-voltage module to the backlight power cable...&lt;/p&gt;
&lt;p&gt;&lt;img alt="hv" src="https://louwrentius.com/images/screen/hv.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;...and we also connect the LCD driver cable:&lt;/p&gt;
&lt;p&gt;&lt;img alt="driver" src="https://louwrentius.com/images/screen/driver.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;When I connected everything for the first time, the display didn't work at all. It turns out that the board shipped with a &lt;em&gt;second&lt;/em&gt;, separate interface cable and I had to swap those cables to make the display work properly.&lt;/p&gt;
&lt;h2&gt;Power supply&lt;/h2&gt;
&lt;p&gt;According to the sales page, the board requires a 12-volt 2A adapter, but in practice, I could get away with a tiny, much weaker power supply.&lt;/p&gt;
&lt;p&gt;&lt;img alt="adapter" src="https://louwrentius.com/images/screen/adapter.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;I found an old adapter from a Wi-Fi access point (long gone) which is rated for just 0.5A. It powered both the board and the screen perfectly. It worked with both the MacBook display and the Dell display, so it doesn't seem to be a fluke.&lt;/p&gt;
&lt;p&gt;Although I didn't measure actual power consumption, we know that it can't be more than 6 watts, because that's all the power adapter can deliver.&lt;/p&gt;
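&lt;p&gt;The reasoning is just the adapter's rating: power is voltage times current.&lt;/p&gt;

```python
# Upper bound on the display's power draw, from the adapter's rating
volts = 12
amps = 0.5
watts = volts * amps
print(watts)  # 6.0
```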
&lt;p&gt;Laptop displays need to be power efficient and that may also make them interesting for low-power or battery-powered projects.&lt;/p&gt;
&lt;h2&gt;We've only just begun&lt;/h2&gt;
&lt;p&gt;The display works, but that was the easy part. &lt;/p&gt;
&lt;p&gt;&lt;img alt="display" src="https://louwrentius.com/images/screen/display.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;The hardest part of this process (not pictured) was switching the on-screen display from Chinese (which I don't speak) to English. But there is more work ahead.&lt;/p&gt;
&lt;p&gt;At this point we end up with just a fragile LCD panel connected to a driver board through a bunch of wires. The whole setup is just an impractical mess. There are at least two things left to do:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Mount the driver board, OSD controller and high-voltage unit to the back of the LCD panel&lt;/li&gt;
&lt;li&gt;Make some kind of stand&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;For the old Dell display, I used a bit of wood and hot glue to make a wooden scaffold on which I could mount the driver board with a few screws. It won't win any prizes, for sure, but it's just an example of what needs to be done to make the display (more) manageable.&lt;/p&gt;
&lt;p&gt;&lt;img alt="amateur" src="https://louwrentius.com/images/screen/amateurhour.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;It still doesn't have a stand but that's for another day. I can imagine that if you own a 3D printer, you can make a nice case with a stand, although that will increase the cost of the project.&lt;/p&gt;
&lt;h2&gt;Evaluation&lt;/h2&gt;
&lt;p&gt;What I like most about this kind of project is the fact that for very little money, you can recycle a perfectly fine and usable display that will probably last for another five to ten years. The project takes very little effort and it is also not difficult to do.  &lt;/p&gt;
&lt;p&gt;You can augment existing hobby projects with a screen and due to the relatively low power consumption, it may even be suitable for battery-powered projects.&lt;/p&gt;
&lt;p&gt;And with a bit of work you can make a nice (secondary) monitor out of them. Finally you have an excuse to dust off one of your unused Raspberry Pis that you &lt;em&gt;had to have&lt;/em&gt; but didn't have any actual use for.&lt;/p&gt;
&lt;h2&gt;The thrift-store is cheaper&lt;/h2&gt;
&lt;p&gt;If your goal is just to get a cheap LCD display, it may be cheaper to go to the nearest thrift-store and buy some old second-hand display for $10. But that may have some drawbacks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It will be much larger than the laptop screen&lt;/li&gt;
&lt;li&gt;It is powered by 110/220 volt so less suitable for a battery-powered setup&lt;/li&gt;
&lt;li&gt;overall power consumption will be higher&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So it all depends on your particular needs.&lt;/p&gt;
&lt;h2&gt;Closing words&lt;/h2&gt;
&lt;p&gt;If you also repurposed a laptop monitor for a project or just as a (secondary) screen, feel free to share your work in the comments.&lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:hn"&gt;
&lt;p&gt;I posted the slashdigit article to Hacker News and it got &lt;a href="https://news.ycombinator.com/item?id=26443025"&gt;quite a bit of interest&lt;/a&gt;.&amp;#160;&lt;a class="footnote-backref" href="#fnref:hn" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Hardware"></category><category term="Hardware"></category></entry><entry><title>Understanding the Ubuntu 20.04 LTS Server Autoinstaller</title><link href="https://louwrentius.com/understanding-the-ubuntu-2004-lts-server-autoinstaller.html" rel="alternate"></link><published>2021-02-11T12:00:00+01:00</published><updated>2021-02-11T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2021-02-11:/understanding-the-ubuntu-2004-lts-server-autoinstaller.html</id><summary type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Ubuntu Server version 18.04 LTS uses the debian-installer (d-i) for the installation process. This includes support for 'preseeding' to create unattended (automated) installations of ubuntu. &lt;/p&gt;
&lt;p&gt;&lt;img alt="d-i" src="https://louwrentius.com/static/images/ubuntu/debianinstaller.png" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;the debian installer&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;With the introduction of Ubuntu Server 20.04 'Focal Fossa' LTS, back in April 2020, Canonical decided that the new …&lt;/p&gt;</summary><content type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Ubuntu Server version 18.04 LTS uses the debian-installer (d-i) for the installation process. This includes support for 'preseeding' to create unattended (automated) installations of ubuntu. &lt;/p&gt;
&lt;p&gt;&lt;img alt="d-i" src="https://louwrentius.com/static/images/ubuntu/debianinstaller.png" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;the debian installer&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;With the introduction of Ubuntu Server 20.04 'Focal Fossa' LTS, back in April 2020, Canonical decided that the new 'subiquity server installer' was ready to take its place. &lt;/p&gt;
&lt;p&gt;After the new installer gained support for unattended installation, it was considered ready for release. The unattended installer feature is called  &lt;a href="https://discourse.ubuntu.com/t/automated-server-installs/16612"&gt;'Autoinstallation'&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I mostly run Ubuntu 18.04 LTS installations but I decided in February 2021 that I should get more acquainted with 20.04 LTS, especially when I discovered that preseeding would no longer work.&lt;/p&gt;
&lt;p&gt;In this article I assume that the reader is familiar with PXE-based unattended installations.&lt;/p&gt;
&lt;h2&gt;20.04 LTS and 22.04 LTS Can't install on USB drive (Update Aug 2022)&lt;/h2&gt;
&lt;p&gt;I run a few HP Microservers that boot from a SATA SSD in a USB drive enclosure connected to the internal USB header. Automated installs of Ubuntu 18.04 LTS have no problem installing to and booting from a USB device.&lt;/p&gt;
&lt;p&gt;Unfortunately, while both Ubuntu 20.04 and 22.04 LTS install fine, no matter what I do, they won't boot from USB. I've tried different enclosures and most of them won't work. Only one powered USB dock (which is way too big to fit inside the Microserver) does work and lets 22.04 boot from USB.&lt;/p&gt;
&lt;p&gt;Again: Ubuntu 18.04 and the latest Debian work fine, so this seems to be an issue specific to the new Autoinstall mechanism.&lt;/p&gt;
&lt;p&gt;Note: I didn't try this on other hardware (which I don't have) so it might be an issue specific to the HP Microserver Gen8.&lt;/p&gt;
&lt;h2&gt;Why this new installer?&lt;/h2&gt;
&lt;p&gt;Canonical's desire to unify the codebase for Ubuntu Desktop and Server installations seems to be the main driver for this change.&lt;/p&gt;
&lt;p&gt;From my personal perspective, there aren't any new features that benefit my use-cases, but that could be different for others. It's not a ding on the new Autoinstaller, it's just how I look at it. &lt;/p&gt;
&lt;p&gt;There is one conceptual difference between the new installer and preseeding. A preseed file must answer &lt;em&gt;all&lt;/em&gt; questions that the installer needs answered. It will switch to interactive mode if a question is not answered, breaking the unattended installation process. From my experience, there is a bit of trial-and-error getting the preseed configuration right.   &lt;/p&gt;
&lt;p&gt;The new Subiquity installer uses defaults for &lt;em&gt;all&lt;/em&gt; installation steps. This means that you can fully automate the installation process with just a few lines of YAML. You don't need an answer for each step.&lt;/p&gt;
&lt;p&gt;The new installer has other features, such as the ability to SSH into an installer session. It works by generating an ad-hoc random password, shown on the screen / console, which you can use to log on remotely over SSH. I have not used it yet, as I never found it necessary.&lt;/p&gt;
&lt;h2&gt;Documentation is a bit fragmented&lt;/h2&gt;
&lt;p&gt;As I was trying to learn more about the new Autoinstaller, I noticed that there isn't a central location with all relevant (links to) documentation. It took a bit of searching and following links, to collect a set of useful information sources, which I share below.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Link&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://discourse.ubuntu.com/t/automated-server-install-reference/16613"&gt;Reference Manual&lt;/a&gt; &lt;img width=400/&gt;&lt;/td&gt;
&lt;td&gt;Reference for each particular option of the user-data YAML&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://discourse.ubuntu.com/t/automated-server-installs/16612"&gt;Introduction&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;An overview of the new installer with examples&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://discourse.ubuntu.com/t/automated-server-install-quickstart/16614"&gt;Autoinstall quick start&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Example of booting a VM using the installer using KVM&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://discourse.ubuntu.com/t/netbooting-the-live-server-installer/14510"&gt;Netbooting the installer&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;A brief instruction on how to setup PXE + TFTP with dnsmasq in order to PXE boot the new installer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://discourse.ubuntu.com/t/please-test-autoinstalls-for-20-04/15250"&gt;Call for testing&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Topic in which people give feedback on their experience with the installer (still active as of February 2021) with a lot of responses&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://askubuntu.com/questions/1235723/automated-20-04-server-installation-using-pxe-and-live-server-image"&gt;Stack Exchange post&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Detailed tutorial with some experiences.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://medium.com/@tlhakhan/ubuntu-server-20-04-autoinstall-2e5f772b655a"&gt;Medium Article&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Contains an example and some experiences.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/CanonicalLtd/subiquity/tree/main/examples"&gt;github examples&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Github repo with about twenty examples of more complex configurations&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The reference documentation only covers some default use-cases for the unattended installation process. You won't be able to build more complex configurations, such as RAID-based installations, using this reference alone.&lt;/p&gt;
&lt;p&gt;Under-the-hood, the installer uses &lt;a href="https://curtin.readthedocs.io/en/latest/topics/storage.html"&gt;curtin&lt;/a&gt;. The linked documentation can help you further build complex installations, such as those which use RAID. &lt;/p&gt;
&lt;p&gt;I think the curtin syntax is a bit tedious, but fortunately you probably don't need to learn curtin and piece together more complex configurations by hand. There is a nice quality-of-life feature that takes care of this. &lt;/p&gt;
&lt;p&gt;More on that later. &lt;/p&gt;
&lt;h2&gt;How does the new installer work?&lt;/h2&gt;
&lt;p&gt;With a regular PXE-based installation, we use the 'netboot' installer, which consists of a Linux kernel and an initrd image (containing the actual installer). &lt;/p&gt;
&lt;p&gt;This package is about 64 Megabytes for Ubuntu 18.04 LTS and it is all you need, assuming that you have already setup a DHCP + TFTP + HTTP environment for a PXE-based installation.&lt;/p&gt;
&lt;p&gt;The new Subiquity installer for Ubuntu 20.04 LTS deprecates this 'netboot' installer. It is not provided anymore. Instead, you have to boot a 'live installer' ISO file which is about 1.1 GB in size. &lt;/p&gt;
&lt;p&gt;The process looks like this:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Download the Live installer ISO&lt;/li&gt;
&lt;li&gt;Mount the iso to acquire the 'vmlinuz' and 'initrd' files for the TFTP root&lt;/li&gt;
&lt;li&gt;Update your PXE menu (if any) with a stanza like this: &lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;LABEL focal2004-preseed
                MENU LABEL Focal 20.04 LTS x64 Manual Install
                KERNEL linux/ubuntu/focal/vmlinuz
                INITRD linux/ubuntu/focal/initrd
                APPEND root=/dev/ram0 ramdisk_size=1500000 ip=dhcp url=http://10.10.11.1/ubuntu-20.04.1-live-server-amd64.iso
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This process is documented &lt;a href="https://discourse.ubuntu.com/t/netbooting-the-live-server-installer/14510"&gt;here&lt;/a&gt; with detailed how-to steps and commands.&lt;/p&gt;
&lt;p&gt;We have not yet discussed the actual automation part, but first we must address a caveat. &lt;/p&gt;
&lt;h2&gt;The new installer requires 3GB of RAM when PXE-booting&lt;/h2&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;UPDATE May 2022:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The 20.04.4 update &lt;strong&gt;won't&lt;/strong&gt; work on a machine with &lt;strong&gt;4096 MB&lt;/strong&gt; of memory when PXE booting. My testing shows that 20.04.4 requires at least 4300 MB of memory. &lt;/p&gt;
&lt;p&gt;So if we talk about a physical machine, based on regular memory DIMM sizes, it's likely that the  machine must have 6 to 8 GB of memory. Just to &lt;em&gt;install&lt;/em&gt; Ubuntu Linux over PXE. I think that's not right.&lt;/p&gt;
&lt;hr&gt;

&lt;p&gt;Although it is not explicitly documented&lt;sup id="fnref:here"&gt;&lt;a class="footnote-ref" href="#fn:here"&gt;1&lt;/a&gt;&lt;/sup&gt;, the new mechanism of PXE-booting the new Ubuntu installer using the Live ISO requires a minimum of 3072 MB of memory. And I assure you, 3000 MB is &lt;em&gt;not&lt;/em&gt; enough.&lt;/p&gt;
&lt;p&gt;It seems that the ISO file is copied into memory over the network and extracted on the RAM disk. With a RAM disk of 1500 MB and an ISO file of 1100 MB we are left with maybe 472 MB of RAM for the running kernel and initrd image. &lt;/p&gt;
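&lt;p&gt;The back-of-the-envelope budget from the paragraph above, written out (the exact split is my estimate, as in the text):&lt;/p&gt;

```python
# Memory budget when PXE-booting the live installer, in MB
total_ram = 3072   # minimum amount of RAM that actually works
ramdisk = 1500     # ramdisk_size= on the kernel command line
iso = 1100         # live-server ISO copied into memory over the network
remaining = total_ram - ramdisk - iso
print(remaining)  # 472 MB left for the running kernel and initrd
```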
&lt;p&gt;To put this into perspective: I could perform an unattended install of Ubuntu 18.04 LTS with only 512 MB of RAM&lt;sup id="fnref:less"&gt;&lt;a class="footnote-ref" href="#fn:less"&gt;2&lt;/a&gt;&lt;/sup&gt;. &lt;/p&gt;
&lt;p&gt;Due to this 'new' installation process, Ubuntu Server 20.04 has much more demanding &lt;em&gt;minimum&lt;/em&gt; system requirements than &lt;a href="https://docs.microsoft.com/en-us/windows-server/get-started-19/sys-reqs-19"&gt;Windows 2019 Server&lt;/a&gt;, which is fine with 'only' 512 MB of RAM, even during installation. I have to admit I find this observation a bit funny.&lt;/p&gt;
&lt;p&gt;It seems that this 3 GB memory requirement for the installation process is purely and &lt;em&gt;solely&lt;/em&gt; because of the new installation process. Obviously Ubuntu 20.04 can run in a smaller memory footprint once installed.&lt;/p&gt;
&lt;p&gt;Under the hood, a tool called 'casper' is used to bootstrap the installation process, and that tool only supports a local file system (in this case on a RAM disk). On paper, casper can also install over NFS or CIFS, but that path is neither supported nor tested. From what I read, some people tried it, but it didn't work out.&lt;/p&gt;
&lt;p&gt;As I understand it, the current status is that you can't install Ubuntu Server 20.04 LTS on any hardware with less than 3 GB of memory using PXE boot. This probably affects older and less potent hardware, but I can remember a time when this was actually part of &lt;em&gt;the point&lt;/em&gt; of running Linux.&lt;/p&gt;
&lt;p&gt;It just feels wrong conceptually, that a PXE-based server installer requires 3GB of memory. &lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The 3GB memory requirement is &lt;strong&gt;only valid for PXE-based installations&lt;/strong&gt;. If you boot from ISO / USB stick you can install Ubuntu Server on a system with less memory. I've verified this with a system with only 1 GB of memory.&lt;/p&gt;
&lt;hr&gt;

&lt;h2&gt;The Autoinstall configuration&lt;/h2&gt;
&lt;p&gt;Now let's go back to the actual automation part of the Autoinstaller. If we want to automate our installation, our PXE menu item should be expanded like this: &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;APPEND root=/dev/ram0 ramdisk_size=1500000 ip=dhcp url=http://10.10.11.1/ubuntu-20.04.1-live-server-amd64.iso autoinstall ds=nocloud-net;s=http://10.10.11.1/preseed/cloud-init/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;In this case, the cloud-init folder - as exposed through an HTTP-server - must contain two files: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;meta-data&lt;/li&gt;
&lt;li&gt;user-data&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The meta-data file contains just one line:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;instance-id: focal-autoinstall
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The user-data file is the equivalent of a preseed file, but it is YAML-based instead of just regular plain text. &lt;/p&gt;
&lt;h3&gt;Minimum working configuration&lt;/h3&gt;
&lt;p&gt;According to the documentation, this is a minimum viable configuration for the user-data file: &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="c1"&gt;#cloud-config&lt;/span&gt;
&lt;span class="nt"&gt;autoinstall&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;1&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;identity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;hostname&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;ubuntu-server&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;password&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;$6$exDY1mhS4KUYCE/2$zmn9ToZwTKLhCw.b4/b.ZRTIZM30JZ4QrOQ2aOXJ8yk96xpcCof0kxKwuX1kqLG/ygbJ1f8wxED22bTL4F46P0&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;username&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;ubuntu&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This works fine and performs a basic install + apt upgrade of the new system with a disk configuration based on an LVM layout. &lt;/p&gt;
&lt;h3&gt;My preferred minimum configuration&lt;/h3&gt;
&lt;p&gt;Personally, I like to keep the unattended installation as simple as possible. I use Ansible to do the actual system configuration, so the unattended installation process only has to set up a minimum viable configuration. &lt;/p&gt;
&lt;p&gt;I like to configure the following parameters: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Inject an SSH public key into the authorized_keys file for Ansible&lt;/li&gt;
&lt;li&gt;Configure the Apt settings to specify which repository to use during installation (I run a local debian/ubuntu mirror)&lt;/li&gt;
&lt;li&gt;Update to the latest packages during installation&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I'll give examples of the required YAML to achieve this with the new installer.&lt;/p&gt;
&lt;h3&gt;Injecting a public SSH key for the default user&lt;/h3&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;ssh&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;authorized-keys&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;|&lt;/span&gt;
&lt;span class="w"&gt;       &lt;/span&gt;&lt;span class="no"&gt;ssh-rsa &amp;lt;PUB KEY&amp;gt;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;install-server&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;true&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;allow-pw&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;no&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Notice that we also disable password authentication for SSH access.&lt;/p&gt;
&lt;h3&gt;Configure APT during installation&lt;/h3&gt;
&lt;p&gt;I used this configuration to specify a particular mirror for both the installation process and for the system itself post-installation.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;Mirror&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;mirror&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;http://mirror.mynetwork.loc&amp;quot;&lt;/span&gt;
&lt;span class="nt"&gt;apt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;preserve_sources_list&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;false&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;primary&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;arches&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;amd64&lt;/span&gt;&lt;span class="p p-Indicator"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;uri&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;http://mirror.mynetwork.loc/ubuntu&amp;quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Performing an apt upgrade&lt;/h3&gt;
&lt;p&gt;By default, the installer seems to install security updates, but it doesn't install the latest versions of all packages. This is a deviation from the d-i installer, which always ends up with a fully up-to-date system when done.&lt;/p&gt;
&lt;p&gt;The same end result can be achieved by running an apt update and apt upgrade at the end of the installation process.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;late-commands&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;curtin in-target --target=/target -- apt update&lt;/span&gt;&lt;span class="w"&gt;           &lt;/span&gt;
&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;curtin in-target --target=/target -- apt upgrade -y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;So all in all, this is not a big deal.&lt;/p&gt;
&lt;h3&gt;Network configuration&lt;/h3&gt;
&lt;p&gt;The network section can be configured using regular Netplan syntax. An example:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;network&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;2&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;renderer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;networkd&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;ethernets&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;enp0s3&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;dhcp4&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;no&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;addresses&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;10.10.50.200/24&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;gateway4&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;10.10.50.1&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;nameservers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nt"&gt;search&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;mynetwork.loc&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nt"&gt;addresses&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;10.10.50.53&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;10.10.51.53&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Storage configuration&lt;/h3&gt;
&lt;p&gt;The installer only supports a 'direct' or an 'lvm' layout, and it selects the largest drive in the system as the boot drive for installation. &lt;/p&gt;
&lt;p&gt;If you want to set up anything more complex, such as RAID or a specific partition layout, you need to use the &lt;a href="https://curtin.readthedocs.io/en/latest/topics/storage.html"&gt;curtin&lt;/a&gt; syntax.&lt;/p&gt;
&lt;p&gt;Based on the available documentation, it is not immediately clear how to set up a RAID configuration. &lt;/p&gt;
&lt;p&gt;Fortunately, the new installer supports creating a RAID configuration or custom partition layout if you perform a &lt;em&gt;manual&lt;/em&gt; install. &lt;/p&gt;
&lt;p&gt;It turns out that when the manual installation is done, you can find the cloud-init user-data YAML for that exact configuration in the following file:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;    /var/log/installer/autoinstall-user-data
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I think this is &lt;em&gt;extremely convenient&lt;/em&gt;. &lt;/p&gt;
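&lt;p&gt;A sketch of pulling that file off the freshly installed machine so it can be reused; the hostname and username follow the identity section shown earlier. The file may be readable by root only, in which case copy it somewhere readable first.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;scp ubuntu@ubuntu-server:/var/log/installer/autoinstall-user-data .
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;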
&lt;p&gt;To build a proper RAID1-based installation, I followed &lt;a href="https://gist.io/@fevangelou/2f7aa0d9b5cb42d783302727665bf80a"&gt;these instructions&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;So what does the YAML for this RAID setup look like? &lt;/p&gt;
&lt;p&gt;This is the storage section of my user-data file (brace yourself): &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nt"&gt;storage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;config&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;{&lt;/span&gt;&lt;span class="nt"&gt;ptable&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;gpt&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; serial&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;VBOX_HARDDISK_VB50546281-4e4a6c24&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;/dev/sda&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; preserve&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;false&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;
&lt;span class="nt"&gt;      name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;#39;&amp;#39;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; grub_device&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;true&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;disk&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;disk-sda&lt;/span&gt;&lt;span class="p p-Indicator"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;{&lt;/span&gt;&lt;span class="nt"&gt;ptable&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;gpt&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; serial&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;VBOX_HARDDISK_VB84e5a275-89a2a956&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;/dev/sdb&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; preserve&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;false&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;
&lt;span class="nt"&gt;      name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;#39;&amp;#39;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; grub_device&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;true&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;disk&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;disk-sdb&lt;/span&gt;&lt;span class="p p-Indicator"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;{&lt;/span&gt;&lt;span class="nt"&gt;device&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;disk-sda&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;1048576&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; flag&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;bios_grub&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; number&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; preserve&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;false&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;
&lt;span class="nt"&gt;      grub_device&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;false&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;partition&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;partition-0&lt;/span&gt;&lt;span class="p p-Indicator"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;{&lt;/span&gt;&lt;span class="nt"&gt;device&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;disk-sdb&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;1048576&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; flag&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;bios_grub&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; number&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; preserve&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;false&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;
&lt;span class="nt"&gt;      grub_device&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;false&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;partition&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;partition-1&lt;/span&gt;&lt;span class="p p-Indicator"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;{&lt;/span&gt;&lt;span class="nt"&gt;device&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;disk-sda&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;524288000&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; wipe&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;superblock&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; flag&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;#39;&amp;#39;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; number&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;2&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; preserve&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;false&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;
&lt;span class="nt"&gt;      grub_device&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;false&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;partition&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;partition-2&lt;/span&gt;&lt;span class="p p-Indicator"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;{&lt;/span&gt;&lt;span class="nt"&gt;device&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;disk-sdb&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;524288000&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; wipe&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;superblock&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; flag&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;#39;&amp;#39;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; number&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;2&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; preserve&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;false&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;
&lt;span class="nt"&gt;      grub_device&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;false&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;partition&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;partition-3&lt;/span&gt;&lt;span class="p p-Indicator"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;{&lt;/span&gt;&lt;span class="nt"&gt;device&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;disk-sda&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;1073741824&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; wipe&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;superblock&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; flag&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;#39;&amp;#39;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; number&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;3&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;
&lt;span class="nt"&gt;      preserve&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;false&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; grub_device&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;false&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;partition&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;partition-4&lt;/span&gt;&lt;span class="p p-Indicator"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;{&lt;/span&gt;&lt;span class="nt"&gt;device&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;disk-sdb&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;1073741824&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; wipe&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;superblock&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; flag&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;#39;&amp;#39;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; number&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;3&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;
&lt;span class="nt"&gt;      preserve&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;false&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; grub_device&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;false&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;partition&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;partition-5&lt;/span&gt;&lt;span class="p p-Indicator"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;{&lt;/span&gt;&lt;span class="nt"&gt;device&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;disk-sda&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;9136242688&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; wipe&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;superblock&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; flag&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;#39;&amp;#39;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; number&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;4&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;
&lt;span class="nt"&gt;      preserve&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;false&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; grub_device&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;false&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;partition&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;partition-6&lt;/span&gt;&lt;span class="p p-Indicator"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;{&lt;/span&gt;&lt;span class="nt"&gt;device&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;disk-sdb&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;9136242688&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; wipe&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;superblock&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; flag&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;#39;&amp;#39;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; number&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;4&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;
&lt;span class="nt"&gt;      preserve&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;false&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; grub_device&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;false&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;partition&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;partition-7&lt;/span&gt;&lt;span class="p p-Indicator"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;md0&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;raidlevel&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;raid1&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;devices&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;partition-2&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;partition-3&lt;/span&gt;&lt;span class="p p-Indicator"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;spare_devices&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;[]&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;preserve&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;false&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;raid&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;raid-0&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;md1&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;raidlevel&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;raid1&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;devices&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;partition-4&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;partition-5&lt;/span&gt;&lt;span class="p p-Indicator"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;spare_devices&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;[]&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;preserve&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;false&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;raid&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;raid-1&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;md2&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;raidlevel&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;raid1&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;devices&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;partition-6&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;partition-7&lt;/span&gt;&lt;span class="p p-Indicator"&gt;]&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;spare_devices&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;[]&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;preserve&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;false&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;raid&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="nt"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="l l-Scalar l-Scalar-Plain"&gt;raid-2&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;{&lt;/span&gt;&lt;span class="nt"&gt;fstype&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;ext4&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; volume&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;raid-0&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; preserve&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;false&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;format&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;format-0&lt;/span&gt;&lt;span class="p p-Indicator"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;{&lt;/span&gt;&lt;span class="nt"&gt;fstype&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;swap&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; volume&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;raid-1&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; preserve&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;false&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;format&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;format-1&lt;/span&gt;&lt;span class="p p-Indicator"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;{&lt;/span&gt;&lt;span class="nt"&gt;device&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;format-1&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;#39;&amp;#39;&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;mount&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;mount-1&lt;/span&gt;&lt;span class="p p-Indicator"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;{&lt;/span&gt;&lt;span class="nt"&gt;fstype&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;ext4&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; volume&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;raid-2&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; preserve&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;false&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;format&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;format-2&lt;/span&gt;&lt;span class="p p-Indicator"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;{&lt;/span&gt;&lt;span class="nt"&gt;device&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;format-2&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;/&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;mount&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;mount-2&lt;/span&gt;&lt;span class="p p-Indicator"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p p-Indicator"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p p-Indicator"&gt;{&lt;/span&gt;&lt;span class="nt"&gt;device&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;format-0&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;/boot&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;mount&lt;/span&gt;&lt;span class="p p-Indicator"&gt;,&lt;/span&gt;&lt;span class="nt"&gt; id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;mount-0&lt;/span&gt;&lt;span class="p p-Indicator"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;That's quite a long list of instructions to 'just' set up a RAID1. Although I understand all the steps involved, I'd never have come up with this by myself on short notice using just the documentation, so I think that the &lt;em&gt;autoinstall-user-data&lt;/em&gt; file is a life saver.&lt;/p&gt;
&lt;p&gt;After a manual installation in which I created a RAID1 mirror, I copied the configuration above into my own custom user-data YAML. Then I performed an unattended installation and it worked on the first try.&lt;/p&gt;
&lt;p&gt;So if you want to add LVM into the mix or build some other complex storage configuration, the easiest way to automate it is to first do a &lt;em&gt;manual install&lt;/em&gt; and then copy the relevant storage section from the autoinstall-user-data file to your custom user-data file.&lt;/p&gt;
&lt;h3&gt;Example user-data file for download&lt;/h3&gt;
&lt;p&gt;I've published a working user-data file that creates a RAID1 &lt;a href="https://louwrentius.com/files/user-data"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You can try it out by creating a virtual machine with two (virtual) hard drives. I'm assuming that you have a PXE boot environment set up.&lt;/p&gt;
&lt;p&gt;Obviously, you'll have to change the network settings for it to work.&lt;/p&gt;
&lt;h3&gt;Generating user passwords&lt;/h3&gt;
&lt;p&gt;If you do want to log on to the console with the default user, you must generate a salt+password hash and copy/paste it into the user-data file.&lt;/p&gt;
&lt;p&gt;I use the 'mkpasswd' command for this, like so: &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mkpasswd -m sha-512
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The mkpasswd utility is part of the 'whois' package.&lt;/p&gt;
&lt;h2&gt;Closing words&lt;/h2&gt;
&lt;p&gt;For those who are still on Ubuntu Server 18.04 LTS, there is no need for immediate action as this version is supported until 2023. Only support for new hardware has come to an end for the 18.04 LTS release.&lt;/p&gt;
&lt;p&gt;At some point, some time and effort will be required to migrate towards the new Autoinstall solution. Maybe this blog post helps you with that transition.&lt;/p&gt;
&lt;p&gt;It took me a few evenings to master the new user-data solution, but the fact that a manual installation basically results in a perfect pre-baked user-data file&lt;sup id="fnref:file"&gt;&lt;a class="footnote-ref" href="#fn:file"&gt;3&lt;/a&gt;&lt;/sup&gt; is a tremendous help.&lt;/p&gt;
&lt;p&gt;Maybe I just miss the point of all this effort to revamp the installer, or maybe I was simply never hindered by the limitations of the older solution. If you have any thoughts on this, feel free to let me know in the comments.&lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:here"&gt;
&lt;p&gt;I had to discover this information on StackExchange somewhere down below in &lt;a href="https://askubuntu.com/questions/1235723/automated-20-04-server-installation-using-pxe-and-live-server-image"&gt;this very good article&lt;/a&gt; after experiencing problems.&amp;#160;&lt;a class="footnote-backref" href="#fnref:here" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:less"&gt;
&lt;p&gt;I tried 384 MB and it didn't finish, just got stuck.&amp;#160;&lt;a class="footnote-backref" href="#fnref:less" title="Jump back to footnote 2 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:file"&gt;
&lt;p&gt;/var/log/installer/autoinstall-user-data&amp;#160;&lt;a class="footnote-backref" href="#fnref:file" title="Jump back to footnote 3 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Linux"></category><category term="Linux"></category></entry><entry><title>A 12.48 inch (1304x984) three-color e-paper display by Waveshare</title><link href="https://louwrentius.com/a-1248-inch-1304x984-three-color-e-paper-display-by-waveshare.html" rel="alternate"></link><published>2020-12-22T12:00:00+01:00</published><updated>2020-12-22T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2020-12-22:/a-1248-inch-1304x984-three-color-e-paper-display-by-waveshare.html</id><summary type="html">&lt;h3&gt;Introduction&lt;/h3&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Update September 2023:&lt;/strong&gt; A small horizontal and large vertical row of pixels died. Demo of defective display &lt;a href="https://www.youtube.com/watch?v=sN50r1HIcCI"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;hr&gt;

&lt;p&gt;I'm running a &lt;a href="https://louwrentius.com/this-blog-is-now-running-on-solar-power.html"&gt;solar-powered&lt;/a&gt; blog and I wanted to add a low-power display to show the daily solar 'harvest'&lt;sup id="fnref:solarwinter"&gt;&lt;a class="footnote-ref" href="#fn:solarwinter"&gt;1&lt;/a&gt;&lt;/sup&gt; and maybe some additional information. &lt;/p&gt;
&lt;p&gt;So I decided to use an …&lt;/p&gt;</summary><content type="html">&lt;h3&gt;Introduction&lt;/h3&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Update September 2023:&lt;/strong&gt; A small horizontal and large vertical row of pixels died. Demo of defective display &lt;a href="https://www.youtube.com/watch?v=sN50r1HIcCI"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;hr&gt;

&lt;p&gt;I'm running a &lt;a href="https://louwrentius.com/this-blog-is-now-running-on-solar-power.html"&gt;solar-powered&lt;/a&gt; blog and I wanted to add a low-power display to show the daily solar 'harvest'&lt;sup id="fnref:solarwinter"&gt;&lt;a class="footnote-ref" href="#fn:solarwinter"&gt;1&lt;/a&gt;&lt;/sup&gt; and maybe some additional information. &lt;/p&gt;
&lt;p&gt;So I decided to use an &lt;em&gt;e-paper display&lt;/em&gt;.  I wanted a display that would be readable from a distance, so bigger would be better. I therefore chose the &lt;a href="https://www.waveshare.com/product/displays/e-paper/epaper-1/12.48inch-e-paper-module-b.htm"&gt;Waveshare 12.48 inch e-paper&lt;/a&gt; display&lt;sup id="fnref:bigger"&gt;&lt;a class="footnote-ref" href="#fn:bigger"&gt;2&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/epaper/epaper01_large.jpg"&gt;&lt;img alt="&amp;quot;e-paper display&amp;quot;" src="https://louwrentius.com/static/images/epaper/epaper01_small.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;an example based on data from the summer, generated with Graphite&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This particular display costs $179 excluding taxes and shipping at the time this article was written.&lt;/p&gt;
&lt;h3&gt;Specifications&lt;/h3&gt;
&lt;p&gt;Waveshare sells a &lt;a href="https://www.youtube.com/watch?v=euETPGl2iYo"&gt;two-color&lt;/a&gt; (black and white) and a three-color version (black, white, red). I bought the three-color version. The three-color version is the (B) model.&lt;/p&gt;
&lt;p&gt;Specifications:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Screen size     :      12.48 inches
Resolution      :      1304 x 984
Colors          :      black, white, red
Greyscale       :      2 levels
Refresh rate    :      16 seconds
Partial refresh :      Not supported
Interfaces      :      Raspberry Pi, ESP32, STM32, Arduino
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The two-color variant of this display has a refresh rate of 8 seconds.&lt;/p&gt;
&lt;p&gt;This display is clearly quite slow. Furthermore, the lack of partial refresh support could make this display unsuitable for some applications. I was OK with this slow refresh rate. &lt;/p&gt;
&lt;p&gt;The image below demonstrates different fonts and sizes. I think &lt;em&gt;DejaVuSansMono-Bold&lt;/em&gt; looks really good on the display, better than the font supplied by Waveshare.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/epaper/epaper02_large.jpg"&gt;&lt;img alt="&amp;quot;e-paper display&amp;quot;" src="https://louwrentius.com/static/images/epaper/epaper02_small.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;The Interfaces&lt;/h3&gt;
&lt;p&gt;The display includes a microcontroller that in turn can be driven through one of four interfaces:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;A Raspberry Pi (Worked)&lt;/li&gt;
&lt;li&gt;An ESP32  (Not tested)&lt;/li&gt;
&lt;li&gt;An Arduino (Didn't work)&lt;/li&gt;
&lt;li&gt;An STM32   (Not tested)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I've tried the Arduino header with an Arduino Uno, but the supplied demo code didn't work. I did not investigate further why this was the case. It could be a problem with voltage regulation.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/epaper/epaper03_large.jpg"&gt;&lt;img alt="&amp;quot;e-paper display&amp;quot;" src="https://louwrentius.com/static/images/epaper/epaper03_small.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;In the image above, the black plastic backplate is removed. &lt;/p&gt;
&lt;h3&gt;Image quality&lt;/h3&gt;
&lt;p&gt;These e-paper displays are mostly sold as product information displays for supermarkets and other businesses. However, the quality is good enough to display images. Especially the support for red can make an image stand out.&lt;/p&gt;
&lt;p&gt;Below is an example of an image that incorporates the third (red) color.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/epaper/epaper04_large.jpg"&gt;&lt;img alt="&amp;quot;e-paper display&amp;quot;" src="https://louwrentius.com/static/images/epaper/epaper04_small.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The display seems suitable to display art.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/epaper/epaper05_large.jpg"&gt;&lt;img alt="&amp;quot;e-paper display&amp;quot;" src="https://louwrentius.com/static/images/epaper/epaper05_small.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;It looks quite good in real life (sorry for the glare). &lt;/p&gt;
&lt;h3&gt;How the display shows three colors&lt;/h3&gt;
&lt;p&gt;The display acts like it is actually &lt;em&gt;two&lt;/em&gt; displays in one. A black and white display, and a red and white display.&lt;/p&gt;
&lt;p&gt;First, the black and white image is drawn. Next, the red and white image is put on top. &lt;/p&gt;
&lt;p&gt;Because the display has to draw two images in succession, it takes 16 seconds to refresh the screen. This explains why the black-and-white version of this screen does a refresh in eight seconds: it doesn't have to refresh the red color. &lt;/p&gt;
&lt;p&gt;Please note that the entire process of displaying content on the screen takes much longer. &lt;/p&gt;
&lt;p&gt;A demonstration:&lt;/p&gt;
&lt;iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/nwa_MTNUDJU" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen&gt;&lt;/iframe&gt;


&lt;h3&gt;Displaying an image is cumbersome (On Raspberry Pi 3B+)&lt;/h3&gt;
&lt;p&gt;At the time this article was written, I could not find any information or tools for this display&lt;sup id="fnref:google"&gt;&lt;a class="footnote-ref" href="#fn:google"&gt;3&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;Many Waveshare e-paper displays are popular and have decent community support. However, this particular display seems rather unknown.&lt;/p&gt;
&lt;p&gt;Therefore, it seems that there are no tools available to display an arbitrary image on this display. You can use the example Python code to display an image but you have to follow these steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create a black-and-white version of the image&lt;/li&gt;
&lt;li&gt;Create a red-and-white version of the image, that contains only data for the red parts of the image&lt;/li&gt;
&lt;li&gt;If the source image doesn't match the required resolution, you have to resize, crop and fill the image where appropriate.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Both 'black' and 'red' images need to exactly match the resolution of the display (1304x984) or the library will abort with an error. &lt;/p&gt;
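&lt;p&gt;To make this concrete, here is a minimal, hypothetical sketch (this is neither the Waveshare library nor my tool; the function name and thresholds are made up) of how such a black/red layer split could be derived from raw RGB pixels:&lt;/p&gt;

```python
# Hypothetical sketch of the layer split the display needs: one 1-bit
# "black" layer and one 1-bit "red" layer, both at the panel resolution.
# This is NOT the Waveshare library; names and thresholds are invented.

WIDTH, HEIGHT = 1304, 984  # required resolution of the 12.48" (B) panel

def split_layers(pixels, width, height):
    """pixels: list of (r, g, b) tuples, row-major, len == width * height."""
    if len(pixels) != width * height:
        raise ValueError("image must be exactly %dx%d" % (width, height))
    black, red = [], []
    for r, g, b in pixels:
        is_red = r > 128 and g < 100 and b < 100   # crude 'reddish' test
        is_dark = (r + g + b) // 3 < 128           # crude luminance test
        red.append(1 if is_red else 0)
        black.append(1 if (is_dark and not is_red) else 0)
    return black, red

# tiny 2x2 demo instead of a full frame: white, black, red, near-white
demo = [(255, 255, 255), (0, 0, 0), (255, 0, 0), (250, 250, 250)]
black, red = split_layers(demo, 2, 2)  # pixel 1 lands in the black layer, pixel 2 in the red layer
```

&lt;p&gt;The real tool delegates this color separation (and the resize/crop/fill step) to Imagemagick.&lt;/p&gt;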
&lt;p&gt;As I found this process tedious, I automated it.&lt;/p&gt;
&lt;h3&gt;A new tool to make displaying an image easy&lt;/h3&gt;
&lt;p&gt;I've used the python library as supplied by Waveshare and created a &lt;a href="https://github.com/louwrentius/waveshare-12.48inch-3-color-e-ink"&gt;command-line tool&lt;/a&gt; (Github) on top of it to perform all the required steps as described in the previous section. I'm using Imagemagick for all the image processing. &lt;/p&gt;
&lt;p&gt;The script works like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;./display -i &amp;lt;image file&amp;gt; [--rotate 90] [--fuzz 35] [--color yellow]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The --fuzz and --color parameters may require some clarification. &lt;/p&gt;
&lt;p&gt;The color red is extracted from an image but it's not always perfect. By applying the --fuzz parameter (the argument is a percentage), it is possible to capture more of the red (or selected color) of an image. &lt;/p&gt;
&lt;p&gt;The --color option specifies which color should be 'converted' to red. By default this color is 'red' (obviously). The 'solar chart' (at the start of this article) is an example where a yellow line was converted to red.&lt;/p&gt;
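&lt;p&gt;As a rough illustration of what a fuzz percentage means (a hypothetical sketch: the actual matching is done by Imagemagick's -fuzz option, which uses its own distance metric, so the plain Euclidean RGB distance below only approximates it):&lt;/p&gt;

```python
# Hypothetical sketch of fuzz-based color matching; not the real tool,
# which hands this off to Imagemagick.

def color_distance_pct(a, b):
    """Distance between two RGB tuples as a percentage (0 = identical)."""
    d = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    max_d = (3 * 255 ** 2) ** 0.5  # distance between black and white
    return 100.0 * d / max_d

def matches(pixel, target=(255, 0, 0), fuzz=35):
    """True if the pixel is within 'fuzz' percent of the target color."""
    return color_distance_pct(pixel, target) <= fuzz

matches((255, 0, 0))                          # exact red: matches
matches((200, 60, 60))                        # washed-out red: matches with fuzz=35
matches((250, 250, 0), target=(255, 255, 0))  # the --color yellow case
```

&lt;p&gt;A higher fuzz value widens the sphere around the target color, capturing more near-red (or near-yellow) pixels.&lt;/p&gt;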
&lt;p&gt;&lt;strong&gt;Very slow&lt;/strong&gt;: it takes about &lt;em&gt;55 seconds&lt;/em&gt; to display an image using the Raspberry Pi 3B+. Half of that minute is spent converting the images to the appropriate format using Imagemagick.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Informational&lt;/strong&gt;: The Python program includes a &lt;em&gt;modified version&lt;/em&gt; of the Waveshare Python library. This library has been altered to &lt;em&gt;prevent&lt;/em&gt; double conversion of images, which significantly degrades image quality.&lt;/p&gt;
&lt;h3&gt;Slow performance&lt;/h3&gt;
&lt;p&gt;If you use the provided Python library (Python3 compatible), it takes &lt;strong&gt;over 30 seconds&lt;/strong&gt; to draw an image on the screen. (This excludes the image processing performed with the 'display' tool.)&lt;/p&gt;
&lt;p&gt;Further testing showed that the Python library converts and dithers the image before it is sent to the display, and it does so for both black and red. Dithering is performed by looping in Python over each of the 1.3 million pixels. &lt;/p&gt;
&lt;p&gt;Each of these loops (one for black, one for red) takes about 10 seconds on the Raspberry Pi 3B+, which explains why it takes so long to update the display. The combination of Python and the Raspberry Pi 3B+ is therefore not ideal in this case.&lt;/p&gt;
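&lt;p&gt;The cost of per-pixel Python loops is easy to demonstrate with a hypothetical sketch (not the library's code, and a plain 1-bit threshold rather than its dithering): the same conversion can be done pixel by pixel in Python, or in a single call to bytes.translate(), which runs the loop in C.&lt;/p&gt;

```python
# Hypothetical sketch (not the Waveshare library): 1-bit thresholding
# done with a per-pixel Python loop versus one bytes.translate() call,
# which performs the same per-byte mapping in C.

def threshold_loop(grey, cutoff=128):
    """Naive per-pixel loop, similar in spirit to the library's inner loop."""
    out = bytearray(len(grey))
    for i, v in enumerate(grey):
        out[i] = 0xFF if v >= cutoff else 0x00
    return bytes(out)

def threshold_table(grey, cutoff=128):
    """Same result via a precomputed 256-entry lookup table."""
    table = bytes(0xFF if v >= cutoff else 0x00 for v in range(256))
    return grey.translate(table)

# one row of a fake greyscale frame (a full frame is 1304 * 984 bytes)
row = bytes((x * 31 + 7) % 256 for x in range(1304))
assert threshold_loop(row) == threshold_table(row)
```

&lt;p&gt;The table-based variant avoids the interpreted inner loop entirely, which is where most of those 10 seconds per pass go.&lt;/p&gt;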
&lt;h3&gt;Evaluation&lt;/h3&gt;
&lt;p&gt;I wanted to share my experience with this display to make other people aware of its existence. The tool I created should make it simple to get up and running and display an image. &lt;/p&gt;
&lt;p&gt;It clearly has some drawbacks but due to the size, resolution and third color, it seems to be unique and may therefore be interesting.&lt;/p&gt;
&lt;p&gt;Although I never tried the display with an ESP32, I think it's ideal for a low-power picture frame. &lt;/p&gt;
&lt;p&gt;This article was discussed on &lt;a href="https://news.ycombinator.com/item?id=25522966"&gt;Hacker News&lt;/a&gt; (briefly).
This resulted in about 9000 unique visitors for this article.&lt;/p&gt;
&lt;h3&gt;Appendix A - Remark on other displays&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Please note:&lt;/strong&gt; Waveshare also sells a smaller &lt;a href="https://www.waveshare.com/10.3inch-e-Paper-HAT.htm"&gt;10.3 inch black-and-white e-paper display&lt;/a&gt; for a similar price with some significant benefits: &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Screen size     :      10.3 inches
Resolution      :      1872 x 1404
Colors          :      black and white
Greyscale       :      16 levels
Refresh rate    :      450 milliseconds
Partial refresh :      Supported
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This particular display is smaller but has a higher resolution, supports 16 grayscale levels and updates in half a second. This display may better suit your particular needs. For example, I believe that this display may have been used in &lt;a href="https://www.jamez.it/blog/2020/12/17/made-epaper-solar-powered-digital-photo-frame-call-solarpunk/"&gt;this project&lt;/a&gt;, a solar-powered digital photo frame.&lt;/p&gt;
&lt;h3&gt;Appendix B - How to make the display work on a Raspberry Pi&lt;/h3&gt;
&lt;p&gt;This information is straight from the Waveshare site but I include it for completeness and ease of use. &lt;/p&gt;
&lt;p&gt;PART I: Enable the SPI interface with raspi-config&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;sudo raspi-config&lt;/li&gt;
&lt;li&gt;select Interfacing Options &lt;/li&gt;
&lt;li&gt;Select SPI&lt;/li&gt;
&lt;li&gt;Select Yes&lt;/li&gt;
&lt;li&gt;Reboot the Raspberry Pi&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;PART II: Install the required libraries&lt;/p&gt;
&lt;p&gt;Install bcm2835&lt;/p&gt;
&lt;p&gt;&lt;a href="http://www.airspayce.com/mikem/bcm2835/"&gt;Website&lt;/a&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;wget http://www.airspayce.com/mikem/bcm2835/bcm2835-1.60.tar.gz
tar zxvf bcm2835-1.60.tar.gz 
cd bcm2835-1.60/
sudo ./configure
sudo make
sudo make check
sudo make install
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Install wiringPi&lt;/p&gt;
&lt;p&gt;&lt;a href="http://wiringpi.com/wiringpi-deprecated/"&gt;website&lt;/a&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo apt-get install wiringpi
cd /tmp
wget https://project-downloads.drogon.net/wiringpi-latest.deb
sudo dpkg -i wiringpi-latest.deb
gpio -v
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Caution&lt;/strong&gt;: The library seems to be deprecated. The Raspberry Pi 4 is supported, but future versions of the Raspberry Pi may not be.&lt;/p&gt;
&lt;p&gt;The wiringPi library is used as part of a compiled library called "DEV_Config.so" as found in the ./lib directory.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;@raspberrypi:~/epaper_display $ ldd lib/DEV_Config.so 
    linux-vdso.so.1 (0x7ee0d000)
    /usr/lib/arm-linux-gnueabihf/libarmmem-${PLATFORM}.so =&amp;gt; /usr/lib/arm-linux-gnueabihf/libarmmem-v7l.so (0x76f1e000)
    libwiringPi.so =&amp;gt; /usr/lib/libwiringPi.so (0x76f00000)
    libm.so.6 =&amp;gt; /lib/arm-linux-gnueabihf/libm.so.6 (0x76e7e000)
    libc.so.6 =&amp;gt; /lib/arm-linux-gnueabihf/libc.so.6 (0x76d30000)
    libpthread.so.0 =&amp;gt; /lib/arm-linux-gnueabihf/libpthread.so.0 (0x76d06000)
    librt.so.1 =&amp;gt; /lib/arm-linux-gnueabihf/librt.so.1 (0x76cef000)
    libcrypt.so.1 =&amp;gt; /lib/arm-linux-gnueabihf/libcrypt.so.1 (0x76caf000)
    /lib/ld-linux-armhf.so.3 (0x76f46000)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Install Python3 and required libraries&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sudo apt-get update
sudo apt-get install python3-pip
sudo apt-get install python3-pil
sudo pip3 install RPi.GPIO
sudo pip3 install spidev
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Appendix C - E-paper hacking&lt;/h3&gt;
&lt;p&gt;I see myself as a consumer and I have no desire to hack the display for faster refreshes or partial refresh support, at the risk of damaging the display in the process. &lt;/p&gt;
&lt;p&gt;However, one resource on this topic that I find very informative is a video from the YouTube channel "Applied Science" (by Ben Krasnow), called &lt;a href="https://www.youtube.com/watch?v=MsbiO8EAsGw&amp;amp;t=1436s"&gt;"E-paper hacking: fastest possible refresh rate"&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Appendix D - Available libraries&lt;/h3&gt;
&lt;p&gt;Example code for all supported platforms can be found in this &lt;a href="https://github.com/waveshare/12.48inch-e-paper"&gt;github location&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I also found &lt;a href="https://github.com/ZinggJM/GxEPD2"&gt;this GitHub repository&lt;/a&gt; that may support this display. This code (also) didn't work for me on my Arduino Uno. This could be due to a voltage mismatch, but I'm not willing to solder and potentially destroy the display.&lt;/p&gt;
&lt;h3&gt;Appendix E - Links to other e-paper projects&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://onezero.medium.com/the-morning-paper-revisited-35b407822494"&gt;Very large and expensive display&lt;/a&gt; (Medium paywall)&lt;/p&gt;
&lt;p&gt;&lt;a href="https://brettcvz.com/projects/6-upnext"&gt;E-paper calendar&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.jamez.it/blog/2020/12/17/made-epaper-solar-powered-digital-photo-frame-call-solarpunk/"&gt;Solar-powered digital photo frame&lt;/a&gt;&lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:solarwinter"&gt;
&lt;p&gt;During fall and winter, there is almost no power generation due to the very sub-optimal location of my balcony.&amp;#160;&lt;a class="footnote-backref" href="#fnref:solarwinter" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:bigger"&gt;
&lt;p&gt;You can &lt;a href="https://onezero.medium.com/the-morning-paper-revisited-35b407822494"&gt;go larger&lt;/a&gt; but &lt;a href="https://shopkits.eink.com/product/31-2˝-monochrome-epaper-display-va3200-qaa/"&gt;at a cost&lt;/a&gt;.&amp;#160;&lt;a class="footnote-backref" href="#fnref:bigger" title="Jump back to footnote 2 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:google"&gt;
&lt;p&gt;My google skills may be at fault but that point is now moot.&amp;#160;&lt;a class="footnote-backref" href="#fnref:google" title="Jump back to footnote 3 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Hardware"></category><category term="Hardware"></category></entry><entry><title>Most Technical debt is just bullshit</title><link href="https://louwrentius.com/most-technical-debt-is-just-bullshit.html" rel="alternate"></link><published>2020-09-25T12:00:00+02:00</published><updated>2020-09-25T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2020-09-25:/most-technical-debt-is-just-bullshit.html</id><summary type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;I made an offhand remark about technical debt to a friend and he interrupted me, saying: "technical debt is just bullshit". In his experience, people talking about technical debt were mostly trying to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;cover up bad code &lt;/li&gt;
&lt;li&gt;cover up unfinished work&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href="http://geek-and-poke.com/geekandpoke/2017/6/3/parenting-geeks"&gt;&lt;img alt="mess" src="https://louwrentius.com/static/images/technicaldebt.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="http://geek-and-poke.com/geekandpoke/2017/6/3/parenting-geeks"&gt;&lt;em&gt;source&lt;/em&gt;&lt;/a&gt;&lt;sup id="fnref:discovered"&gt;&lt;a class="footnote-ref" href="#fn:discovered"&gt;1&lt;/a&gt;&lt;/sup&gt; &lt;/p&gt;
&lt;p&gt;Calling these issues 'technical debt' seems …&lt;/p&gt;</summary><content type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;I made an offhand remark about technical debt to a friend and he interrupted me, saying: "technical debt is just bullshit". In his experience, people talking about technical debt were mostly trying to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;cover up bad code &lt;/li&gt;
&lt;li&gt;cover up unfinished work&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href="http://geek-and-poke.com/geekandpoke/2017/6/3/parenting-geeks"&gt;&lt;img alt="mess" src="https://louwrentius.com/static/images/technicaldebt.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="http://geek-and-poke.com/geekandpoke/2017/6/3/parenting-geeks"&gt;&lt;em&gt;source&lt;/em&gt;&lt;/a&gt;&lt;sup id="fnref:discovered"&gt;&lt;a class="footnote-ref" href="#fn:discovered"&gt;1&lt;/a&gt;&lt;/sup&gt; &lt;/p&gt;
&lt;p&gt;Calling these issues 'technical debt' seems to be a tactic of distancing oneself  from these problems. A nice way of avoiding responsibility. To sweep things under the rug.&lt;/p&gt;
&lt;p&gt;Intrigued, I decided to take a closer look at the metaphor of technical debt, to better understand what is actually meant.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Tip: &lt;a href="https://medium.com/swlh/technical-debt-isnt-real-8803ce021caa"&gt;this article&lt;/a&gt; on Medium by David Vandegrift also tackles this topic.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;A definition of technical debt&lt;/h2&gt;
&lt;p&gt;Right off the bat, I realised that my own understanding of technical debt was wrong. Most people seem to understand technical debt as: &lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;"cut a corner now, to capture short-term business value (taking on debt), and clean up later (repaying the debt)".&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I think &lt;em&gt;that's wrong.&lt;/em&gt; &lt;/p&gt;
&lt;p&gt;&lt;a href="http://wiki.c2.com/?WardExplainsDebtMetaphor"&gt;Ward Cunningham&lt;/a&gt;, who coined the metaphor of technical debt, wrote:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;You know, if you want to be able to go into debt that way by developing software that you don't completely understand, you are wise to make that software reflect your understanding as best as you can, so that when it does come time to refactor, it's clear what you were thinking when you wrote it, making it easier to refactor it into what your current thinking is now.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;In some sense, this reads to me as a form of prototyping. To try out and test design/architecture to see if it fits the problem space at hand. But it also incorporates the willingness to spend extra time in the future to change the code to better reflect the current understanding of the problem at hand.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;... if we failed to make our program align with what we then understood to be the proper way to think about our financial objects, then we were gonna continually stumble over that disagreement and that would slow us down which was like paying interest on a loan.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The misalignment of the design/architecture and the problem domain creates a  &lt;strong&gt;bottleneck&lt;/strong&gt;, slowing down future development. &lt;/p&gt;
&lt;p&gt;So I think it's clearly &lt;em&gt;not&lt;/em&gt; about taking &lt;em&gt;shortcuts&lt;/em&gt; for a &lt;em&gt;short-term business gain&lt;/em&gt;. &lt;/p&gt;
&lt;p&gt;It is more a constant reinvestment in the future. It may temporarily halt feature work, but it should result in more functionality and features in the long run. It doesn't seem short-term focussed at all to me. And you need to write 'clean' code and do your best, because it is likely that you will have to rewrite parts of it. &lt;/p&gt;
&lt;p&gt;These two articles by Ron Jeffries already discuss this in great detail.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ronjeffries.com/articles/019-01ff/tech-debt-from-twitter/"&gt;Technical Debt&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ronjeffries.com/articles/019-01ff/tech-debt/"&gt;Technical Debt? Probably not.&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;A logical error&lt;/h2&gt;
&lt;p&gt;Reading up on the topic, I noticed something very peculiar. Somehow along the way, &lt;em&gt;everything&lt;/em&gt; that hinders software development has become 'technical debt'. &lt;/p&gt;
&lt;p&gt;Anything that creates a &lt;em&gt;bottleneck&lt;/em&gt;, is suddenly put into the basket of technical debt. I started to get a strong sense that a lot of people are somehow making a &lt;a href="https://en.wikipedia.org/wiki/Affirming_the_consequent"&gt;&lt;em&gt;logical fallacy&lt;/em&gt;&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;If you have technical debt, you'll experience friction when trying to ignore it and just plow ahead. The technical debt creates a &lt;em&gt;bottleneck&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;But then people reason the wrong way around: &lt;em&gt;I notice a bottleneck in my software development process, so we have 'technical debt'&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;However, because technical debt creates a bottleneck, it &lt;em&gt;doesn't&lt;/em&gt; follow that every bottleneck is thus technical debt.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I think it's this flawed reasoning that turns every perceived obstacle into technical debt&lt;sup id="fnref:feature"&gt;&lt;a class="footnote-ref" href="#fn:feature"&gt;2&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;Maybe I'm creating a straw man argument, but I think I have some examples that show that people are thinking the wrong way around.&lt;/p&gt;
&lt;p&gt;If we look at the &lt;a href="https://en.wikipedia.org/wiki/Technical_debt#Causes"&gt;wikipedia page&lt;/a&gt; about technical debt, there is a long list of possible causes of technical debt.&lt;/p&gt;
&lt;p&gt;To cite &lt;em&gt;some&lt;/em&gt; examples:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Insufficient up-front definition&lt;/li&gt;
&lt;li&gt;Lack of clear requirements before the start of development&lt;/li&gt;
&lt;li&gt;Lack of documentation&lt;/li&gt;
&lt;li&gt;Lack of a test suite&lt;/li&gt;
&lt;li&gt;Lack of collaboration / knowledge sharing&lt;/li&gt;
&lt;li&gt;Lack of knowledge/skills resulting in bad or suboptimal code &lt;/li&gt;
&lt;li&gt;Poor technical leadership&lt;/li&gt;
&lt;li&gt;Last minute specification changes&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Notice that these issues are called 'technical debt' because they can have a similar &lt;em&gt;outcome&lt;/em&gt; as technical debt. They can create a &lt;em&gt;bottleneck&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;But why the hell would we call these issues technical debt? &lt;/p&gt;
&lt;p&gt;These issues are self-explanatory. Calling them technical debt not only seems inappropriate, it just &lt;em&gt;obfuscates&lt;/em&gt; the cause of these problems and it doesn't provide any new &lt;em&gt;insight&lt;/em&gt;. Even in conversations with laypeople.&lt;/p&gt;
&lt;h2&gt;A mess is not a Technical Debt&lt;/h2&gt;
&lt;p&gt;A &lt;a href="https://sites.google.com/site/unclebobconsultingllc/a-mess-is-not-a-technical-debt"&gt;blogpost by Uncle Bob&lt;/a&gt; with the same title&lt;sup id="fnref:wrong"&gt;&lt;a class="footnote-ref" href="#fn:wrong"&gt;3&lt;/a&gt;&lt;/sup&gt; also hits on this issue that a lot of issues are incorrectly labeled as 'technical debt'.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Unfortunately there is another situation that is sometimes called “technical debt” but that is neither reasoned nor wise. A mess.&lt;/p&gt;
&lt;p&gt;...&lt;/p&gt;
&lt;p&gt;A mess is not a technical debt. A mess is just a mess. Technical debt decisions are made based on real project constraints. They are risky, but they can be beneficial. The decision to make a mess is never rational, is always based on laziness and unprofessionalism, and has no chance of paying off in the future. A mess is always a loss.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Cunningham's definition of technical debt shows that it's a very conscious and deliberate process. Creating a mess isn't. It's totally inappropriate to call that technical debt. It's just a mess.&lt;/p&gt;
&lt;p&gt;I think that nicely relates back to that earlier list from wikipedia. Just call things out for what they actually are.&lt;/p&gt;
&lt;h2&gt;Is quibbling over 'technical debt' as a metaphor missing the point?&lt;/h2&gt;
&lt;p&gt;In &lt;a href="https://martinfowler.com/bliki/TechnicalDebtQuadrant.html"&gt;this blogpost&lt;/a&gt;, Martin Fowler addresses the blogpost by &lt;a href="https://sites.google.com/site/unclebobconsultingllc/a-mess-is-not-a-technical-debt"&gt;Uncle Bob&lt;/a&gt; and argues that technical debt as a metaphor is (still) very valuable when communicating with non-technical people. &lt;/p&gt;
&lt;p&gt;He even introduces a quadrant:&lt;/p&gt;
&lt;table&gt;
&lt;tr&gt;&lt;th&gt;&lt;/th&gt;&lt;th&gt;Reckless&lt;/th&gt;&lt;th&gt;Prudent&lt;/th&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;th&gt;Deliberate&lt;/th&gt;&lt;td&gt;"We don't have time for design"&lt;/td&gt;&lt;td&gt;"We must ship now and deal with consequences (later)"&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;th&gt;Inadvertent&lt;/th&gt;&lt;td&gt;"What's Layering?"&lt;/td&gt;&lt;td&gt;"Now we know how we should have done it"&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;

&lt;p&gt;This quadrant makes me extremely suspicious. Because in this quadrant, &lt;em&gt;everything&lt;/em&gt; is technical debt. He just invents different flavours of technical debt. It's never &lt;em&gt;not&lt;/em&gt; technical debt. It's technical debt all the way down.&lt;/p&gt;
&lt;p&gt;It seems to me that Martin Fowler twists the metaphor of technical debt into something that can never be falsified, like psychoanalysis. &lt;/p&gt;
&lt;p&gt;It's not 'bad code', a 'design flaw' or 'a mess', it's 'inadvertent &amp;amp; reckless technical debt'. What is really more descriptive of the problem?&lt;/p&gt;
&lt;p&gt;Maybe it's just my lack of understanding, but I fail to see why it is in any way helpful to call every kind of bottleneck 'technical debt'. I again fail to see how this conveys any meaning. &lt;/p&gt;
&lt;p&gt;In the end, what Fowler does is just pointing out that bottlenecks in software development can be due to the &lt;a href="https://en.wikipedia.org/wiki/Four_stages_of_competence"&gt;four stages of competence&lt;/a&gt;.&lt;/p&gt;
&lt;table&gt;
&lt;tr&gt;&lt;th&gt;&lt;/th&gt;&lt;th&gt;Incompetence&lt;/th&gt;&lt;th&gt;Competence&lt;/th&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;th&gt;Conscious&lt;/th&gt;&lt;td&gt;"We don't have time for design"&lt;/td&gt;&lt;td&gt;"We must ship now and deal with consequences (later)"&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;th&gt;Unconscious&lt;/th&gt;&lt;td&gt;"What's Layering?"&lt;/td&gt;&lt;td&gt;"Now we know how we should have done it"&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;

&lt;p&gt;I don't think we need new metaphors for things we (even laypeople) already understand. &lt;/p&gt;
&lt;h2&gt;Does technical debt (even) exist?&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://thehftguy.com/2020/08/26/technical-debt-doesnt-exist/"&gt;The HFT Guy&lt;/a&gt; goes as far as to argue that technical debt doesn't really exists, it isn't a 'real' concept. &lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;After decades of software engineering, I came to the professional conclusion that technical debt doesn’t exist.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;His argument boils down to the idea that what people call technical debt is actually mostly &lt;em&gt;maintenance&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;So reincorporating a better understanding of the problem at hand into the code (design) is seen as an &lt;em&gt;integral and natural part of software development&lt;/em&gt;, illustrated by the substitute metaphor of mining (alternating between digging and reinforcing). At least that's how I understand it.&lt;/p&gt;
&lt;p&gt;Substituting one metaphor with another, how useful is that really? But in this case it's at least less generic and more precise. &lt;/p&gt;
&lt;h2&gt;Closing words&lt;/h2&gt;
&lt;p&gt;Although Cunningham meant well, I think the metaphor of technical debt started to take on a life of its own. To a point where code that doesn't conform to some Platonic ideal is called technical debt&lt;sup id="fnref:seef"&gt;&lt;a class="footnote-ref" href="#fn:seef"&gt;4&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;Every mistake, every changing requirement, every tradeoff that becomes a bottleneck within the development process is labeled 'technical debt'. I don't think that this is constructive.&lt;/p&gt;
&lt;p&gt;I think my friend was right: the concept of technical debt has become bullshit.  It doesn't convey any better insight or meaning. On the contrary, it seems to obfuscate the true cause of a bottleneck. &lt;/p&gt;
&lt;p&gt;At this point, when people talk about technical debt, I would be very sceptical and would want more details. Technical debt doesn't actually explain why we are where we are. It has become a hollow, hand-wavy 'explanation'. &lt;/p&gt;
&lt;p&gt;With all due respect to Cunningham, because the concept is so widely misunderstood and abused, it may be better to retire it. &lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:discovered"&gt;
&lt;p&gt;I discovered this image in &lt;a href="https://medium.com/swlh/technical-debt-isnt-real-8803ce021caa"&gt;this blogpost&lt;/a&gt;.&amp;#160;&lt;a class="footnote-backref" href="#fnref:discovered" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:feature"&gt;
&lt;p&gt;if you are not working on a new feature, you are working on technical debt.&amp;#160;&lt;a class="footnote-backref" href="#fnref:feature" title="Jump back to footnote 2 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:wrong"&gt;
&lt;p&gt;I think that Uncle Bob's definition of technical debt in this article is not correct. He also defines it basically as cutting corners for short-term gain.&amp;#160;&lt;a class="footnote-backref" href="#fnref:wrong" title="Jump back to footnote 3 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:seef"&gt;
&lt;p&gt;See again Martin Fowler's &lt;a href="https://martinfowler.com/bliki/TechnicalDebtQuadrant.html"&gt;article&lt;/a&gt; about technical debt.&amp;#160;&lt;a class="footnote-backref" href="#fnref:seef" title="Jump back to footnote 4 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Development"></category><category term="None"></category></entry><entry><title>This blog is now running on solar power</title><link href="https://louwrentius.com/this-blog-is-now-running-on-solar-power.html" rel="alternate"></link><published>2020-07-06T12:00:00+02:00</published><updated>2020-07-06T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2020-07-06:/this-blog-is-now-running-on-solar-power.html</id><summary type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;This blog is now running on solar power. &lt;/p&gt;
&lt;p&gt;I've put a solar panel on my balcony, which is connected to a solar charge controller. This device charges an old worn-out car battery and provides power to a Raspberry Pi &lt;del&gt;3b+&lt;/del&gt; 4B, which in turn powers this (static) website.&lt;/p&gt;
&lt;p&gt;For …&lt;/p&gt;</summary><content type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;This blog is now running on solar power. &lt;/p&gt;
&lt;p&gt;I've put a solar panel on my balcony, which is connected to a solar charge controller. This device charges an old worn-out car battery and provides power to a Raspberry Pi &lt;del&gt;3b+&lt;/del&gt; 4B, which in turn powers this (static) website.&lt;/p&gt;
&lt;p&gt;For updates: scroll to the bottom of this article.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/solarpanelbalcony-large.jpg"&gt;&lt;img alt="solar" src="https://louwrentius.com/static/images/solarpanelbalcony-small.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Some statistics about the current status of the solar setup are shown in the sidebar to the right. The historical graph below is updated every few minutes (European time).&lt;/p&gt;
&lt;p&gt;&lt;img alt="solarstatus" src="https://louwrentius.com/solar/solar.png" /&gt;&lt;/p&gt;
&lt;h2&gt;Low-tech Magazine as inspiration&lt;/h2&gt;
&lt;p&gt;If you think you've seen a concept like this before, you are right. &lt;/p&gt;
&lt;p&gt;The website &lt;a href="https://solar.lowtechmagazine.com/power.html"&gt;Low-tech Magazine&lt;/a&gt; is the inspiration for my effort. I would really recommend visiting this site because it goes to incredible lengths to make the site energy-efficient. For example, images are dithered to save on bandwidth!&lt;/p&gt;
&lt;p&gt;Low-tech Magazine &lt;a href="https://solar.lowtechmagazine.com/about.html#offline"&gt;goes off-line&lt;/a&gt; when there isn't enough sunlight and the battery runs out, which can happen after a few days of bad weather.&lt;/p&gt;
&lt;p&gt;In January 2020, the site shared &lt;a href="https://solar.lowtechmagazine.com/2020/01/how-sustainable-is-a-solar-powered-website.html"&gt;some numbers&lt;/a&gt; about the sustainability of the solar-powered website.&lt;/p&gt;
&lt;h2&gt;The build&lt;/h2&gt;
&lt;p&gt;My build is almost identical to that of Low-tech Magazine in concept, but not nearly as efficient. I've just performed a lift-and-shift of my blog from the cloud to a Raspberry Pi.&lt;/p&gt;
&lt;p&gt;I've built my setup from parts I already owned, such as the old car battery and the Pi. The solar panel and solar charge controller were purchased new. The LCD display and current/voltage sensor have been recycled from an earlier hobby project. &lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/solarcontrollerboard-large.jpg"&gt;&lt;img alt="controller" src="https://louwrentius.com/static/images/solarcontrollerboard-large.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I've used these parts:&lt;/p&gt;
&lt;table&gt;
&lt;tr&gt;&lt;td&gt;Solar Panel&lt;/td&gt;&lt;td&gt;Monocrystalline 150 Watt 12V&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Battery&lt;/td&gt;&lt;td&gt;12 Volt Lead Acid Battery (Exide 63Ah)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Solar Charge Controller&lt;/td&gt;&lt;td&gt;Victron BlueSolar MPPT 75|10&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Voltage/Current sensor&lt;/td&gt;&lt;td&gt;INA260&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;LCD Display&lt;/td&gt;&lt;td&gt;HD44780 20x4 &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Computer&lt;/td&gt;&lt;td&gt;Raspberry Pi 4B&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Communications cable&lt;/td&gt;&lt;td&gt;VE.Direct to USB interface&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;

&lt;h3&gt;The Solar Panel&lt;/h3&gt;
&lt;p&gt;The panel is extremely over-dimensioned because my balcony is directed towards the west, so it has only a few hours a day of direct sunlight. Furthermore, the angle of the solar panel is sub-optimal. &lt;/p&gt;
&lt;p&gt;My main concern will be the winter. It is not unlikely that during the winter, the panel will not be able to generate enough energy to power the Pi and charge the battery for the night.&lt;/p&gt;
&lt;p&gt;I have also noticed that under great sunlight conditions, the panel can easily 
produce 60+ Watt&lt;sup id="fnref:ideal"&gt;&lt;a class="footnote-ref" href="#fn:ideal"&gt;1&lt;/a&gt;&lt;/sup&gt; but the battery cannot ingest power that fast.&lt;/p&gt;
&lt;p&gt;I'm not sure about the actual brand of the panel; it was the cheapest panel I could find on Amazon for the rated wattage. &lt;/p&gt;
&lt;h3&gt;The Solar Charger&lt;/h3&gt;
&lt;p&gt;It's a standard solar charger made by Victron, for small solar setups (to power a shed or mobile home). I've bought the special data cable&lt;sup id="fnref:cable"&gt;&lt;a class="footnote-ref" href="#fn:cable"&gt;2&lt;/a&gt;&lt;/sup&gt; so I can get information such as voltage, current and power usage. &lt;/p&gt;
&lt;p&gt;&lt;img alt="chargecontroller" src="https://louwrentius.com/static/images/solarcontroller.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;The controller uses a documented protocol called VE.Direct. I'm using a &lt;a href="https://github.com/karioja/vedirect"&gt;Python module&lt;/a&gt; to obtain the data. &lt;/p&gt;
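&lt;p&gt;The VE.Direct text protocol itself is simple: the controller periodically emits a frame of tab-separated label/value lines ending in a checksum, so it is also feasible to parse by hand. The sketch below is an illustration rather than the script I actually use, and the sample frame in it is made up:&lt;/p&gt;

```python
# Minimal sketch of parsing VE.Direct text-protocol frames.
# A real reader would pull these lines from the serial port
# (e.g. /dev/ttyUSB0); the frame below is a made-up example.

def parse_vedirect(lines):
    """Turn tab-separated label/value lines into a dict, skipping the checksum."""
    fields = {}
    for line in lines:
        if "\t" not in line:
            continue
        label, value = line.split("\t", 1)
        if label != "Checksum":
            fields[label] = value
    return fields

sample_frame = [
    "V\t12800",   # battery voltage in mV
    "I\t-440",    # battery current in mA (negative means discharging)
    "PPV\t23",    # panel power in W
    "Checksum\tB",
]

data = parse_vedirect(sample_frame)
print(f"Battery: {int(data['V']) / 1000:.2f} V")  # Battery: 12.80 V
print(f"Panel power: {data['PPV']} W")            # Panel power: 23 W
```

&lt;p&gt;The field labels and units (mV, mA) follow Victron's VE.Direct documentation; the checksum handling here is deliberately naive.&lt;/p&gt;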
&lt;p&gt;According to the manual, this solar charger will assure that the battery is sufficiently charged and protects against deep discharge or other conditions that could damage the battery. &lt;/p&gt;
&lt;p&gt;I feel that this is a very high-quality product. It seems sturdy, and the communications port that gives you access to the data (it even supports a Bluetooth dongle) is really nice. &lt;/p&gt;
&lt;p&gt;The controller is ever so slightly under-dimensioned for the solar panel, but since I will never get the theoretical full power of the panel due to the sub-optimal configuration, this should not be an issue. &lt;/p&gt;
&lt;h3&gt;The battery&lt;/h3&gt;
&lt;p&gt;In this day and age of Lithium-ion batteries it may be strange to use a Lead Acid battery. The fact is that this battery&lt;sup id="fnref:failed"&gt;&lt;a class="footnote-ref" href="#fn:failed"&gt;3&lt;/a&gt;&lt;/sup&gt; was free and - although too worn down for a car - can still power light loads for a very long time (days). And I could just hook up a few extra batteries to expand capacity (and increase solar energy absorption rates).&lt;/p&gt;
&lt;p&gt;To protect against short-circuits, the battery is protected by a fuse. This is critical because car batteries can produce so much current that they can be used for welding. They are dangerous.&lt;/p&gt;
&lt;p&gt;If you ever work with lead acid batteries, know this: don't discharge them beyond 50% of capacity, and ideally not beyond 70% of capacity. &lt;a href="https://batteryuniversity.com/learn/article/lead_based_batteries"&gt;The deeper the discharge, the lower the life expectancy&lt;/a&gt;. A 100% discharge of a lead acid battery will kill it very quickly.&lt;/p&gt;
&lt;p&gt;You may understand why Lead Acid batteries aren't that great for solar usage: you need to buy enough of them to ensure you never have to deep discharge them. &lt;/p&gt;
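&lt;p&gt;To make that discharge rule concrete, here is a back-of-the-envelope estimate of how long a battery like mine could carry the Pi at a 50% depth of discharge. The load figure is an assumption of mine for illustration, not a measurement:&lt;/p&gt;

```python
# Rough usable-energy estimate for a 12 V lead acid battery.
# The load wattage is an illustrative assumption, not a measurement.

battery_voltage = 12.0        # V (nominal)
battery_capacity_ah = 63.0    # Ah (the Exide battery, when new)
max_depth_of_discharge = 0.5  # stop at 50% to preserve battery life

usable_wh = battery_voltage * battery_capacity_ah * max_depth_of_discharge
pi_load_w = 4.0               # assumed average draw of the Pi plus converter

hours = usable_wh / pi_load_w
print(f"Usable energy: {usable_wh:.0f} Wh")       # Usable energy: 378 Wh
print(f"Runtime at {pi_load_w} W: {hours:.1f} h") # Runtime at 4.0 W: 94.5 h
```

&lt;p&gt;A worn battery has far less than its rated capacity, so real runtime is shorter; the point is the order of magnitude: days, not hours.&lt;/p&gt;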
&lt;h3&gt;Voltage, Current and Power Sensor&lt;/h3&gt;
&lt;p&gt;I noticed that the load current sensor of the solar charge controller was not very precise, so I added an &lt;a href="https://learn.adafruit.com/adafruit-ina260-current-voltage-power-sensor-breakout"&gt;INA260&lt;/a&gt; based sensor. This sensor uses I2C for communication, just like the LCD display. It measures voltage, current and power at a reasonably precise resolution.&lt;/p&gt;
&lt;p&gt;Using the sensor is quite simple (&lt;em&gt;pip3 install adafruit-circuitpython-ina260&lt;/em&gt;):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;table class="highlighttable"&gt;&lt;tr&gt;&lt;td class="linenos"&gt;&lt;div class="linenodiv"&gt;&lt;pre&gt;&lt;span class="normal"&gt;1&lt;/span&gt;
&lt;span class="normal"&gt;2&lt;/span&gt;
&lt;span class="normal"&gt;3&lt;/span&gt;
&lt;span class="normal"&gt;4&lt;/span&gt;
&lt;span class="normal"&gt;5&lt;/span&gt;
&lt;span class="normal"&gt;6&lt;/span&gt;
&lt;span class="normal"&gt;7&lt;/span&gt;
&lt;span class="normal"&gt;8&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/td&gt;&lt;td class="code"&gt;&lt;div&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="ch"&gt;#!/usr/bin/env python3&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;board&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;adafruit_ina260&lt;/span&gt;
&lt;span class="n"&gt;i2c&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;board&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;I2C&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;ina260_L&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;adafruit_ina260&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;INA260&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i2c&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;address&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ina260_L&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;current&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ina260_L&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;voltage&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ina260_L&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;power&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Please note that this sensor is purely optional, the precision it provides is not really required. I've used this sensor to observe that the voltage and current sensing sensors of the solar charge controller are fairly accurate, except for that of the load, which only measures the current in increments of 100 mA. &lt;/p&gt;
&lt;h3&gt;The LCD Display&lt;/h3&gt;
&lt;p&gt;The display has four lines of twenty characters and uses an HD44780 controller. It's dirt-cheap and uses the I2C bus for communications. By default, the screen is very bright, but I've used a resistor on a header for the backlight to lower the brightness.&lt;/p&gt;
&lt;p&gt;&lt;img alt="lcddisplay" src="https://louwrentius.com/static/images/lcddisplay.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;I'm using the Python RPLCD library (&lt;em&gt;pip3 install RPLCD&lt;/em&gt;) for interfacing with the LCD display. &lt;/p&gt;
&lt;p&gt;Using an LCD display in any kind of project is very simple. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;table class="highlighttable"&gt;&lt;tr&gt;&lt;td class="linenos"&gt;&lt;div class="linenodiv"&gt;&lt;pre&gt;&lt;span class="normal"&gt;1&lt;/span&gt;
&lt;span class="normal"&gt;2&lt;/span&gt;
&lt;span class="normal"&gt;3&lt;/span&gt;
&lt;span class="normal"&gt;4&lt;/span&gt;
&lt;span class="normal"&gt;5&lt;/span&gt;
&lt;span class="normal"&gt;6&lt;/span&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/td&gt;&lt;td class="code"&gt;&lt;div&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="ch"&gt;#!/usr/bin/env python3&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;RPLCD.i2c&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;CharLCD&lt;/span&gt;
&lt;span class="n"&gt;lcd&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;CharLCD&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;PCF8574&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mh"&gt;0x27&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cols&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;rows&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;lcd&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;clear&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;lcd&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cursor_pos&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# (line,column)&lt;/span&gt;
&lt;span class="n"&gt;lcd&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;write_string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;Hello&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;12 volt to 5 Volt conversion&lt;/h3&gt;
&lt;p&gt;I'm just using a simple car cigarette lighter USB adapter to power the Raspberry Pi. I'm looking at a more power-efficient converter, although I'm not sure how much efficiency I'll be able to gain, if any.&lt;/p&gt;
&lt;p&gt;Update: I've replaced the cigarette lighter usb adapter device with a buck converter, which resulted in a very slight reduction in power consumption.&lt;/p&gt;
&lt;h2&gt;Script to collect data&lt;/h2&gt;
&lt;p&gt;I've written a small Python script to collect all the data. The data is sent to two places:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It is sent to Graphite/Grafana for nice charts (serves no real purpose)&lt;/li&gt;
&lt;li&gt;It is used to generate the infographic in the sidebar to the right &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Because I don't want to wear out the SD card of the Raspberry Pi, the stats as shown in the sidebar to the right are written to a folder that is mounted on tmpfs.&lt;/p&gt;
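&lt;p&gt;The Graphite side of this is trivial: Graphite's plaintext protocol accepts one "path value timestamp" line per metric, by default over TCP port 2003. A sketch of the idea, with made-up metric names rather than the ones I actually use:&lt;/p&gt;

```python
import socket
import time

def graphite_line(path, value, timestamp=None):
    """Format one metric in Graphite's plaintext protocol."""
    if timestamp is None:
        timestamp = int(time.time())
    return f"{path} {value} {timestamp}\n"

def send_metric(host, path, value, port=2003):
    # A TCP connection per sample is wasteful, but fine at a
    # few samples per minute.
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(graphite_line(path, value).encode("ascii"))

print(graphite_line("solar.battery.voltage", 12.8, 1594029600), end="")
# solar.battery.voltage 12.8 1594029600
```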
&lt;h2&gt;The cloud as backup&lt;/h2&gt;
&lt;p&gt;When you connect to this site, you connect to a VPS running HAProxy. HAProxy determines if my blog is up and if so, will proxy between you and the Raspberry Pi. If the battery runs out, HAProxy will redirect you to an instance of my blog on the same VPS (where it was running for years).&lt;/p&gt;
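&lt;p&gt;The failover needs surprisingly little configuration: HAProxy health-checks the Pi, and the &lt;em&gt;backup&lt;/em&gt; keyword keeps the local copy idle until that check fails. A minimal sketch, with placeholder addresses rather than my real setup:&lt;/p&gt;

```
backend blog
    option httpchk GET /
    # The Raspberry Pi at home is the primary server...
    server pi 192.0.2.10:80 check
    # ...and the copy on the VPS itself only receives traffic
    # when the Pi's health check fails.
    server vps 127.0.0.1:8080 check backup
```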
&lt;p&gt;As you may understand, I still have to pay for the cloud VPS and that VPS also uses power. From an economic standpoint and an ecological standpoint, this project may make little sense. &lt;/p&gt;
&lt;h2&gt;Possible improvements&lt;/h2&gt;
&lt;h3&gt;VPS on-demand&lt;/h3&gt;
&lt;p&gt;The obvious flaw in my whole setup is the need for a cloud VPS that is hosting HAProxy and a backup instance of my blog.&lt;/p&gt;
&lt;p&gt;A better solution would be to only spawn a cloud VPS on demand, when power is getting low. To move visitors to the VPS, the DNS records should be changed to point to the right IP-address, which could be done with a few API calls.&lt;/p&gt;
&lt;p&gt;I could also follow the example of Low-tech Magazine and just accept that my blog would be offline for some time, but I don't like that.&lt;/p&gt;
&lt;h3&gt;Switching to Lithium-ion&lt;/h3&gt;
&lt;p&gt;As long as the car battery is still fine, I have no reason to switch to Lithium-ion. I've also purchased a few smaller Lead Acid batteries just to test their real-life capacity, to support projects like these. Once the car battery dies, I can use those to power this project. &lt;/p&gt;
&lt;h3&gt;The rest of the network is not solar-powered&lt;/h3&gt;
&lt;p&gt;The switches, router and modem that supply internet access are not solar-powered. Together, these devices use significantly more power, which I cannot support with my solar setup. &lt;/p&gt;
&lt;p&gt;I would have to move to a different house to be able to install sufficient solar capacity. &lt;/p&gt;
&lt;h2&gt;Other applications&lt;/h2&gt;
&lt;p&gt;During good weather conditions, the solar panel provides way more power than is required to keep the battery charged and run the Raspberry Pi.&lt;/p&gt;
&lt;p&gt;I've used the excess energy to charge my mobile devices. Although I think that's fun, if I just forget turning off my lights or amplifier for a few hours, I would already waste most of my solar gains. &lt;/p&gt;
&lt;p&gt;I guess it's the thought that counts. &lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;In the end, it was a fun hobby project for me to realise. I want to thank Low-tech Magazine for the idea, I had a lot of fun creating my (significantly worse) copy of it.&lt;/p&gt;
&lt;p&gt;If you have any ideas on how to improve this project, feel free to comment below or email me.&lt;/p&gt;
&lt;p&gt;This blog post featured on &lt;a href="https://news.ycombinator.com/item?id=23796692"&gt;hacker news&lt;/a&gt; and the Pi 3b+ had no problems handling the load.&lt;/p&gt;
&lt;h2&gt;Updates&lt;/h2&gt;
&lt;h3&gt;Car battery died&lt;/h3&gt;
&lt;p&gt;After about two weeks the old and worn-down car battery finally died. Even after a whole day of charging, the voltage of the battery dropped to 11.5 Volts in about a minute. It would no longer hold a charge. &lt;/p&gt;
&lt;p&gt;I have quite a lot of spare 12 volt 7Ah batteries that I can use as a replacement. I'm now using four of those batteries (older ones) in parallel.&lt;/p&gt;
&lt;h3&gt;Added wall charger as backup power (October 2020)&lt;/h3&gt;
&lt;p&gt;As we approached fall, the sun started to set earlier and earlier. The problem with my balcony is that I only have direct sunlight from 16:00 until sunset. My solar panel was therefore unable to keep the batteries charged. &lt;/p&gt;
&lt;p&gt;I even added a smaller 60 watt solar panel I used for earlier tests in parallel to gain a few extra watts, but that didn't help much. &lt;/p&gt;
&lt;p&gt;It is now at a point where I think it's reasonable to say that the project failed in my particular case. However, I do believe it would still be fine if I could capture the sun during the whole day (if my balcony wasn't in such a bad spot, the solar panel would be able to keep up). &lt;/p&gt;
&lt;p&gt;As the batteries were draining I decided to implement a backup power solution, to protect the batteries. It's bad for lead acid batteries to be in a discharged state for a long time. &lt;/p&gt;
&lt;p&gt;Therefore, I'm now using a battery charger that is connected to a relay that my software controls. If the voltage drops below 12.00 volts, it will start charging the batteries for 24 hours.&lt;/p&gt;
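&lt;p&gt;The decision logic behind this is simple threshold-plus-timer control. A minimal sketch of the idea; the helper below is hypothetical, and the real script talks to the Victron charge controller and a GPIO-driven relay:&lt;/p&gt;

```python
from datetime import datetime, timedelta

LOW_VOLTAGE = 12.00                   # below this, start a charge cycle
CHARGE_DURATION = timedelta(hours=24)

def charger_should_run(voltage, charging_until, now):
    """Decide whether the wall-charger relay should be on.

    Returns (relay_on, charging_until). Once a cycle starts,
    it runs for the full 24 hours regardless of voltage.
    """
    if charging_until is not None and charging_until > now:
        return True, charging_until           # cycle in progress
    if voltage >= LOW_VOLTAGE:
        return False, None                    # battery is fine
    return True, now + CHARGE_DURATION        # start a new cycle

# Example: a battery sagging to 11.8 V triggers a 24-hour charge.
on, until = charger_should_run(11.8, None, datetime(2020, 10, 1, 22, 0))
print(on, until)  # -> True 2020-10-02 22:00:00
```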
&lt;h3&gt;Upgraded Raspberry Pi 3b+ to a Raspberry Pi 4B (May 2022)&lt;/h3&gt;
&lt;p&gt;The idle power usage of the 4B is almost the same as the 3b+ model, although it requires &lt;a href="https://forums.raspberrypi.com/viewtopic.php?t=257144"&gt;some tweaking&lt;/a&gt; to reduce power usage. &lt;/p&gt;
&lt;p&gt;The old 3b+ continuously complained about under voltage (detected), but the Pi4 seems to be less picky and it works fine with the XY-3606 power converter (12V to 5V).&lt;/p&gt;
&lt;p&gt;&lt;img alt="xy3606" src="https://louwrentius.com/static/images/xy3606.jpg" /&gt;&lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:ideal"&gt;
&lt;p&gt;The position of the panel is not optimal, so I will never get the panel's full potential.&amp;#160;&lt;a class="footnote-backref" href="#fnref:ideal" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:cable"&gt;
&lt;p&gt;You don't have to buy the cable supplied by Victron, it's possible to create your own. The cable is not proprietary.&amp;#160;&lt;a class="footnote-backref" href="#fnref:cable" title="Jump back to footnote 2 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:failed"&gt;
&lt;p&gt;It failed. Please read the update at the bottom of this article.&amp;#160;&lt;a class="footnote-backref" href="#fnref:failed" title="Jump back to footnote 3 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Solar"></category><category term="solar"></category></entry><entry><title>Don't be afraid of RAID</title><link href="https://louwrentius.com/dont-be-afraid-of-raid.html" rel="alternate"></link><published>2020-05-22T12:00:00+02:00</published><updated>2020-05-22T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2020-05-22:/dont-be-afraid-of-raid.html</id><summary type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;I sense this sentiment on the internet that RAID is dangerous, that the likelihood of your RAID array failing during a rebuild is almost a certainty, because hard drives have become so large.&lt;/p&gt;
&lt;p&gt;I think nothing is further from the truth and I would like to dispel this myth …&lt;/p&gt;</summary><content type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;I sense this sentiment on the internet that RAID is dangerous, that the likelihood of your RAID array failing during a rebuild is almost a certainty, because hard drives have become so large.&lt;/p&gt;
&lt;p&gt;I think nothing is further from the truth and I would like to dispel this myth.&lt;/p&gt;
&lt;p&gt;Especially for home users and small businesses, RAID arrays are still a reliable and efficient way of storing a lot of data in a single place. &lt;/p&gt;
&lt;h2&gt;Perception of RAID reliability&lt;/h2&gt;
&lt;p&gt;There are many horror stories to be found on the internet about people at home losing their RAID array. These stories may have contributed to a negative attitude towards RAID in general. &lt;/p&gt;
&lt;p&gt;You may accuse me of victim blaming, but in many cases I do wonder whether those incidents were due to user error&lt;sup id="fnref:usererror"&gt;&lt;a class="footnote-ref" href="#fn:usererror"&gt;1&lt;/a&gt;&lt;/sup&gt;, bad luck, or RAID itself causing problems. And there is a bias in reporting: you won't hear from the countless people who have no issues.&lt;/p&gt;
&lt;p&gt;In any case, the damage is done, but I still think (software) RAID is perfectly fine. &lt;/p&gt;
&lt;h2&gt;The myth about the Unrecoverable Read Error (URE)&lt;/h2&gt;
&lt;p&gt;I think the trouble started with this &lt;a href="https://www.zdnet.com/article/why-raid-5-stops-working-in-2009/#"&gt;terrible article on ZDNET&lt;/a&gt; from 2007.&lt;/p&gt;
&lt;p&gt;In this article, it's argued that as drives become bigger, but not more &lt;em&gt;reliable&lt;/em&gt;, you will see more unrecoverable read errors (UREs). More capacity means more sectors, so more risk of one of them going bad.&lt;/p&gt;
&lt;p&gt;An URE is an incident where the hard drive can't read a sector&lt;sup id="fnref:sector"&gt;&lt;a class="footnote-ref" href="#fn:sector"&gt;5&lt;/a&gt;&lt;/sup&gt;. 
For old people like me, that sounds like the definition of a &lt;strong&gt;'bad sector'&lt;/strong&gt;. 
The article argues that on average you would encounter an URE for every 12.5 TB of data read. &lt;/p&gt;
&lt;p&gt;By the logic of the ZDNET article, just copying all data from a 14 TB drive would likely be impossible, because you would hit an URE / bad sector before the copy finishes.&lt;/p&gt;
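&lt;p&gt;The 12.5 TB figure follows from the URE rate commonly specified for consumer drives: one unrecoverable read error per 10^14 bits read. A quick back-of-the-envelope sketch; the naive independent-error model is exactly the assumption this article goes on to dispute:&lt;/p&gt;

```python
import math

# Commonly quoted consumer-drive URE spec: 1 error per 1e14 bits read.
URE_RATE_BITS = 1e14

bytes_per_ure = URE_RATE_BITS / 8    # bits to bytes
tb_per_ure = bytes_per_ure / 1e12    # bytes to decimal terabytes
print(tb_per_ure)                    # -> 12.5

# Chance of reading a full 14 TB drive without hitting a URE,
# naively modelling bit errors as independent events.
drive_bits = 14e12 * 8
p_clean_read = math.exp(-drive_bits / URE_RATE_BITS)
print(round(p_clean_read, 2))        # -> 0.33
```

&lt;p&gt;Under that model you would fail to copy the drive about two out of three times, which is clearly not what happens in practice.&lt;/p&gt;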
&lt;p&gt;This is a very big issue for RAID arrays. A RAID array rebuild consists of reading the contents of all remaining drives in their entirety&lt;sup id="fnref:zfspartial"&gt;&lt;a class="footnote-ref" href="#fn:zfspartial"&gt;2&lt;/a&gt;&lt;/sup&gt;. So you are &lt;em&gt;guaranteed&lt;/em&gt; to hit an URE during a RAID rebuild. &lt;/p&gt;
&lt;p&gt;The good news is that you don't have to worry about any of this. Because it is &lt;em&gt;not true&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Hard drives are not &lt;em&gt;that&lt;/em&gt; unreliable in practice. On the contrary. They are remarkably reliable, I would say. Just look at the &lt;a href="https://www.backblaze.com/blog/backblaze-hard-drive-stats-q1-2020/"&gt;Backblaze drive statistics&lt;/a&gt;&lt;sup id="fnref:dc"&gt;&lt;a class="footnote-ref" href="#fn:dc"&gt;6&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;The prediction of the infamous ZDNET article has not come true. The URE specification for hard drives describes a worst-case scenario and seems to be more about marketing (a way to differentiate enterprise drives from consumer drives) than about reality.&lt;/p&gt;
&lt;p&gt;If the ZDNET article were true, I myself should have encountered many UREs by now, given the many RAID array scrubs/patrol reads that have completed across my various RAID arrays.&lt;/p&gt;
&lt;p&gt;RAID has never stopped working and is still going strong.&lt;/p&gt;
&lt;p&gt;&lt;a class="embedly-card" href="https://www.reddit.com/r/DataHoarder/comments/515l3t/the_hate_raid5_gets_is_uncalled_for/d79xvls"&gt;Card&lt;/a&gt;&lt;/p&gt;
&lt;script async src="//embed.redditmedia.com/widgets/platform.js" charset="UTF-8"&gt;&lt;/script&gt;

&lt;h2&gt;Scrubbing protects against the impact of bad sectors&lt;/h2&gt;
&lt;p&gt;When a drive fails in a RAID array that can only tolerate one drive failure, it's very important that the remaining drives won't encounter any read errors. Because redundancy is lost, any read errors due to bad sectors could mean that the entire array is lost, or at least that some files are corrupted&lt;sup id="fnref:zfsisbetter"&gt;&lt;a class="footnote-ref" href="#fn:zfsisbetter"&gt;7&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;Every RAID array supports 'scrubbing'. It's a process where every sector of the RAID array is read, which in effect causes all sectors of all hard drives to be read. &lt;/p&gt;
&lt;p&gt;A scrub is a process to check for bad sectors in advance. If bad sectors are found on a hard drive, the drive can be replaced so it will not cause problems during a potential future rebuild. Replacing the drive itself will cause a rebuild, but assuming the scrub didn't find any other drives with bad sectors, that rebuild will be fine.&lt;/p&gt;
&lt;p&gt;A RAID array that doesn't undergo a regular scrub is a disaster waiting to happen. Bad sectors may be building up on one of the other drives, and when a drive actually fails, the entire array may be lost because of the undetected bad sectors on (one of) the remaining drives. &lt;/p&gt;
&lt;p&gt;&lt;em&gt;If you want to store data in a reliable way on a RAID array, you need to assure the array is scrubbed periodically.&lt;/em&gt; And even if you don't use RAID, I would recommend running a long SMART test once a month against every hard drive you own.&lt;/p&gt;
&lt;p&gt;By default, a Linux software RAID array is scrubbed once a week on Ubuntu. For details, look at the contents of /etc/cron.d/mdadm.&lt;/p&gt;
&lt;p&gt;If you use ZFS on Linux, your array is automatically scrubbed on the second Sunday of every month if you run Ubuntu.&lt;/p&gt;
&lt;p&gt;NAS vendors like Synology or QNAP have data scrubs enabled by default. Consult the manual of your particular NAS to adjust the frequency. I would recommend scrubbing at least once a month, at night. &lt;/p&gt;
&lt;h2&gt;Why is RAID 5 considered harmful?&lt;/h2&gt;
&lt;p&gt;Frankly, &lt;a href="https://louwrentius.com/raid-5-is-perfectly-fine-for-home-usage.html"&gt;I wonder that too&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;I notice a lot of people on the internet claiming that you should &lt;em&gt;never&lt;/em&gt; use RAID 5 but I disagree. It all depends on the circumstances. Finding a balance between cost and risk is important.&lt;/p&gt;
&lt;p&gt;This &lt;a href="http://www.baarf.dk"&gt;page dating back to 2003&lt;/a&gt; advocated not to use RAID 5 but that's focused on the enterprise environment and even there I see its uses.&lt;/p&gt;
&lt;p&gt;For &lt;em&gt;small&lt;/em&gt; RAID arrays with five or less drives I think RAID 5 is still a great fit. Especially if you run a small 4-bay NAS it would make total sense to use RAID 5. You get a nice balance between capacity and the cost of availability. &lt;/p&gt;
&lt;p&gt;It's not really recommended to create larger RAID 5 arrays. Compared to a single drive, a RAID array with 8 drives is roughly 8 times more likely to experience a drive failure: you multiply the risk of a single drive failing by eight. With larger arrays, a double drive failure becomes a serious risk.&lt;/p&gt;
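&lt;p&gt;That multiplication is just the probability of at least one failure among several independent drives. A small sketch; the 1.5% annual failure rate is an assumption, loosely in line with published drive statistics:&lt;/p&gt;

```python
# Probability that at least one of n drives fails within a year,
# assuming independent failures at a fixed annual failure rate (AFR).
AFR = 0.015  # assumed ~1.5% per drive per year

def p_any_failure(n, afr=AFR):
    return 1 - (1 - afr) ** n

print(round(p_any_failure(1), 4))  # -> 0.015
print(round(p_any_failure(8), 4))  # -> 0.1139, about 7.6x the single-drive risk
```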
&lt;p&gt;This is why it's really recommended to use RAID 6 for larger RAID arrays, because RAID 6 can tolerate two simultaneous drive failures. I've used RAID 6 in the past and I use RAIDZ2 (ZFS) as the basis for my current NAS.&lt;/p&gt;
&lt;p&gt;I also run an 8-drive RAID 5 in one of my servers that hosts &lt;em&gt;not so important data&lt;/em&gt; that I still want to keep around and would rather not lose, but not at every cost. It's all about a balance between risk and cost. Please also read the postscript of this post, you will like it.&lt;/p&gt;
&lt;p&gt;It is true that during a rebuild, hard drives are strained more, but unless the RAID array is also in heavy use, the load on the drive isn't that big: the data is read sequentially, which is quite easy on the drives.&lt;/p&gt;
&lt;p&gt;RAID &lt;em&gt;rebuild performance&lt;/em&gt; is mostly determined by the size of the drives and not by the number of drives in the RAID array&lt;sup id="fnref:zfsslow"&gt;&lt;a class="footnote-ref" href="#fn:zfsslow"&gt;3&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;Years ago I ran a 20-drive RAID 6 based on 1 TB drives and it did a rebuild in 5 hours. Recently I tested a rebuild of 8 drives in RAID 5 (using the same drives) and it also took almost 5 hours (4H45M). &lt;/p&gt;
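&lt;p&gt;This matches a simple estimate: a rebuild is essentially one sequential pass over each drive in parallel, so rebuild time is roughly drive capacity divided by sequential throughput, regardless of drive count. A sketch, assuming an average of about 58 MB/s for these older 1 TB drives:&lt;/p&gt;

```python
def rebuild_hours(capacity_tb, seq_mb_per_s):
    """Estimate rebuild time as one sequential pass over a drive."""
    capacity_bytes = capacity_tb * 1e12
    seconds = capacity_bytes / (seq_mb_per_s * 1e6)
    return seconds / 3600

# A 1 TB drive at an assumed ~58 MB/s average sequential speed:
print(round(rebuild_hours(1, 58), 1))  # -> 4.8
```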
&lt;h2&gt;The RAID write hole&lt;/h2&gt;
&lt;p&gt;The RAID 5/6 'write hole' is often mentioned as something you should be afraid about. &lt;/p&gt;
&lt;p&gt;Parity-based RAID like RAID 5 and RAID 6 may be affected by an issue called the '&lt;a href="https://serverfault.com/questions/844791/write-hole-which-raid-levels-are-affected"&gt;write hole&lt;/a&gt;'. To (over)simplify: if a computer would experience a sudden power failure, a write to the RAID array may be interrupted. This could cause a partial write to the RAID array, leaving it in an inconsistent state.&lt;/p&gt;
&lt;p&gt;As a side note, I would always recommend protecting your NAS with a UPS (battery backup) so your server can shut down in a clean way, before power is lost as the battery runs out. &lt;/p&gt;
&lt;p&gt;ZFS RAIDZ is not affected by the 'write hole' issue, because it writes data to a log first before writing it to the actual array&lt;sup id="fnref:performance"&gt;&lt;a class="footnote-ref" href="#fn:performance"&gt;4&lt;/a&gt;&lt;/sup&gt;.  &lt;/p&gt;
&lt;p&gt;Linux MDADM software RAID also is protected against the 'write hole' phenomenon by using a &lt;a href="https://louwrentius.com/the-impact-of-the-mdadm-bitmap-on-raid-performance.html"&gt;bitmap&lt;/a&gt; (which is enabled by default&lt;sup id="fnref2:performance"&gt;&lt;a class="footnote-ref" href="#fn:performance"&gt;4&lt;/a&gt;&lt;/sup&gt;).&lt;/p&gt;
&lt;p&gt;Hardware RAID is also protected against this by using a battery backup for the cache memory. The data in the cache memory is written to disk as soon as the computer is powered back on.&lt;/p&gt;
&lt;h2&gt;Setup alerting if you care about your data&lt;/h2&gt;
&lt;p&gt;I think that a lot of RAID horror stories are due to the fact that people may never notice any problems until it is too late because they never set up any kind of alerting (by email or other).&lt;/p&gt;
&lt;p&gt;Ideally, you would also make sure your system monitors the SMART data of your hard drives and alert when critical numbers start to rise (Reallocated Sector count and Current Pending Sector count).&lt;/p&gt;
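&lt;p&gt;A sketch of what such a check could look like: parsing the attribute table of smartctl output for the two critical counters. The sample output below is illustrative, not from a real drive:&lt;/p&gt;

```python
# Illustrative excerpt in the format of `smartctl -A /dev/sda` output.
SAMPLE = """\
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       12
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       3
"""

WATCHED = ("Reallocated_Sector_Ct", "Current_Pending_Sector")

def critical_counts(smart_output):
    """Return the raw value of each watched SMART attribute."""
    counts = {}
    for line in smart_output.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] in WATCHED:
            counts[fields[1]] = int(fields[9])
    return counts

counts = critical_counts(SAMPLE)
print(counts)  # -> {'Reallocated_Sector_Ct': 12, 'Current_Pending_Sector': 3}
if any(v > 0 for v in counts.values()):
    print("ALERT: drive is developing bad sectors")
```

&lt;p&gt;A cron job that runs something like this against every drive and emails on a non-zero count is all the alerting most home setups need.&lt;/p&gt;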
&lt;p&gt;This is also a moment of personal reflection. Do you run a RAID array? Did you setup alerting? Or could your RAID array be failing this &lt;em&gt;very moment&lt;/em&gt; and you wouldn't know?&lt;/p&gt;
&lt;p&gt;Anyway: I think a lack of proper alerting is a nice way of getting into trouble with RAID, but that's not on RAID. Any storage solution that is not monitored is just a disaster waiting to happen.&lt;/p&gt;
&lt;h2&gt;Why people choose not to use RAID&lt;/h2&gt;
&lt;p&gt;If a RAID array fails, all data is lost. Some people are not comfortable with this risk. They would rather lose the contents of some drives, but not all of them. &lt;/p&gt;
&lt;p&gt;Solutions like &lt;a href="https://unraid.net"&gt;Unraid&lt;/a&gt; and &lt;a href="https://www.snapraid.it"&gt;SnapRAID&lt;/a&gt; use one or more dedicated hard drives to store redundant (parity) data. The other hard drives are formatted with your filesystem of choice and can be accessed as normal hard drives. Although I have no experience with this product, &lt;a href="https://stablebit.com/DrivePool/Overview"&gt;StableBit DrivePool&lt;/a&gt; seems to work in a similar manner.&lt;/p&gt;
&lt;p&gt;If you would have six hard drives, thus five data drives and one parity disk, the loss of two drives would result in data loss, as with RAID 5. However, the data on the remaining four drives would still be intact. The data loss is limited to just one drive worth of data.&lt;/p&gt;
&lt;p&gt;The 'all-or-nothing' risk associated with regular software RAID is thus mitigated. I myself don't think those risks are that large, but Unraid and SnapRAID are popular products and I think they are reasonable alternatives. &lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/trapexit/mergerfs"&gt;Mergerfs&lt;/a&gt; could also be an interesting option, although it only supports mirroring.&lt;/p&gt;
&lt;h2&gt;Backups are still important&lt;/h2&gt;
&lt;p&gt;Storing your data on any kind of RAID array is &lt;em&gt;never&lt;/em&gt; a substitute for a backup. &lt;/p&gt;
&lt;p&gt;You should still copy your data to some other storage if you want to protect your data. You may choose to only make a backup of a subset of all of the data, but at least you take an informed risk.&lt;/p&gt;
&lt;h2&gt;Evaluation&lt;/h2&gt;
&lt;p&gt;I hope I have demonstrated why RAID is still a valid and reliable option for data storage. &lt;/p&gt;
&lt;p&gt;Feel free to share your own views in the comments. &lt;/p&gt;
&lt;h2&gt;P.S.&lt;/h2&gt;
&lt;p&gt;I ran a scrub on my 8-disk RAID 5 array (based on 2 TB drives) as I was writing this article. My servers are only powered on when I need them and while powered off, it's easy for them to miss their periodic scrub window.&lt;/p&gt;
&lt;p&gt;So as to practice what I preach I ran a scrub. Lo and behold, one of the drives was kicked out of my Linux software RAID array. Don't you love the irony?&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sd 0:0:4:0: [sde] tag#29 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
sd 0:0:4:0: [sde] tag#29 Sense Key : Medium Error [current] 
sd 0:0:4:0: [sde] tag#29 Add. Sense: Unrecovered read error
sd 0:0:4:0: [sde] tag#29 CDB: Read(10) 28 00 9f 42 9e 30 00 04 00 00
print_req_error: critical medium error, dev sde, sector 2671943216
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Followed by:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;md/raid:md6: Disk failure on sde, disabling device.
md/raid:md6: Operation continuing on 7 devices.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The drive was clearly kicked out because it encountered bad sectors. A quick check of the SMART data revealed that more than 300 sectors were already remapped, but the data stored in them could not be recovered, causing read errors.&lt;/p&gt;
&lt;p&gt;This drive is clearly done, although it was still operational. &lt;/p&gt;
&lt;p&gt;After swapping this defective drive with a spare replacement, I started the rebuild process, which took four hours and twenty minutes. My RAID 5 has been rebuilt and is now perfectly fine.&lt;/p&gt;
&lt;p&gt;If an event like this doesn't drive the point home that scrubs are important, I don't know what will.&lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:usererror"&gt;
&lt;p&gt;Sometimes I read what hardware people use for storage and I think about this quote by John Glenn: &lt;em&gt;‘I felt exactly how you would feel if you were getting ready to launch and knew you were sitting on top of 2 million parts — all built by the lowest bidder on a government contract.’&lt;/em&gt;&amp;#160;&lt;a class="footnote-backref" href="#fnref:usererror" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:zfspartial"&gt;
&lt;p&gt;ZFS works differently, it only reads the sectors containing actual data.&amp;#160;&lt;a class="footnote-backref" href="#fnref:zfspartial" title="Jump back to footnote 2 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:zfsslow"&gt;
&lt;p&gt;ZFS rebuilds or 'resilvers' &lt;a href="https://louwrentius.com/zfs-resilver-performance-of-various-raid-schemas.html"&gt;become slower&lt;/a&gt; as you add more drives to a RAIDZ(2/3) VDEV, it seems. I'm not sure this is still the case with more recent ZFS versions.&amp;#160;&lt;a class="footnote-backref" href="#fnref:zfsslow" title="Jump back to footnote 3 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:performance"&gt;
&lt;p&gt;Both ZFS and MDADM will take a performance hit by using a log/bitmap. Both solutions support using an SSD to accelerate the log/bitmap to remove this performance hit. Most home users probably won't need this.&amp;#160;&lt;a class="footnote-backref" href="#fnref:performance" title="Jump back to footnote 4 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;a class="footnote-backref" href="#fnref2:performance" title="Jump back to footnote 4 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:sector"&gt;
&lt;p&gt;The smallest unit of storage a drive can store, often 4K or 512 bytes for older, smaller drives.&amp;#160;&lt;a class="footnote-backref" href="#fnref:sector" title="Jump back to footnote 5 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:dc"&gt;
&lt;p&gt;Those hard drives live in a datacenter with a conditioned environment, which you probably don't have at home. But as long as you keep the temperature of your hard drives within limits, I don't think it matters that much.&amp;#160;&lt;a class="footnote-backref" href="#fnref:dc" title="Jump back to footnote 6 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:zfsisbetter"&gt;
&lt;p&gt;ZFS is both a RAID solution and a filesystem in one and can tell you exactly which file is affected. A nice feature.&amp;#160;&lt;a class="footnote-backref" href="#fnref:zfsisbetter" title="Jump back to footnote 7 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Storage"></category><category term="storage RAID"></category></entry><entry><title>What home NAS builders should understand about silent data corruption</title><link href="https://louwrentius.com/what-home-nas-builders-should-understand-about-silent-data-corruption.html" rel="alternate"></link><published>2020-04-23T12:00:00+02:00</published><updated>2020-04-23T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2020-04-23:/what-home-nas-builders-should-understand-about-silent-data-corruption.html</id><summary type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;When it comes to dealing with storage in a DIY NAS context, two important topics come up:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Unrecoverable read errors (UREs) or what old people like me call 'bad sectors'&lt;/li&gt;
&lt;li&gt;Silent data corruption (data corruption unnoticed by the storage layers)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I get a strong impression that people tend to …&lt;/p&gt;</summary><content type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;When it comes to dealing with storage in a DIY NAS context, two important topics come up:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Unrecoverable read errors (UREs) or what old people like me call 'bad sectors'&lt;/li&gt;
&lt;li&gt;Silent data corruption (data corruption unnoticed by the storage layers)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I get a strong impression that people tend to confuse those concepts. However, they often come up when people evaluate their options for buying or building their own do-it-yourself NAS. &lt;/p&gt;
&lt;p&gt;In this article, I want to make a clear distinction between the two and assess their risk. This may help you evaluate these risks and make an informed decision.&lt;/p&gt;
&lt;h2&gt;Unrecoverable read errors (due to bad sectors)&lt;/h2&gt;
&lt;p&gt;When a hard drive hits a 'bad sector', it means that it can't read the contents of that particular sector anymore. &lt;/p&gt;
&lt;p&gt;If the hard drive is unable to read that data even after multiple attempts, the operating system will return an Unrecoverable Read Error (URE).&lt;/p&gt;
&lt;p&gt;This is an example (on Linux) of a drive experiencing read errors, as pulled from /var/log/syslog (culled a bit for readability):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sd 0:0:0:0: [sda] tag#19 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
sd 0:0:0:0: [sda] tag#19 Sense Key : Medium Error [current] 
sd 0:0:0:0: [sda] tag#19 Add. Sense: Unrecovered read error
sd 0:0:0:0: [sda] tag#19 CDB: Read(10) 28 00 02 1c 8c 00 00 00 98 00
blk_update_request: critical medium error, dev sda, sector 35425280 op 0x0:(READ)
sd 0:0:0:0: [sda] tag#16 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
sd 0:0:0:0: [sda] tag#16 Sense Key : Medium Error [current] 
sd 0:0:0:0: [sda] tag#16 Add. Sense: Unrecovered read error
sd 0:0:0:0: [sda] tag#16 CDB: Read(10) 28 00 02 1c 8d 00 00 00 88 00
blk_update_request: critical medium error, dev sda, sector 35425536 op 0x0:(READ)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If a sector cannot be read, the data stored in that sector is lost. And in my experience, if you encounter a single bad sector, soon, there will be more. So if this happens, it's time to replace the hard drive. &lt;/p&gt;
&lt;p&gt;We use RAID to protect against drive failure. RAID (no matter the implementation) also can deal with 'partial failure' such as a drive encountering bad sectors. &lt;/p&gt;
&lt;p&gt;In a RAID array, a drive encountering unrecoverable read errors is just kicked out of the array, so it doesn't 'hang' or 'stall' the entire RAID array. &lt;/p&gt;
&lt;p&gt;Please note that this behaviour does depend on the particular RAID solution of choice&lt;sup id="fnref:myobservation"&gt;&lt;a class="footnote-ref" href="#fn:myobservation"&gt;1&lt;/a&gt;&lt;/sup&gt;. The point is though that bad sectors or UREs are a common event and RAID solutions can deal with them properly.&lt;/p&gt;
&lt;p&gt;The real problem with bad sectors (resulting in UREs) is that they can remain undiscovered until it is too late. So to uncover them in an early state, it's &lt;em&gt;very important&lt;/em&gt; to run regular data scrubs. I've &lt;a href="https://louwrentius.com/scrub-your-nas-hard-drives-regularly-if-you-care-about-your-data.html"&gt;written an article&lt;/a&gt; specifically about this topic. &lt;/p&gt;
&lt;h2&gt;Silent data corruption&lt;/h2&gt;
&lt;p&gt;An unrecoverable read error means that we can't read (a portion of) a file. Although it is unfortunate - because we better have an &lt;em&gt;intact&lt;/em&gt; backup of that file - we are also fortunate. &lt;/p&gt;
&lt;p&gt;Why are we fortunate?&lt;/p&gt;
&lt;p&gt;We are fortunate because the storage system - the hard drive and in turn the operating system - reported an error. We were able to diagnose the problem and take action. &lt;/p&gt;
&lt;p&gt;But it is possible that bits and bytes get mangled without your hard drive, SATA controller or operating system noticing. Somewhere, somehow, a bit is read or transmitted as a 1 where it should have been a 0. &lt;/p&gt;
&lt;p&gt;This is &lt;em&gt;really bad&lt;/em&gt;, because this &lt;em&gt;data corruption&lt;/em&gt; is &lt;strong&gt;undetected&lt;/strong&gt;, it is '&lt;em&gt;silent&lt;/em&gt;', there is no notification. &lt;/p&gt;
&lt;p&gt;Imagine what happens: the corrupted file is happily backed up by your backup software, because it's unaware that anything is wrong. And by the time you discover the data corruption, the original pristine file is no longer part of the backup (rotated out). You are left with a lot of backups of a corrupted file. We have encountered data loss.&lt;/p&gt;
&lt;p&gt;This is one of the scariest kinds of data loss. Because it's very difficult to detect. You'll have to constantly calculate the checksum of a file and verify it's still ok. &lt;/p&gt;
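&lt;p&gt;A minimal file-level sketch of that idea: record a checksum per file once, then re-verify later. The manifest format here is made up for illustration; a checksumming filesystem does the equivalent per block, automatically:&lt;/p&gt;

```python
import hashlib

def file_checksum(path):
    """SHA-256 of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(paths):
    """Map each path to its current checksum."""
    return {str(p): file_checksum(p) for p in paths}

def verify(manifest):
    """Yield paths whose current checksum no longer matches."""
    for path, digest in manifest.items():
        if file_checksum(path) != digest:
            yield path
```

&lt;p&gt;Build the manifest once, store it somewhere safe, and run verify periodically. Any path it yields was silently corrupted - or legitimately modified, which is exactly why doing this by hand is tedious.&lt;/p&gt;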
&lt;p&gt;And that's - although rather simplified - exactly what &lt;a href="https://en.wikipedia.org/wiki/ZFS"&gt;ZFS&lt;/a&gt; does (amongst many other things). ZFS uses checksums at the block level and thus verifies on every read that the data contained in the block is still valid. ZFS is one of the few file systems that has this very powerful feature (BTRFS is another example).  &lt;/p&gt;
&lt;p&gt;Regular RAID arrays (be it hardware-based or software-based) cannot detect silent data corruption (although it could be possible with RAID6). So it must be clear that ZFS is capable of protecting against a risk 'regular' RAID cannot cope with. &lt;/p&gt;
&lt;h2&gt;Is silent data corruption a significant threat for home DIY NAS builders?&lt;/h2&gt;
&lt;p&gt;Although silent data corruption is a very scary threat, from what I can tell there is no significant &lt;em&gt;independent&lt;/em&gt; evidence that the risk of &lt;em&gt;silent&lt;/em&gt; data corruption is so high that the &lt;em&gt;average home DIY NAS builder&lt;/em&gt; should take this risk into account&lt;sup id="fnref:notwrong"&gt;&lt;a class="footnote-ref" href="#fn:notwrong"&gt;2&lt;/a&gt;&lt;/sup&gt;. &lt;/p&gt;
&lt;p&gt;Maybe I'm wrong, but I think many people mistakenly confuse UREs or unrecoverable read errors (caused by bad sectors) with &lt;em&gt;silent&lt;/em&gt; data corruption. And I think that's wrong, because there's nothing &lt;em&gt;silent&lt;/em&gt; about an unrecoverable read error.&lt;/p&gt;
&lt;p&gt;The truth is that hard drives are in fact very reliable when it comes to &lt;em&gt;silent&lt;/em&gt; data corruption, because they make heavy use of error detection and correction algorithms. A significant portion of the raw capacity of a hard drive is sacrificed to store redundant information to aid in detecting and correcting data corruption. According to &lt;a href="https://en.wikipedia.org/wiki/Hard_disk_drive#Error_rates_and_handling"&gt;Wikipedia&lt;/a&gt;, hard drives used &lt;a href="https://en.wikipedia.org/wiki/Reed–Solomon_error_correction"&gt;Reed-Solomon&lt;/a&gt; error correction in the past and more modern drives use &lt;a href="https://en.wikipedia.org/wiki/Low-density_parity-check_code#Applications_2"&gt;LDPC&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;These error correction codes ensure data integrity. Although 'soft' read errors may occur, there is enough additional redundant information stored on the hard drive to detect errors and even reconstruct the data (to some extent). Your hard drive handles this all by itself; it's part of normal operation. &lt;/p&gt;
&lt;p&gt;So this is my point: it's important to understand that there is a lot of protection against silent data corruption in a hard drive. The risk of silent data corruption is therefore small&lt;sup id="fnref:memory"&gt;&lt;a class="footnote-ref" href="#fn:memory"&gt;3&lt;/a&gt;&lt;/sup&gt;. &lt;/p&gt;
&lt;p&gt;Sometimes the read data is so garbled that even the error correction codes cannot reconstruct the data as it was originally stored, and that's what we then experience as an unrecoverable read error. But the disk notices! And it will report it! This is not silent at all!&lt;/p&gt;
&lt;p&gt;To really create silent data corruption, something very special needs to happen. And to be very clear: &lt;em&gt;such events do happen&lt;/em&gt;. But they are very rare. &lt;/p&gt;
&lt;p&gt;Somehow, a bit must flip and this event is not detected by the error correction algorithm. Maybe the bit flipped in the hard drive cache memory when it was read from the drive. Maybe it flipped during transport over the SATA cable.&lt;/p&gt;
&lt;p&gt;But it's fun to realise that the SATA protocol also has error detection embedded in the protocol for reliable data transmission. It's error detection and correction &lt;em&gt;all the way down&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;The risk that &lt;em&gt;silent&lt;/em&gt; data corruption happens is thus very small, especially for home users.&lt;/p&gt;
&lt;p&gt;Again, make no mistake: the risk is real, and larger-scale storage solutions (SANs / storage arrays) with hundreds, thousands or tens of thousands of drives really do have to take the risk of silent data corruption into account. At scale, even very small risks become a certainty.&lt;/p&gt;
&lt;p&gt;Enterprise storage solutions often employ their own proprietary solutions to protect against silent data corruption. Although it depends on the particular solution&lt;sup id="fnref:sector"&gt;&lt;a class="footnote-ref" href="#fn:sector"&gt;4&lt;/a&gt;&lt;/sup&gt;, it's often part of the storage array. ZFS was revolutionary because it put the data integrity checking in the filesystem itself.&lt;/p&gt;
&lt;p&gt;So if you think the risk of silent data corruption is still high enough that you should protect yourself against it, I would recommend considering &lt;a href="https://louwrentius.com/please-use-zfs-with-ecc-memory.html"&gt;ECC memory&lt;/a&gt; to protect against corrupted data in memory. To be frank: I consider non-ECC memory a more likely cause of silent data corruption than the storage subsystem, which already employs all these error detection and correction algorithms. Non-ECC memory is totally unprotected.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Anecdote&lt;/strong&gt;: I myself run a &lt;a href="https://louwrentius.com/71-tib-diy-nas-based-on-zfs-on-linux.html"&gt;24-drive NAS&lt;/a&gt; based on ZFS and it has been rock-solid for 6 years straight. &lt;/p&gt;
&lt;p&gt;&lt;img alt="mynasimage" src="https://louwrentius.com/static/images/zfsnas01.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;From time to time, I do run disk 'scrubs', which can take quite some time. Although I have many terabytes of data protected by ZFS, not a single instance of silent data corruption has been detected. And I have performed so many scrubs that I've read more than a &lt;em&gt;petabyte&lt;/em&gt; worth of data. &lt;/p&gt;
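&lt;p&gt;For reference, starting and checking a ZFS scrub is a two-command affair (the pool name 'tank' is just an example):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zpool scrub tank        # start a scrub in the background
zpool status -v tank    # shows scrub progress and any checksum errors found
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;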
&lt;p&gt;&lt;strong&gt;Anecdote&lt;/strong&gt;: Somebody made a mistake and used the &lt;a href="https://changelog.complete.org/archives/9769-silent-data-corruption-is-real"&gt;wrong type of cable&lt;/a&gt; to connect the hard drives to the HBA controller card. This caused actual silent data corruption. Because that person was running ZFS, it was detected, so ZFS saved his data. This is an example where ZFS did protect a person against silent data corruption.&lt;/p&gt;
&lt;h2&gt;Evaluation&lt;/h2&gt;
&lt;p&gt;I hope that the difference between unrecoverable read errors and silent data corruption is clear and that we should not confuse the two. They have different risk profiles associated with them.&lt;/p&gt;
&lt;p&gt;Furthermore, I have argued that silent data corruption is &lt;em&gt;real&lt;/em&gt; and a serious issue &lt;em&gt;at scale&lt;/em&gt;, and that at that scale it is dealt with accordingly. &lt;/p&gt;
&lt;p&gt;However, I've also argued that unless you are a home user running a small datacenter in your basement, the risk of silent data corruption is so small that it is &lt;em&gt;reasonable&lt;/em&gt; for a DIY NAS builder to accept that risk and not seek specific protection against it.&lt;/p&gt;
&lt;p&gt;The decision is up to you. If you want to go with ZFS and protect against silent data corruption, you should also be aware and accept the &lt;a href="https://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html"&gt;cost of ZFS&lt;/a&gt;. I myself have accepted that cost for my own NAS, but it's OK if you don't. If you care about silent data corruption so much, please also consider using &lt;a href="https://louwrentius.com/please-use-zfs-with-ecc-memory.html"&gt;ECC-memory&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;But in my opinion, you are &lt;em&gt;not&lt;/em&gt; taking an unreasonable risk if you choose to go with Unraid, Snapraid, Linux kernel RAID, Windows Storage Spaces or other options in the same vein. I would say that this is &lt;em&gt;reasonable&lt;/em&gt; and up to you. &lt;/p&gt;
&lt;p&gt;Remember: the famous vendors of home user NAS boxes all seem to use regular &lt;a href="https://www.synology.com/en-us/knowledgebase/DSM/tutorial/Storage/What_is_Synology_Hybrid_RAID_SHR"&gt;Linux kernel RAID&lt;/a&gt; under the hood. And they seem to think that's fine.&lt;/p&gt;
&lt;p&gt;In the end, what really matters is a solution that suits your needs and also fits your budget and level of expertise. Can you fix problems when something goes wrong?&lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:myobservation"&gt;
&lt;p&gt;I've noticed while testing with this particular drive that the drive was not kicked out of the array, and it just kept trying to read, grinding the Linux software RAID array to a halt. Removing the drive from the array fixed this. There is a 'failfast' option that only works with RAID1 or RAID10.&amp;#160;&lt;a class="footnote-backref" href="#fnref:myobservation" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:notwrong"&gt;
&lt;p&gt;I don't want to suggest in any way that it would be &lt;em&gt;wrong&lt;/em&gt; to take silent data corruption into account, but just to say I think it's not mandatory to really fret over it.&amp;#160;&lt;a class="footnote-backref" href="#fnref:notwrong" title="Jump back to footnote 2 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:memory"&gt;
&lt;p&gt;The most significant risk is that enterprise grade hard drives use on-board ECC cache memory, whereas consumer drives use &lt;em&gt;non-ECC&lt;/em&gt; cache memory. So silently corrupted data in the cache memory of the drive could be a risk.&amp;#160;&lt;a class="footnote-backref" href="#fnref:memory" title="Jump back to footnote 3 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:sector"&gt;
&lt;p&gt;Storage vendors often choose to reformat hard drives with larger &lt;a href="https://www.fujitsu.com/downloads/strsys/system/dx_s3_Oracle_Linux_T10_PI_E16G_en_011.pdf"&gt;sector sizes&lt;/a&gt;&lt;sup id="fnref:af2"&gt;&lt;a class="footnote-ref" href="#fn:af2"&gt;5&lt;/a&gt;&lt;/sup&gt;. Those larger sectors then also incorporate additional checksum data to better protect against data corruption or unrecoverable read errors.&amp;#160;&lt;a class="footnote-backref" href="#fnref:sector" title="Jump back to footnote 4 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:af2"&gt;
&lt;p&gt;https://www.seagate.com/files/staticfiles/docs/pdf/whitepaper/safeguarding-data-from-corruption-technology-paper-tp621us.pdf&amp;#160;&lt;a class="footnote-backref" href="#fnref:af2" title="Jump back to footnote 5 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Storage"></category><category term="Storage"></category></entry><entry><title>Scrub your NAS hard drives regularly if you care about your data</title><link href="https://louwrentius.com/scrub-your-nas-hard-drives-regularly-if-you-care-about-your-data.html" rel="alternate"></link><published>2020-04-22T12:00:00+02:00</published><updated>2020-04-22T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2020-04-22:/scrub-your-nas-hard-drives-regularly-if-you-care-about-your-data.html</id><summary type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Lots of people run a NAS at home. Maybe it's a &lt;a href="https://en.wikipedia.org/wiki/Commercial_off-the-shelf"&gt;COTS&lt;/a&gt; device from one of the well-known vendors&lt;sup id="fnref:vendors"&gt;&lt;a class="footnote-ref" href="#fn:vendors"&gt;1&lt;/a&gt;&lt;/sup&gt;, or it's a custom-built solution (DIY&lt;sup id="fnref:diy"&gt;&lt;a class="footnote-ref" href="#fn:diy"&gt;2&lt;/a&gt;&lt;/sup&gt;) based on hardware you bought and assembled yourself. &lt;/p&gt;
&lt;p&gt;Buying or building a NAS is one thing, but operating it in a …&lt;/p&gt;</summary><content type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Lots of people run a NAS at home. Maybe it's a &lt;a href="https://en.wikipedia.org/wiki/Commercial_off-the-shelf"&gt;COTS&lt;/a&gt; device from one of the well-known vendors&lt;sup id="fnref:vendors"&gt;&lt;a class="footnote-ref" href="#fn:vendors"&gt;1&lt;/a&gt;&lt;/sup&gt;, or it's a custom-built solution (DIY&lt;sup id="fnref:diy"&gt;&lt;a class="footnote-ref" href="#fn:diy"&gt;2&lt;/a&gt;&lt;/sup&gt;) based on hardware you bought and assembled yourself. &lt;/p&gt;
&lt;p&gt;Buying or building a NAS is one thing, but operating it in a way that assures that you won't lose data is something else. &lt;/p&gt;
&lt;p&gt;Obviously, the best way to protect against data loss is to make regular backups. So ideally, even if the NAS were to go up in flames, you would still have your data. &lt;/p&gt;
&lt;p&gt;Since backup storage costs money, people make tradeoffs. They may decide to take the risk and only backup a small portion of the really important data and &lt;a href="https://www.reddit.com/r/DataHoarder/comments/g4wfke/moment_of_silence_for_1_petabyte_data_loss/?utm_source=share&amp;amp;utm_medium=web2x"&gt;take their chances&lt;/a&gt; with the rest. &lt;/p&gt;
&lt;p&gt;That is their right. But still, it would be nice to reduce the risk of data loss to a minimum.&lt;/p&gt;
&lt;h2&gt;The risk: bad sectors&lt;/h2&gt;
&lt;p&gt;The problem is that hard drives may develop &lt;em&gt;bad sectors&lt;/em&gt; over time. Bad sectors are tiny portions of the drive that have become unreadable&lt;sup id="fnref:ure"&gt;&lt;a class="footnote-ref" href="#fn:ure"&gt;3&lt;/a&gt;&lt;/sup&gt;. However small a sector may be, any data stored in it is now lost, and this could cause data corruption (one or more corrupt files).&lt;/p&gt;
&lt;p&gt;&lt;em&gt;This is the thing:&lt;/em&gt; those bad sectors may never be discovered until it is too late!&lt;/p&gt;
&lt;p&gt;With today's 14+ TB hard drives, it's easy to store vast amounts of data. Most of that data is probably not frequently accessed, especially at home. &lt;/p&gt;
&lt;p&gt;One or more of your hard drives may be developing bad sectors and you wouldn't even know it. How would you? &lt;/p&gt;
&lt;p&gt;Your data might be at risk right at this moment while you are reading this article. &lt;/p&gt;
&lt;p&gt;A well-known disaster scenario in which people tend to lose data is double hard drive failure where only one drive failure can be tolerated (RAID 1 (mirror) or RAID 5, and in &lt;em&gt;some&lt;/em&gt; scenarios RAID 10).&lt;/p&gt;
&lt;p&gt;In this scenario, a hard drive in their RAID array has failed and a second drive (one of the remaining good drives) has developed bad sectors. That means effectively a second drive has failed although the drive may still seem operational. Due to the bad sectors, data required to rebuild the array is lost because there is no longer any redundancy&lt;sup id="fnref:zfs"&gt;&lt;a class="footnote-ref" href="#fn:zfs"&gt;4&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;If you run a (variant of) RAID 5, you can only lose a single disk, so if a second disk fails, you lose all data&lt;sup id="fnref:raid6"&gt;&lt;a class="footnote-ref" href="#fn:raid6"&gt;5&lt;/a&gt;&lt;/sup&gt;. &lt;/p&gt;
&lt;h2&gt;The mitigation: periodic scrubbing / checking of your disks&lt;/h2&gt;
&lt;p&gt;The only way to find out if a disk has developed bad sectors is to just read them all. Yes: &lt;em&gt;all the sectors&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Checking your hard drives for bad sectors (or other issues) is called 'data scrubbing'. If you bought a NAS from QNAP, Synology or another vendor, there is a menu which allows you to control how often and when you want to perform a data scrub. &lt;/p&gt;
&lt;p&gt;RAID solutions are perfectly capable of handling bad sectors. For a RAID array, it's just equivalent to a failed drive and an affected drive will be kicked out of the RAID array if bad sectors start causing read errors. The big issue we want to prevent is that multiple drives start to develop bad sectors at the same time, because that is the equivalent of multiple simultaneous drive failures, which many RAID arrays can't recover from.&lt;/p&gt;
&lt;p&gt;For home users I would recommend checking all hard drives once a month. Configure the data scrub to run at night (often the default), because a scrub may impact performance in a way that can be noticeable and even inconvenient. &lt;/p&gt;
&lt;p&gt;Your vendor may have already configured a default schedule for data scrubs, so you may have been protected all along. If you take a look, at least you know.&lt;/p&gt;
&lt;p&gt;People who have built a DIY NAS have to set up and configure periodic scrubs themselves, or they may not happen at all. That said, I've noticed that on Ubuntu, all Linux software RAID (MDADM) arrays are checked once a month at night. So if you use Linux software RAID you may already be scrubbing.&lt;/p&gt;
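&lt;p&gt;On Linux you can also trigger and monitor such a check by hand through sysfs; a minimal sketch, assuming an array called /dev/md0 and root privileges:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;echo check &amp;gt; /sys/block/md0/md/sync_action   # start a scrub of /dev/md0
cat /proc/mdstat                             # shows check progress
cat /sys/block/md0/md/mismatch_cnt           # ideally 0 after the check completes
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;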
&lt;p&gt;A drive that develops bad sectors should be replaced as soon as possible. It should no longer be trusted. The goal of scrubbing is to identify these drives as soon as possible. You don't want to get in a position that multiple drives have started developing bad sectors. You can only prevent that risk by scanning for bad sectors periodically and replacing bad drives. &lt;/p&gt;
&lt;p&gt;You should not be afraid of having to spend a ton of money replacing drives all the time. Bad sectors are not &lt;em&gt;that&lt;/em&gt; common. But they are common enough that you should check for them. There is a reason why NAS vendors offer the option to run data scrubs and recommend them&lt;sup id="fnref:enterprise"&gt;&lt;a class="footnote-ref" href="#fn:enterprise"&gt;6&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;h2&gt;You probably forgot to configure email alerting&lt;/h2&gt;
&lt;p&gt;If a disk in your NAS would fail, how would you know? If the scrub would discover bad sectors, would you ever notice&lt;sup id="fnref:smallc"&gt;&lt;a class="footnote-ref" href="#fn:smallc"&gt;7&lt;/a&gt;&lt;/sup&gt;?&lt;/p&gt;
&lt;p&gt;The answer may be: only when it's too late. Maybe a drive already failed and you haven't even noticed yet!&lt;/p&gt;
&lt;p&gt;When you've finished reading this article, it may be the right moment to take some time to check the status of your NAS and &lt;em&gt;configure email alerting&lt;/em&gt; (or any other alerting mechanism that works for you). Have your NAS send out a test message just to confirm alerting actually works!&lt;/p&gt;
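&lt;p&gt;On a DIY Linux NAS with MDADM, alerting can be as simple as the sketch below (it assumes a working local mail setup; the address is an example):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# /etc/mdadm/mdadm.conf
MAILADDR you@example.com

# Send a test alert for every array to verify mail delivery actually works:
mdadm --monitor --scan --test --oneshot
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;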
&lt;h2&gt;Closing words&lt;/h2&gt;
&lt;p&gt;So I would like to advise you to do two things:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Make sure your NAS runs a data scrub once a month.&lt;/li&gt;
&lt;li&gt;Make sure your NAS is able to email alerts about failed disks or scrubs.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;These actions allow you to fix problems before they become catastrophic. &lt;/p&gt;
&lt;h2&gt;P.S. S.M.A.R.T. monitoring&lt;/h2&gt;
&lt;p&gt;Hard drives have a built-in monitoring system called &lt;a href="https://en.wikipedia.org/wiki/S.M.A.R.T."&gt;S.M.A.R.T.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;If you have a NAS from one of the NAS vendors, it will alert on SMART data that indicates a drive is failing. DIY builders may have to spend time setting up this kind of monitoring manually. &lt;/p&gt;
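&lt;p&gt;With smartmontools installed, a quick manual check of the SMART attributes most associated with bad sectors could look like this (the device name is just an example):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# Non-zero values for these attributes are a warning sign for the drive:
smartctl -a /dev/sda | grep -E 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;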
&lt;p&gt;For more information about SMART I would recommend &lt;a href="https://harddrivegeek.com/current-pending-sector-count/"&gt;this article&lt;/a&gt; and &lt;a href="https://harddrivegeek.com/reallocated-sector-count/"&gt;this one&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;Linux users can take a look at the SMART status of their hard drives with &lt;a href="https://github.com/louwrentius/showtools"&gt;this tool&lt;/a&gt; (which I made).&lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:vendors"&gt;
&lt;p&gt;QNAP, Synology, Netgear, Buffalo, Thecus, Western Digital, and so on.&amp;#160;&lt;a class="footnote-backref" href="#fnref:vendors" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:diy"&gt;
&lt;p&gt;FreeNAS, Unraid, Windows/Linux with Snapraid, OpenMediaVault, or a custom solution, and so on.&amp;#160;&lt;a class="footnote-backref" href="#fnref:diy" title="Jump back to footnote 2 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:ure"&gt;
&lt;p&gt;Bad sectors cause 'unrecoverable read errors' or UREs. Bad sectors have nothing to do with 'silent data corruption'. There's nothing silent about unrecoverable read errors: hard drives report read errors back to the operating system, so they won't go unnoticed.&amp;#160;&lt;a class="footnote-backref" href="#fnref:ure" title="Jump back to footnote 3 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:zfs"&gt;
&lt;p&gt;A DIY NAS based on ZFS (FreeNAS is based on ZFS) may help mitigate the impact of such an event. ZFS can continue reading data from the remaining drives, even if bad sectors are encountered. Some files will be corrupted, but most of the data would still be readable. I think this capability is by itself not enough reason to pick a NAS based on ZFS because ZFS also has a cost involved that you need to accept too. For my &lt;a href="https://louwrentius.com/71-tib-diy-nas-based-on-zfs-on-linux.html"&gt;large NAS&lt;/a&gt; I have chosen ZFS because I was prepared to 'pay the cost'.&amp;#160;&lt;a class="footnote-backref" href="#fnref:zfs" title="Jump back to footnote 4 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:raid6"&gt;
&lt;p&gt;Some people may choose to go with RAID 6, which tolerates two simultaneous drive failures, but they also tend to run larger arrays with more drives, which also increases the risk of drive failure or of one of the drives developing bad sectors.&amp;#160;&lt;a class="footnote-backref" href="#fnref:raid6" title="Jump back to footnote 5 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:enterprise"&gt;
&lt;p&gt;Enterprise storage solutions (Even entry level storage arrays) often run patrol reads both on individual hard drives and also the RAID arrays on top of them. They are also enabled by default.&amp;#160;&lt;a class="footnote-backref" href="#fnref:enterprise" title="Jump back to footnote 6 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:smallc"&gt;
&lt;p&gt;At one time I worked for a small company that ran their own (single) email server. One of the system administrators discovered totally by accident that one of the two drives in a RAID 1 had failed. It turns out we were running on a &lt;em&gt;single&lt;/em&gt; drive for months before we discovered it, because we forgot to setup email alerting. We didn't lose data, but we came close.&amp;#160;&lt;a class="footnote-backref" href="#fnref:smallc" title="Jump back to footnote 7 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Storage"></category><category term="Storage"></category></entry><entry><title>Benchmarking storage with Fio and generating charts of the results</title><link href="https://louwrentius.com/benchmarking-storage-with-fio-and-generating-charts-of-the-results.html" rel="alternate"></link><published>2020-04-21T12:00:00+02:00</published><updated>2020-04-21T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2020-04-21:/benchmarking-storage-with-fio-and-generating-charts-of-the-results.html</id><summary type="html">&lt;h3&gt;Introduction&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://github.com/axboe/fio"&gt;Fio&lt;/a&gt; is a widely-used tool for performing storage benchmarks. Fio offers a lot of options to create a storage benchmark that would best reflect your needs. Fio allows you to assess if your storage solution is up to its task and how much headroom it has. &lt;/p&gt;
&lt;p&gt;Fio outputs &lt;em&gt;.json …&lt;/em&gt;&lt;/p&gt;</summary><content type="html">&lt;h3&gt;Introduction&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://github.com/axboe/fio"&gt;Fio&lt;/a&gt; is a widely-used tool for performing storage benchmarks. Fio offers a lot of options to create a storage benchmark that would best reflect your needs. Fio allows you to assess if your storage solution is up to its task and how much headroom it has. &lt;/p&gt;
&lt;p&gt;Fio outputs &lt;em&gt;.json&lt;/em&gt; and &lt;em&gt;.log&lt;/em&gt; files that need further processing if you would like to make nice charts. Charts may help better communicate your test results to other people. &lt;/p&gt;
&lt;p&gt;To make graphs of Fio benchmark data, I've created &lt;a href="https://github.com/louwrentius/fio-plot"&gt;fio-plot&lt;/a&gt;. With fio-plot you can generate charts like:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/comparingraid10raid5new.png"&gt;&lt;img alt="example1" src="https://louwrentius.com/static/images/comparingraid10raid5new.png" /&gt;&lt;/a&gt;
&lt;a href="https://louwrentius.com/static/images/3d_RAID5_10K_NOBITMAP.png"&gt;&lt;img alt="example2" src="https://louwrentius.com/static/images/3d_RAID5_10K_NOBITMAP.png" /&gt;&lt;/a&gt;
&lt;a href="https://louwrentius.com/static/images/impactofqueuedepth.png"&gt;&lt;img alt="example3" src="https://louwrentius.com/static/images/impactofqueuedepth.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;It's very common that you want to run multiple benchmarks with different parameters to compare results. To generate the data of the charts, many benchmarks need to be run. This process needs to be automated.&lt;/p&gt;
&lt;h3&gt;Automating Fio benchmarks&lt;/h3&gt;
&lt;p&gt;I've chosen to build my own tool to automate Fio benchmarking. This tool is called &lt;a href="https://github.com/louwrentius/fio-plot/tree/master/benchmark_script"&gt;bench_fio&lt;/a&gt; and is part of &lt;a href="https://github.com/louwrentius/fio-plot"&gt;fio-plot&lt;/a&gt;. I'm aware that fio itself ships with a tool called &lt;a href="https://github.com/axboe/fio/blob/master/tools/genfio"&gt;genfio&lt;/a&gt; to generate fio job files with multiple benchmarks. It's up to you what you want to use. Bench-fio is tailored to output data in a way that aligns with fio-plot.&lt;/p&gt;
&lt;p&gt;Bench-fio allows you to benchmark loads with different iodepths, simultaneous jobs, block sizes and other parameters. A benchmark run can consist of hundreds of tests and take many hours. &lt;/p&gt;
&lt;p&gt;When you run bench_fio, you can expect output like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;████████████████████████████████████████████████████
    +++ Fio Benchmark Script +++

Job template:                  fio-job-template.fio
I/O Engine:                    libaio
Number of benchmarks:          98
Estimated duration:            1:38:00
Devices to be tested:          /dev/md0
Test mode (read/write):        randrw
IOdepth to be tested:          1 2 4 8 16 32 64
NumJobs to be tested:          1 2 4 8 16 32 64
Blocksize(s) to be tested:     4k
Time per test (s):             60
Mixed workload (% Read):       75 90

████████████████████████████████████████████████████
4% |█                        | - [0:04:02, 1:35:00]-]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Bench-fio runs real-time and shows the expected remaining time. It also shows all relevant parameters that have been configured for this benchmark run. This makes it easier to spot any mis-configurations.&lt;/p&gt;
&lt;p&gt;Notice that this benchmark consists of 98 individual tests: iodepth x NumJobs x mixed workload parameters (7 x 7 x 2). At the standard 60 seconds per test, that adds up to the estimated duration of 1:38:00 shown above. &lt;/p&gt;
&lt;p&gt;This is an example of the command-line syntax:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;./bench_fio --target /dev/md0 -t device --mode randrw -o RAID_ARRAY --readmix 75 90
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;More examples can be found &lt;a href="https://github.com/louwrentius/fio-plot/blob/master/benchmark_script/README.md"&gt;here&lt;/a&gt;.&lt;/p&gt;</content><category term="Storage"></category><category term="Fio"></category></entry><entry><title>My home network setup based on managed switches and VLANs</title><link href="https://louwrentius.com/my-home-network-setup-based-on-managed-switches-and-vlans.html" rel="alternate"></link><published>2020-04-10T12:00:00+02:00</published><updated>2020-04-10T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2020-04-10:/my-home-network-setup-based-on-managed-switches-and-vlans.html</id><summary type="html">&lt;h2&gt;My home networking setup&lt;/h2&gt;
&lt;p&gt;I live in a two-story apartment: the top floor has my utilities closet and my living room, and the bottom floor contains a bedroom with all my servers and networking gear. &lt;/p&gt;
&lt;p&gt;So this is my setup (click for a bigger version):&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/homenetworkvlan.png"&gt;&lt;img alt="home" src="https://louwrentius.com/static/images/homenetworkvlan.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I like to run …&lt;/p&gt;</summary><content type="html">&lt;h2&gt;My home networking setup&lt;/h2&gt;
&lt;p&gt;I live in a two-story apartment: the top floor has my utilities closet and my living room, and the bottom floor contains a bedroom with all my servers and networking gear. &lt;/p&gt;
&lt;p&gt;So this is my setup (click for a bigger version):&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/homenetworkvlan.png"&gt;&lt;img alt="home" src="https://louwrentius.com/static/images/homenetworkvlan.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I like to run my own router, but my utilities closet is not the safest place in terms of security and climate. &lt;/p&gt;
&lt;p&gt;By default, most people who run their own home router will use a box with two network interfaces, one connected to the (cable) modem and the other one connected to the home network. &lt;/p&gt;
&lt;p&gt;I could have done the same thing, by running a cable from the modem to my router, and a second cable back up towards my closet (and living room). &lt;/p&gt;
&lt;p&gt;However, I didn't want to run multiple cables from my utilities closet to my bedroom downstairs. I saw no need for that, because I can use VLANs. &lt;/p&gt;
&lt;p&gt;The small 8-port switch in the closet is connected with a single (long) cable to the 24-port switch I have in my bedroom downstairs. This switch connects to my router and multiple servers. &lt;/p&gt;
&lt;p&gt;I've setup a trunk between these two switches where my internet traffic flows over 'VLAN 100' and my home network uses 'VLAN 200'. &lt;/p&gt;
&lt;p&gt;The router, an &lt;a href="https://n40l.fandom.com/wiki/HP_MicroServer_N40L_Wiki"&gt;HP N40L&lt;/a&gt;, has only a single network interface. I just expose the two VLANs as 'tagged' and let the router route traffic between them. No need for a second interface (as many home setups have).&lt;/p&gt;
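&lt;p&gt;On a Linux-based router, tagged VLAN sub-interfaces on a single NIC can be sketched like this (the interface name and addressing are examples, not my actual configuration):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ip link add link eth0 name eth0.100 type vlan id 100   # internet (WAN) VLAN
ip link add link eth0 name eth0.200 type vlan id 200   # home network VLAN
ip link set eth0.100 up
ip link set eth0.200 up
ip addr add 192.168.200.1/24 dev eth0.200              # gateway address for the home VLAN
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;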
&lt;p&gt;So in my setup there are two trunks, one between the two switches and the other one between the bedroom switch and my router. All other devices are connected to untagged network ports, in their appropriate VLAN.&lt;/p&gt;
&lt;p&gt;The small switch in the closet is responsible for carrying my home network to the switch in my living room. &lt;/p&gt;
&lt;p&gt;The Raspberry Pi connects to my smart meter to collect information about my power and gas usage. &lt;/p&gt;
&lt;p&gt;I'm aware that most people with intensive storage workloads won't run those workloads on hard drives anymore, that ship has sailed a long time ago. SSDs have taken their place (or 'the cloud').&lt;/p&gt;
&lt;p&gt;For those few left who do use hard drives in Linux software RAID setups and run …&lt;/p&gt;</summary><content type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;I'm aware that most people with intensive storage workloads won't run those workloads on hard drives anymore, that ship has sailed a long time ago. SSDs have taken their place (or 'the cloud').&lt;/p&gt;
&lt;p&gt;For those few left who do use hard drives in Linux software RAID setups and run workloads that generate a lot of random IOPS, this may still be relevant. &lt;/p&gt;
&lt;p&gt;I'm not sure how much a bitmap affects MDADM software RAID arrays based on solid state drives as I have not tested them.&lt;/p&gt;
&lt;h2&gt;The purpose of the bitmap&lt;/h2&gt;
&lt;p&gt;By default, when you create a new software RAID array with MDADM, a bitmap is also configured. The &lt;a href="https://louwrentius.com/speeding-up-linux-mdadm-raid-array-rebuild-time-using-bitmaps.html"&gt;purpose of the bitmap&lt;/a&gt; is to speed up recovery of your RAID array in case the array gets out of sync. &lt;/p&gt;
&lt;p&gt;A bitmap won't help speed up the recovery from drive failure, but the RAID array can get out of sync due to a hard reset or power failure during write operations.&lt;/p&gt;
&lt;h2&gt;The performance impact&lt;/h2&gt;
&lt;p&gt;During some benchmarking of various RAID arrays, I noticed very bad &lt;em&gt;random write&lt;/em&gt; IOPS performance. No matter what the test conditions were, I got the random write performance of a single drive, although the RAID array should perform better. &lt;/p&gt;
&lt;p&gt;Then I noticed that the array was configured with a bitmap. Just for testing purposes, I removed the bitmap altogether with:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mdadm --grow --bitmap=none /dev/md0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Random write IOPS figures improved immediately. &lt;a href="http://manpages.ubuntu.com/manpages/eoan/en/man8/mdadm.8.html"&gt;This resource&lt;/a&gt; explains why:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;If  the  word internal is given, then the bitmap is stored with the metadata
on the array, and so is replicated on all devices.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;So when you write data to your RAID array, the bitmap is also constantly updated. Since that bitmap lives on each drive in the array, it's probably obvious that this really deteriorates random write IOPS.&lt;/p&gt;
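&lt;p&gt;If you want to reproduce this, a short fio random-write test before and after removing the bitmap makes the difference visible. A sketch (the parameters are just examples; this writes directly to the array, so only run it on a scratch device):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# 60-second 4k random write test against the raw array device:
fio --name=randwrite --filename=/dev/md0 --rw=randwrite --bs=4k \
    --iodepth=16 --direct=1 --runtime=60 --time_based --group_reporting
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;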
&lt;h2&gt;Some examples of the performance impact&lt;/h2&gt;
&lt;h3&gt;Bitmap disabled&lt;/h3&gt;
&lt;p&gt;An example of a RAID 5 array with 8 x 7200 RPM drives. &lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/3d_RAID5_NOBITMAP.png"&gt;&lt;img alt="nobitmap" src="https://louwrentius.com/static/images/3d_RAID5_NOBITMAP.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Another example with 10,000 RPM drives:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/3d_RAID5_10K_NOBITMAP.png"&gt;&lt;img alt="10knobitmap" src="https://louwrentius.com/static/images/3d_RAID5_10K_NOBITMAP.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Bitmap enabled (internal)&lt;/h3&gt;
&lt;p&gt;We observe significantly lower random write IOPS performance overall: &lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/3d_RAID5_WITHBITMAP.png"&gt;&lt;img alt="bitmapenabled" src="https://louwrentius.com/static/images/3d_RAID5_WITHBITMAP.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Which is also true for 10,000 RPM drives.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/3d_RAID5_10K_BITMAP.png"&gt;&lt;img alt="10kbitmap" src="https://louwrentius.com/static/images/3d_RAID5_10K_BITMAP.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;External bitmap&lt;/h2&gt;
&lt;p&gt;You could keep the bitmap and still get great random write IOPS by putting the bitmap on a separate SSD. Since my boot device is an SSD, I tested this option like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mdadm --grow --bitmap=/raidbitmap /dev/md0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I noticed excellent random write IOPS &lt;em&gt;with&lt;/em&gt; this external bitmap, similar to running without a bitmap at all. An external bitmap has its own risks and caveats, so make sure it really fits your needs.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Note: external bitmaps are only known to work on ext2  and  ext3. 
Storing bitmap files on other filesystems may result in serious problems.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;For home users who build DIY NAS servers and who do run MDADM RAID arrays, I would recommend leaving the bitmap &lt;em&gt;enabled&lt;/em&gt;. The impact on sequential file transfers is negligible and the benefit of a quick RAID resync is very obvious.&lt;/p&gt;
&lt;p&gt;Only if you have a workload that would cause a ton of random writes on your storage server would I consider disabling the bitmap. An example of such a use case would be running virtual machines with a heavy write workload.&lt;/p&gt;
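&lt;p&gt;For reference, toggling the write-intent bitmap on an existing array is a single mdadm command. This is a sketch assuming your array is /dev/md0; adjust the device name to your own setup.&lt;/p&gt;

```shell
# Disable the internal write-intent bitmap (faster random writes,
# but a full resync after an unclean shutdown):
mdadm --grow --bitmap=none /dev/md0

# Re-enable the internal bitmap (the recommended default):
mdadm --grow --bitmap=internal /dev/md0
```
&lt;p&gt;Both operations work online, so you can benchmark your own workload with and without the bitmap before deciding.&lt;/p&gt;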
&lt;h2&gt;Update on bitmap-chunks&lt;/h2&gt;
&lt;p&gt;Based on feedback in the comments, I've performed a benchmark on a new RAID 5 array, setting the --bitmap-chunk option to 128M (the default is 64M). &lt;/p&gt;
&lt;p&gt;The results seem to be significantly &lt;em&gt;worse&lt;/em&gt; than the default for random write IOPS performance. &lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/3D_RAID5_BITMAPCHUNK_128M.png"&gt;&lt;img alt="bitmapenabled128" src="https://louwrentius.com/static/images/3D_RAID5_BITMAPCHUNK_128M.png" /&gt;&lt;/a&gt;&lt;/p&gt;</content><category term="Storage"></category><category term="mdadm"></category></entry><entry><title>Cryptocurrencies are detrimental to society</title><link href="https://louwrentius.com/cryptocurrencies-are-detrimental-to-society.html" rel="alternate"></link><published>2020-03-27T12:00:00+01:00</published><updated>2020-03-27T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2020-03-27:/cryptocurrencies-are-detrimental-to-society.html</id><summary type="html">&lt;p&gt;Want to listen to this article?&lt;/p&gt;
&lt;figure&gt;
    &lt;audio
        controls
        src="https://louwrentius.com/files/audio/20190327_cryptocurrencies.mp3"&gt;
            Your browser does not support the
            &lt;code&gt;audio&lt;/code&gt; element.
    &lt;/audio&gt;
&lt;/figure&gt;

&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;How would you explain the inner workings of bitcoin to a person in simple, understandable terms? &lt;/p&gt;
&lt;p&gt;&lt;a href="https://twitter.com/am_anatiala/status/1030223326826950656"&gt;&lt;img alt="twitterimage" src="https://louwrentius.com/static/images/sudokusforheroin.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://twitter.com/am_anatiala/status/1030223326826950656"&gt;source&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This explanation seems perfect to me because it illustrates some seriously problematic aspects of cryptocurrencies in one simple …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Want to listen to this article?&lt;/p&gt;
&lt;figure&gt;
    &lt;audio
        controls
        src="https://louwrentius.com/files/audio/20190327_cryptocurrencies.mp3"&gt;
            Your browser does not support the
            &lt;code&gt;audio&lt;/code&gt; element.
    &lt;/audio&gt;
&lt;/figure&gt;

&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;How would you explain the inner workings of bitcoin to a person in simple, understandable terms? &lt;/p&gt;
&lt;p&gt;&lt;a href="https://twitter.com/am_anatiala/status/1030223326826950656"&gt;&lt;img alt="twitterimage" src="https://louwrentius.com/static/images/sudokusforheroin.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://twitter.com/am_anatiala/status/1030223326826950656"&gt;source&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This explanation seems perfect to me because it illustrates some seriously problematic aspects of cryptocurrencies in one simple sentence.&lt;/p&gt;
&lt;p&gt;It captures the unimaginable &lt;em&gt;energy waste&lt;/em&gt; of mining cryptocurrencies. And it also captures the dark side of cryptocurrencies: &lt;em&gt;facilitating crime&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;At this point cryptocurrency enthusiasts are rolling their eyes and sighing. This point has been made many times over. I know, I'm definitely not the first to criticise cryptocurrencies&lt;sup id="fnref:fn1"&gt;&lt;a class="footnote-ref" href="#fn:fn1"&gt;1&lt;/a&gt;&lt;/sup&gt; this way.&lt;/p&gt;
&lt;p&gt;I do have a simple challenge though:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Can you please show me the benefit to society of cryptocurrencies?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Please, don't come up with theoretical or future possibilities. Cryptocurrencies have existed for &lt;em&gt;eleven years&lt;/em&gt;, they should have something to show for &lt;em&gt;right now&lt;/em&gt;. After many hours of reading up on the topic, I have not been able to find any tangible benefits that would justify the effort and resources spent on them.&lt;/p&gt;
&lt;p&gt;The downsides of cryptocurrencies are an entirely different matter. They are very, very clear to me. But let's not go there directly. What fun would that be?&lt;/p&gt;
&lt;h2&gt;The solution in search of a problem&lt;/h2&gt;
&lt;p&gt;As I see it, cryptocurrencies are entangled in a desperate search for a problem to solve. They are the answer to a question nobody asked&lt;sup id="fnref:asked"&gt;&lt;a class="footnote-ref" href="#fn:asked"&gt;2&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;Many cryptocurrency advocates see the decentralised (distributed) nature of cryptocurrencies as a tangible benefit. Cryptocurrencies are most often not controlled by any government or single entity.&lt;/p&gt;
&lt;p&gt;Aside from whether this is true in practice&lt;sup id="fnref:aside"&gt;&lt;a class="footnote-ref" href="#fn:aside"&gt;3&lt;/a&gt;&lt;/sup&gt;, they seem to imply that it's a bad thing that governments control their own currencies. Well, last time I checked governments control their currencies to keep them as stable as possible. Frankly, that actually sounds like exactly what I would want from a currency. &lt;/p&gt;
&lt;p&gt;Stability.&lt;/p&gt;
&lt;p&gt;You can call cryptocurrencies many things but 'stable' is definitely not one of them. Cryptocurrencies are highly volatile. In cryptocoin, a loaf of bread could suddenly cost twice as much as the day before.&lt;/p&gt;
&lt;p&gt;Although cryptocurrencies haven't seen any significant adoption as a payment method - due to their volatility - they have seen adoption in 'less stable countries' where they are basically the &lt;a href="https://bitcoinist.com/bitcoin-rescue-economically-unstable-countries/"&gt;'lesser of two evils'&lt;/a&gt;. I mean: you know things are bad if a volatile cryptocurrency is a safer option than the native currency of your country.&lt;/p&gt;
&lt;p&gt;The argument in the end boils down to: if your society is already &lt;em&gt;in serious trouble&lt;/em&gt;, maybe cryptocurrencies could provide 'some benefit'. If the people involved have reliable access to internet. And internet access is most often controlled by the government.&lt;/p&gt;
&lt;p&gt;To my knowledge, cryptocurrency as a payment method has actually only seen true adoption within the world of dark markets such as &lt;a href="https://en.wikipedia.org/wiki/Silk_Road_(marketplace)"&gt;Silk Road&lt;/a&gt;, in which bitcoin rose to prominence. Nonetheless, after eleven years, cryptocurrencies have no traction in the regular 'legitimate' markets as a real payment method. &lt;/p&gt;
&lt;p&gt;The reason why is obvious: existing payment methods are much easier to use and feel much safer. And the volatility of cryptocurrencies only strengthens the case for these conventional methods.&lt;/p&gt;
&lt;p&gt;If cryptocurrencies are a solution to anything at all they seem to be 'bad' solutions at best.&lt;/p&gt;
&lt;h2&gt;Are cryptocurrencies in fact a Ponzi or pyramid scheme?&lt;/h2&gt;
&lt;p&gt;It depends on the particular currency, but I think the &lt;a href="https://www.counterpunch.org/2018/02/09/is-cryptocurrency-a-ponzi-scheme/"&gt;case can be made for sure&lt;/a&gt;. I mean: why not both?&lt;/p&gt;
&lt;p&gt;Cryptocurrencies &lt;a href="https://www.forbes.com/sites/jayadkisson/2018/04/14/bitcoin-and-cryptocurrency-unsuitable-at-any-speed/"&gt;have no intrinsic value&lt;/a&gt;. They are only worth what people are willing to pay for them. So the cryptocurrency advocates needed to drum up demand, to create a market where previously none existed. This resulted in wild visions of the future, elaborate jargon-filled smokescreens that argue cryptocurrencies would take over the world. Get in quickly or you miss out!&lt;/p&gt;
&lt;p&gt;And so many people were afraid to miss out, creating an enormous cryptocurrency hype, starting in November of 2017, spilling into 2018 when the bubble burst. The end result? A few people got very rich and the vast majority lost money.&lt;/p&gt;
&lt;p&gt;It seems to me this all is a combination of a &lt;a href="https://en.wikipedia.org/wiki/Ponzi_scheme"&gt;Ponzi scheme&lt;/a&gt; with the component of active recruitment found in &lt;a href="https://en.wikipedia.org/wiki/Pyramid_scheme"&gt;pyramid schemes&lt;/a&gt;. The value of the currencies must come from somewhere, right?&lt;/p&gt;
&lt;p&gt;So please tell me how all of this benefits our society? Creating a handful of rich people at the expense of a lot of other people? &lt;a href="https://qz.com/1217460/cryptocurrency-is-a-giant-multi-level-marketing-scheme/"&gt;Is that it&lt;/a&gt;? &lt;/p&gt;
&lt;p&gt;The graveyard of &lt;a href="https://deadcoins.com"&gt;dead cryptocurrencies&lt;/a&gt; only shows how many people or startups try to get a piece of the action. And so many of them are outright scams. Are in fact all cryptocurrencies scams at their core?&lt;/p&gt;
&lt;h2&gt;The downsides of cryptocurrencies&lt;/h2&gt;
&lt;p&gt;I hope I have established that cryptocurrencies provide no tangible benefits to society. But they do have a lot of downsides. I observe the following: &lt;/p&gt;
&lt;table border="1" cellpadding="7" cellspacing="2" vertical-align="top"&gt;
&lt;tr&gt;&lt;th width=280px&gt;Topic&lt;/th&gt;&lt;th&gt;Remark&lt;/th&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Trafficking of illegal goods&lt;/td&gt;&lt;td&gt;Drugs, weapons, child pornography, and so on.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Trafficking of illegal services&lt;/td&gt;&lt;td&gt;Murder for hire&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Tax evasion&lt;/td&gt;&lt;td&gt;-&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Money laundering&lt;/td&gt;&lt;td&gt;-&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Ransomware&lt;/td&gt;&lt;td&gt;Holds data hostage in encrypted form&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Pollution / energy waste&lt;/td&gt;&lt;td&gt;Crypto miners use a lot of electricity&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Lack of security&lt;/td&gt;&lt;td&gt;Several cryptocurrency exchanges have been hacked&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Crypto scams&lt;/td&gt;&lt;td&gt;New cryptocoins are created just to scam people&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;

&lt;p&gt;The easy retort to this table is: "'normal' currencies like the dollar or euro also facilitate almost all of those illegal things", which is true but it misses the larger point. &lt;/p&gt;
&lt;p&gt;Those regular currencies provide tremendous value to our societies. Our societies are built upon them and they facilitate almost everything we do. The topics listed in the table are just possible negative side-effects of regular currencies. Their clear benefits outweigh such downsides.&lt;/p&gt;
&lt;p&gt;Cryptocurrencies don't seem to have any such upsides. They seem to be made to exclusively facilitate cryptocurrency speculation and crime.&lt;/p&gt;
&lt;h2&gt;Cryptocurrencies facilitate crime&lt;/h2&gt;
&lt;p&gt;I won't discuss all the topics in the previous table but I do want to highlight a few.&lt;/p&gt;
&lt;h3&gt;Dark markets&lt;/h3&gt;
&lt;p&gt;I think we all remember &lt;a href="https://en.wikipedia.org/wiki/Silk_Road_(marketplace)"&gt;Silk Road&lt;/a&gt;, a now defunct &lt;a href="https://en.wikipedia.org/wiki/Darknet_market"&gt;darknet marketplace&lt;/a&gt; that allowed people to anonymously buy - amongst other things - drugs and guns&lt;sup id="fnref:snfn"&gt;&lt;a class="footnote-ref" href="#fn:snfn"&gt;4&lt;/a&gt;&lt;/sup&gt;.
Silk Road was the first large-scale application of bitcoin as a means of payment.&lt;/p&gt;
&lt;p&gt;Silk Road started out with just drugs, but guns soon followed. This deeply depraved world of dark markets &lt;a href="https://www.nytimes.com/2020/01/28/technology/bitcoin-black-market.html"&gt;is very much enabled&lt;/a&gt; by cryptocurrencies because the parties involved in a transaction are so hard to identify.&lt;/p&gt;
&lt;h3&gt;Ransomware&lt;/h3&gt;
&lt;p&gt;Ransomware seems to be almost exclusively enabled by digital currencies. Not just cryptocurrencies specifically, but they do enable this type of crime because tracing the payments back to the criminal is so difficult.&lt;/p&gt;
&lt;p&gt;The damage caused by ransomware is so obviously devastating. You can have many opinions on the fact that many critical organisations such as hospitals or universities don't have their computer security under control. &lt;/p&gt;
&lt;p&gt;The real problem is that cryptocurrencies make these kinds of attacks on businesses and institutions very low-risk and highly profitable. In my own country a &lt;a href="https://www.dutchnews.nl/news/2020/01/maastricht-university-paid-hackers-to-get-back-system-access/"&gt;university&lt;/a&gt; was targeted by such an attack and allegedly they paid the ransom. The disruption to its services was substantial.&lt;/p&gt;
&lt;h2&gt;Energy waste&lt;/h2&gt;
&lt;p&gt;As cryptocurrencies rose in value, it started to become profitable to 'mine' them. It started out with regular computers, but soon, we could use videocards to accelerate cryptocurrency mining, which are very power hungry. &lt;/p&gt;
&lt;p&gt;Later on, FPGAs and ASICs were built to further accelerate mining performance. Entire companies spun up to build those miners and host them in large datacenters with cheap electricity. The scale of the operation is rather enormous. &lt;/p&gt;
&lt;p&gt;According to an &lt;a href="https://www.forbes.com/sites/niallmccarthy/2019/07/08/bitcoin-devours-more-electricity-than-switzerland-infographic/"&gt;article dating to July 2019&lt;/a&gt;, just bitcoin mining consumes more electricity than Switzerland. The article links to an &lt;a href="https://www.cbeci.org"&gt;online tool&lt;/a&gt; that tracks this power usage in real-time based on some estimates. &lt;/p&gt;
&lt;p&gt;&lt;a href="https://commons.wikimedia.org/wiki/File:Cryptocurrency_Mining_Farm.jpg"&gt;&lt;img alt="mf" src="https://louwrentius.com/static/images/miningfarm.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;a href="https://commons.wikimedia.org/wiki/File:Cryptocurrency_Mining_Farm.jpg"&gt;Image source&lt;/a&gt;&lt;/em&gt; &lt;/p&gt;
&lt;p&gt;It's just mind-boggling to me that so much energy is wasted, so much pressure is put on the environment, for absolutely no clear benefit at all. &lt;/p&gt;
&lt;h2&gt;Closing words&lt;/h2&gt;
&lt;p&gt;So in short, I think that cryptocurrencies provide nothing of value to society. They do however facilitate crime and contribute to climate change.&lt;/p&gt;
&lt;p&gt;Therefore, I would propose to shut them all down.&lt;/p&gt;
&lt;p&gt;The complex technology behind the cryptocurrencies attracted a lot of otherwise smart people and I think it's a sad thing to see their efforts going to waste or having a negative impact. &lt;/p&gt;
&lt;p&gt;People are not obliged to work on something valuable, but at least may I ask that they choose to work on something that won't harm our society?&lt;/p&gt;
&lt;p&gt;Link to &lt;a href="https://news.ycombinator.com/item?id=22703081"&gt;hackernews&lt;/a&gt;, where this post was quickly flagged down. The few comments that exist don't seem to really provide any answers to the question I pose.&lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:fn1"&gt;
&lt;p&gt;I would definitely recommend reading this &lt;a href="https://www.newyorker.com/magazine/2018/10/22/the-prophets-of-cryptocurrency-survey-the-boom-and-bust"&gt;long-form article&lt;/a&gt; by The New Yorker.&amp;#160;&lt;a class="footnote-backref" href="#fnref:fn1" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:asked"&gt;
&lt;p&gt;Unless you want to embark on a path of criminal activity.&amp;#160;&lt;a class="footnote-backref" href="#fnref:asked" title="Jump back to footnote 2 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:aside"&gt;
&lt;p&gt;As mining is no longer profitable for the larger community, the miners become a small concentrated group of entities controlling the currency, making the currencies more centralised. Furthermore, the cryptocurrency exchanges where you can convert the cryptocurrency into regular money, are centralised institutions backed by for-profit companies. And those companies have to abide by the law. They are under the &lt;a href="https://www.loc.gov/law/help/cryptocurrency/world-survey.php"&gt;influence&lt;/a&gt; of the government.&amp;#160;&lt;a class="footnote-backref" href="#fnref:aside" title="Jump back to footnote 3 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:snfn"&gt;
&lt;p&gt;If you want to know more about what happened to Silk Road and its founder - 'the Dread Pirate Roberts' - I would recommend the book &lt;a href="http://www.americankingpin.com/"&gt;'American Kingpin'&lt;/a&gt; by Nick Bilton. (no affiliate links)&amp;#160;&lt;a class="footnote-backref" href="#fnref:snfn" title="Jump back to footnote 4 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Uncategorized"></category><category term="None"></category></entry><entry><title>Understanding Storage Performance - IOPS and Latency</title><link href="https://louwrentius.com/understanding-storage-performance-iops-and-latency.html" rel="alternate"></link><published>2020-03-21T12:00:00+01:00</published><updated>2020-03-21T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2020-03-21:/understanding-storage-performance-iops-and-latency.html</id><summary type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;The goal of this blogpost is to help you better understand storage performance. I want to discuss some fundamentals that are true regardless of your particular needs. &lt;/p&gt;
&lt;p&gt;This will help you better reason about storage and may provide a scaffolding for further learning. &lt;/p&gt;
&lt;p&gt;If you run your applications / workloads …&lt;/p&gt;</summary><content type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;The goal of this blogpost is to help you better understand storage performance. I want to discuss some fundamentals that are true regardless of your particular needs. &lt;/p&gt;
&lt;p&gt;This will help you better reason about storage and may provide a scaffolding for further learning. &lt;/p&gt;
&lt;p&gt;If you run your applications / workloads entirely in the cloud, this information may feel antiquated or irrelevant.&lt;/p&gt;
&lt;p&gt;However, since the cloud is just somebody else's compute and storage, knowledge about storage may still be relevant. Cloud providers expose storage performance metrics for you to monitor and this may help to make sense of them.&lt;/p&gt;
&lt;h2&gt;Concepts&lt;/h2&gt;
&lt;h3&gt;I/O&lt;/h3&gt;
&lt;p&gt;An I/O is a single read/write request. That I/O is issued to a storage medium (like a hard drive or solid state drive). &lt;/p&gt;
&lt;p&gt;It can be a request to read a particular file from disk. Or it can be a request to write some data to an existing file. Reading or writing a file can result in multiple I/O requests. &lt;/p&gt;
&lt;h3&gt;I/O Request Size&lt;/h3&gt;
&lt;p&gt;The I/O request has a size. The request can be small (like 1 Kilobyte) or large (several megabytes). Different application workloads will issue I/O operations with different request sizes. The I/O request size can impact latency and IOPS figures (two metrics we will discuss shortly).&lt;/p&gt;
&lt;h3&gt;IOPS&lt;/h3&gt;
&lt;p&gt;IOPS stands for I/O Operations Per Second. It is a performance metric that is used (and abused) a lot in the world of storage. It tells us how many I/O requests per second can be handled by the storage (for a particular workload). &lt;/p&gt;
&lt;p&gt;&lt;em&gt;Warning:&lt;/em&gt; this metric is meaningless without a latency figure. We will discuss latency shortly. &lt;/p&gt;
&lt;h3&gt;Bandwidth or throughput&lt;/h3&gt;
&lt;p&gt;If you multiply the IOPS figure with the (average) I/O request size, you get the bandwidth or throughput. Storage bandwidth is mostly stated in megabytes or gigabytes per second.&lt;/p&gt;
&lt;p&gt;To give you an example: if we issue a workload of 1000 IOPS with a request size of 4 Kilobytes, we get a throughput of 1000 x 4 KB = 4000 KB per second, which is about 4 megabytes per second.&lt;/p&gt;
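&lt;p&gt;The arithmetic above can be captured in a few lines of Python (just a sketch for illustration, not part of any benchmark tooling):&lt;/p&gt;

```python
def throughput_mb_s(iops, request_size_kb):
    """Throughput = IOPS multiplied by the (average) I/O request size."""
    return iops * request_size_kb / 1024.0

# 1000 IOPS at a 4 KB request size is roughly 4 megabytes per second.
print(throughput_mb_s(1000, 4))  # 3.90625
```

&lt;p&gt;The small difference with the 4 MB/s figure in the text comes from dividing by 1024 rather than 1000.&lt;/p&gt;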
&lt;h3&gt;Latency&lt;/h3&gt;
&lt;p&gt;Latency is the time it takes for the I/O request to be completed. We start our measurement from the moment the request is issued to the storage layer and stop measuring when either we get the requested data, or get confirmation that the data is stored on disk.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Latency is the single most important metric to focus on when it comes to storage performance, under most circumstances.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;For hard drives, an average latency somewhere between 10 to 20 ms is considered acceptable (20 ms is the upper limit).&lt;/p&gt;
&lt;p&gt;For solid state drives, depending on the workload it should never reach higher than 1-3 ms. In most cases, workloads will experience  less than 1ms latency numbers.&lt;/p&gt;
&lt;h3&gt;IOPS and Latency&lt;/h3&gt;
&lt;p&gt;This is a very important concept to understand. &lt;strong&gt;The IOPS metric is &lt;a href="http://recoverymonkey.org/2012/07/26/an-explanation-of-iops-and-latency/"&gt;meaningless without a statement about latency&lt;/a&gt;&lt;/strong&gt;. You must understand how long each I/O operation will take because latency dictates the responsiveness of individual I/O operations. &lt;/p&gt;
&lt;p&gt;If a storage solution can reach 10,000 IOPS but only at an average latency of 50 ms that could result in very bad application performance. If we want to hit an upper latency target of 10 ms the storage solution may only be capable of 2,000 IOPS.&lt;/p&gt;
&lt;p&gt;For more details on this topic I would recommend &lt;a href="http://blog.richardelling.com/2012/03/iops-and-latency-are-not-related-hdd.html"&gt;this blog&lt;/a&gt; and &lt;a href="http://recoverymonkey.org/2012/07/26/an-explanation-of-iops-and-latency/"&gt;this blog&lt;/a&gt;.&lt;/p&gt;
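&lt;p&gt;A standard way to relate the two metrics is Little's Law (throughput equals concurrency divided by average latency). The sketch below simply reworks the numbers from this section; the helper names are my own.&lt;/p&gt;

```python
def iops(concurrency, avg_latency_s):
    """Little's Law: throughput (IOPS) = requests in flight / average latency."""
    return concurrency / avg_latency_s

def in_flight(iops_value, avg_latency_s):
    """Inverse: how many requests must be outstanding to sustain this IOPS."""
    return iops_value * avg_latency_s

# Sustaining 10,000 IOPS at 50 ms average latency implies ~500
# outstanding requests; a single request at 10 ms yields only 100 IOPS.
print(in_flight(10_000, 0.050))  # 500.0
print(iops(1, 0.010))            # 100.0
```

&lt;p&gt;This is why a quoted IOPS figure without a latency (and queue depth) figure tells you very little about real responsiveness.&lt;/p&gt;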
&lt;h3&gt;Access Patterns&lt;/h3&gt;
&lt;h4&gt;Sequential access&lt;/h4&gt;
&lt;p&gt;An example of a sequential data transfer is copying a large file from one hard drive to another. A large number of sequential (often adjacent) datablocks is read from the source drive and written to another drive. Backup jobs also cause sequential access patterns. &lt;/p&gt;
&lt;p&gt;In practice this access pattern shows the highest possible throughput. &lt;/p&gt;
&lt;p&gt;Hard drives have it easy as they don't have to spend much time moving their read/write heads and can spend most time reading / writing the actual data.&lt;/p&gt;
&lt;h4&gt;Random access&lt;/h4&gt;
&lt;p&gt;I/O requests are issued in a seemingly random pattern to the storage media. The data could be stored all over various regions on the storage media. An example of such an access pattern is a heavily utilised database server or a virtualisation host running a lot of virtual machines (all operating simultaneously).&lt;/p&gt;
&lt;p&gt;Hard drives will have to spend a lot of time moving their read/write heads and can only spend little time transferring data. Both throughput and IOPS will plummet (as compared to a sequential access pattern).  &lt;/p&gt;
&lt;p&gt;In practice, most common workloads, such as running databases or virtual machines, cause random access patterns on the storage system.&lt;/p&gt;
&lt;h3&gt;Queue depth&lt;/h3&gt;
&lt;p&gt;The queue depth is a number between 1 and ~128 that shows how many I/O requests are queued (in-flight) on average. Having a queue is beneficial as the requests in the queue can be submitted to the storage subsystem in an optimised manner and often in parallel. A queue improves performance at the cost of latency.&lt;/p&gt;
&lt;p&gt;If you have some kind of storage performance monitoring solution in place, a high queue depth could be an indication that the storage subsystem cannot handle the workload. You may also observe higher than normal latency figures. As long as latency figures are still within tolerable limits, there may be no problem.&lt;/p&gt;
&lt;h2&gt;Storage Media Performance characteristics&lt;/h2&gt;
&lt;h3&gt;Hard drives&lt;/h3&gt;
&lt;p&gt;Hard drives &lt;a href="https://en.wikipedia.org/wiki/Hard_disk_drive_performance_characteristics"&gt;(HDDs)&lt;/a&gt; are mechanical devices that resemble a &lt;a href="https://en.wikipedia.org/wiki/File:Portable_78_rpm_record_player.jpg"&gt;record player&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;They have an arm with a read/write head and the data is stored on (multiple) platters. 
&lt;img alt="hd01" src="https://louwrentius.com/static/images/hd01open.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;Hard drives have to physically move read/write heads to fulfil read/write requests. This mechanical nature makes them relatively slow as compared to solid state drives (which we will cover shortly). &lt;/p&gt;
&lt;p&gt;Especially random access workloads cause hard drives to spend a lot of time on moving the read/write head to the right position at the right time, so less time is available for actual data transfers.&lt;/p&gt;
&lt;p&gt;The most important thing to know about hard drives is that from a performance perspective (focussing on latency) higher spindle speeds reduce the average latency. &lt;/p&gt;
&lt;table border="0" cellpadding="7" cellspacing="2"&gt;
&lt;tr&gt;&lt;th&gt;Rotational Speed (RPM)&lt;/th&gt;&lt;th&gt;Access Latency(ms)&lt;/th&gt;&lt;th&gt;IOPS&lt;/th&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;5400&lt;/td&gt;&lt;td&gt;17-18&lt;/td&gt;&lt;td&gt;50-60&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;7200&lt;/td&gt;&lt;td&gt;12-13&lt;/td&gt;&lt;td&gt;75-85&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;10,000&lt;/td&gt;&lt;td&gt;7-8&lt;/td&gt;&lt;td&gt;120-130&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;15,000&lt;/td&gt;&lt;td&gt;5-6&lt;/td&gt;&lt;td&gt;150-180&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;

&lt;p&gt;Because the latency of individual I/O requests is lower on drives with a higher RPM, you can issue more of such requests in the same amount of time. That's why the IOPS figure also increases.&lt;/p&gt;
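&lt;p&gt;You can roughly derive these figures yourself: on average the platter must rotate half a revolution before the data passes under the head, and each random I/O also pays an average seek. The ~8.5 ms seek time below is an assumed ballpark figure for a 7200 RPM drive, not a measured value.&lt;/p&gt;

```python
def rotational_latency_ms(rpm):
    """On average the platter spins half a revolution: (60 / rpm) / 2."""
    return (60.0 / rpm) / 2.0 * 1000.0

def estimated_iops(rpm, avg_seek_ms):
    """One random I/O costs roughly one seek plus the rotational latency."""
    service_time_ms = avg_seek_ms + rotational_latency_ms(rpm)
    return 1000.0 / service_time_ms

# A 7200 RPM drive with an assumed ~8.5 ms average seek time lands
# in the 12-13 ms access latency and 75-85 IOPS range.
print(round(rotational_latency_ms(7200), 2))  # 4.17
print(round(estimated_iops(7200, 8.5)))       # 79
```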
&lt;p&gt;Latency and IOPS of an older Western Digital Velociraptor 10,000 RPM drive:&lt;/p&gt;
&lt;p&gt;&lt;img alt="wd01" src="https://raw.githubusercontent.com/louwrentius/fio-plot-data/master/images/WD740GD_74GB_10.000_RPM_2019-11-27_204549.png" /&gt;
&lt;em&gt;Notice the latency and IOPS in the Queue Depth = 1 column.&lt;/em&gt; &lt;/p&gt;
&lt;p&gt;&lt;a href="https://mcpmag.com/Articles/2011/05/12/How-to-Speak-SAN-ish.aspx?Page=1"&gt;Source&lt;/a&gt; used to validate my own research.&lt;/p&gt;
&lt;p&gt;Regarding sequential throughput we can state that fairly old hard drives can sustain throughputs of 100-150 megabytes per second. More modern hard drives with higher capacities can often sustain between 200 and 270 megabytes per second. &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;An important note&lt;/strong&gt;: sequential transfer speeds are not constant and depend on the physical location of the data on the hard drive platters. As a drive fills up, throughput diminishes. &lt;em&gt;Throughput can drop more than fifty percent!&lt;/em&gt; &lt;sup id="fnref:throughput"&gt;&lt;a class="footnote-ref" href="#fn:throughput"&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;So if you want to calculate how long it will take to transfer a particular (large) dataset, you need to take this into account. &lt;/p&gt;
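&lt;p&gt;One rough way to account for this: since throughput declines from the outer tracks to the inner tracks, averaging the start and end speeds gives a usable estimate. The 200 and 100 MB/s endpoints below are assumptions for illustration; check your own drive's specifications.&lt;/p&gt;

```python
def transfer_time_hours(dataset_gb, outer_mb_s, inner_mb_s):
    """Estimate a full-drive transfer, assuming throughput declines
    roughly linearly from the outer tracks to the inner tracks."""
    avg_mb_s = (outer_mb_s + inner_mb_s) / 2.0
    return dataset_gb * 1024.0 / avg_mb_s / 3600.0

# 4 TB at 200 MB/s (outer tracks) down to 100 MB/s (inner tracks):
print(round(transfer_time_hours(4000, 200, 100), 1))  # 7.6
```

&lt;p&gt;Using only the advertised peak throughput (200 MB/s) would predict under 6 hours, a substantial underestimate.&lt;/p&gt;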
&lt;h3&gt;Solid State Drives&lt;/h3&gt;
&lt;p&gt;Solid state drives (SSDs) have no moving parts, they are based on flash memory (chips). SSDs can handle I/O much faster and thus show significantly lower latency. &lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/ssd01.jpg"&gt;&lt;img alt="ssd001" src="https://louwrentius.com/static/images/ssd01.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Whereas we measure the average I/O latency of HDDs in milliseconds (a thousandth of a second) we measure the latency of SSD I/O operations in microseconds (a millionth of a second). &lt;/p&gt;
&lt;p&gt;Because of this reduced latency per I/O request, SSDs outperform HDDs in every conceivable way. Even a cheap consumer SSD can at least sustain about 5000+ IOPS with only a 0.15 millisecond (150 microseconds) latency. That latency is about 40x better than the best latency of an enterprise 15K RPM hard drive.&lt;/p&gt;
&lt;p&gt;Solid state drives can often handle I/O requests in parallel. This means that larger queue depths with more I/O requests in flight can show significantly higher IOPS with a limited (but not insignificant) increase in latency.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/louwrentius/fio-plot-data/master/images/HPDL380H420I/RAID/MX200/randread_iodepth_2019-08-04-20%3A22%3A53_1_iops_latency.png"&gt;&lt;img alt="ssd01" src="https://raw.githubusercontent.com/louwrentius/fio-plot-data/master/images/HPDL380H420I/RAID/MX200/randread_iodepth_2019-08-04-20%3A22%3A53_1_iops_latency.png" /&gt;&lt;/a&gt;
&lt;em&gt;The random I/O performance of an older SATA consumer SSD&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;More modern enterprise SSDs show better latency and IOPS. The SATA interface seems to be the main bottleneck.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/louwrentius/fio-plot-data/master/images/INTEL-D3-S4610-on-IBM-M1015_2020-01-29_144451.png"&gt;&lt;img alt="ssd02" src="https://raw.githubusercontent.com/louwrentius/fio-plot-data/master/images/INTEL-D3-S4610-on-IBM-M1015_2020-01-29_144451.png" /&gt;&lt;/a&gt;
&lt;em&gt;The random I/O performance of an enterprise SATA SSD&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;SSDs perform better than HDDs across all relevant metrics except price in relation to capacity. &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Important note:&lt;/strong&gt;  SSDs are &lt;a href="https://en.wikipedia.org/wiki/Solid-state_drive"&gt;not well-suited for archival storage&lt;/a&gt; of data. Data is stored as charges in the chips and those charges can diminish over time. It's expected that even hard drives are better suited for offline archival purposes although the most suitable storage method would probably be &lt;a href="https://en.wikipedia.org/wiki/Linear_Tape-Open"&gt;tape&lt;/a&gt;.&lt;/p&gt;
&lt;h4&gt;SSD actual performance vs advertised performance&lt;/h4&gt;
&lt;p&gt;Many SSDs are advertised with performance figures of 80,000 - 100,000 IOPS at some decent latency. Depending on the workload, you may only observe a fraction of that performance. &lt;/p&gt;
&lt;p&gt;Most of those high 80K-100K IOPS figures are obtained by benchmarking with very high queue depths (16-32). The SSD benefits from such queue depths because it can handle a lot of those I/O requests in parallel. &lt;/p&gt;
&lt;p&gt;&lt;em&gt;Please beware: if your workload doesn't fit in that pattern, you may see lower performance numbers.&lt;/em&gt; &lt;/p&gt;
&lt;p&gt;If we take a look at the chart above of the Intel SSD we may notice how the IOPS figures only start to come close to the advertised 80K+ IOPS as the queue depth increases. It's therefore important to understand the characteristics of your own workload.&lt;/p&gt;
&lt;h3&gt;RAID&lt;/h3&gt;
&lt;p&gt;If we group several hard drives together we can create a RAID array. A RAID array is a virtual storage device that exceeds the capacity and performance of a single hard drive. This allows storage to scale within the limits of a single computer. &lt;/p&gt;
&lt;p&gt;RAID is also used (or some say primarily used) to assure availability through redundancy: a drive failure won't cause data loss. For this article, however, we focus on its performance characteristics. &lt;/p&gt;
&lt;p&gt;SSDs can achieve impressive sequential throughput speeds of multiple gigabytes per second. Individual hard drives can never come close to those speeds, but if you put a lot of them together in a RAID array, you can come very close. For instance, &lt;a href="https://louwrentius.com/71-tib-diy-nas-based-on-zfs-on-linux.html"&gt;my own NAS&lt;/a&gt; can achieve such speeds using 24 drives.&lt;/p&gt;
&lt;p&gt;RAID also improves the performance of random access patterns. The hard drives in a RAID array work in tandem to service those I/O requests so a RAID array shows significantly higher IOPS than a single drive. More drives means more IOPS. &lt;/p&gt;
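&lt;p&gt;As a rough sketch of that scaling (the per-drive figure below is an assumption - a 7200 RPM drive sustains very roughly 75-100 random IOPS - and RAID 5 adds the classic four-I/O small-write penalty):&lt;/p&gt;

```python
# Rough RAID IOPS estimator. The per-drive IOPS figure is an assumption,
# not a measured value from the benchmarks in this article.

def raid_read_iops(drives: int, per_drive_iops: float) -> float:
    # Random reads can be serviced by all drives in parallel.
    return drives * per_drive_iops

def raid5_write_iops(drives: int, per_drive_iops: float) -> float:
    # Each small random write costs 4 back-end I/Os on RAID 5:
    # read old data, read old parity, write new data, write new parity.
    return drives * per_drive_iops / 4

print(raid_read_iops(8, 80))    # ~640 random read IOPS for 8 drives
print(raid5_write_iops(8, 80))  # ~160 random write IOPS
```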
&lt;h4&gt;RAID 5 with 8 x 7200 RPM drives&lt;/h4&gt;
&lt;p&gt;The picture below shows the read IOPS performance of an 8-drive RAID 5 array of 1 TB, 7200 RPM drives. We run a benchmark of random 4K read requests.&lt;/p&gt;
&lt;p&gt;Notice how the IOPS increases as the queue depth increases. &lt;/p&gt;
&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/louwrentius/fio-plot-data/master/images/RAID/RAID5_8x1TB/MDADM-RAID-5---8-x-1-TB-%40-7200-RPM-2020-03-23_014228.png"&gt;&lt;img alt="raidiops" src="https://raw.githubusercontent.com/louwrentius/fio-plot-data/master/images/RAID/RAID5_8x1TB/MDADM-RAID-5---8-x-1-TB-%40-7200-RPM-2020-03-23_014228.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;However, nothing is free in this world. A higher queue depth - which acts as a buffer - does increase latency.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/louwrentius/fio-plot-data/master/images/RAID/RAID5_8x1TB/MDADM-RAID-5---8-x-1-TB-%40-7200-RPM-2020-03-23_014206.png"&gt;&lt;img alt="raidlat" src="https://raw.githubusercontent.com/louwrentius/fio-plot-data/master/images/RAID/RAID5_8x1TB/MDADM-RAID-5---8-x-1-TB-%40-7200-RPM-2020-03-23_014206.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Notice how quickly the latency exceeds 20 ms, at which point the array becomes almost unusable.&lt;/p&gt;
&lt;h4&gt;RAID 5 with 8 x 10,000 RPM drives&lt;/h4&gt;
&lt;p&gt;Below is the result of a similar test with 10,000 RPM hard drives. Notice how much better the IOPS and latency figures are.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/raid510krpmiops.png"&gt;&lt;img alt="raid10kiops" src="https://louwrentius.com/static/images/raid510krpmiops.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The latency looks much better:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/raid510krpmlat.png"&gt;&lt;img alt="raid10klat" src="https://louwrentius.com/static/images/raid510krpmlat.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;It makes sense to put SSDs in RAID. Although they are more reliable than hard drives, they can fail. If you care about availability, RAID is inevitable. Furthermore, you can observe the same benefits as with hard drives: you pool resources together, achieving higher IOPS figures and more capacity than possible with a single SSD.&lt;/p&gt;
&lt;h3&gt;Capacity vs. Performance&lt;/h3&gt;
&lt;p&gt;The following is mostly focused on hard drives, although it can apply to solid state drives as well. &lt;/p&gt;
&lt;p&gt;We put hard drives in RAID arrays to get more IOPS than a single drive can provide. At some point - as the workload increases - we may hit the maximum number of IOPS the RAID array can sustain with an acceptable latency. &lt;/p&gt;
&lt;p&gt;This IOPS/Latency threshold could be reached even if we have only 50% of the storage capacity of our RAID array in use. If we use the RAID array to host virtual machines for instance, we cannot add more virtual machines because this would cause the latency to rise to unacceptable levels. &lt;/p&gt;
&lt;p&gt;It may feel like a lot of good storage space is going to waste, and in some sense this may be true. For this reason, it could be a wise strategy to buy smaller 10,000 RPM or 15,000 RPM drives purely for the IOPS they can provide and forgo capacity.&lt;/p&gt;
&lt;p&gt;So you may have to order and add - say - 10 more hard drives to meet the IOPS/latency demands, even while there's still plenty of space left.&lt;/p&gt;
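&lt;p&gt;A hypothetical sizing helper illustrates the point: the drive count is driven by whichever constraint - capacity or IOPS - demands more drives. All figures below are assumptions for illustration.&lt;/p&gt;

```python
import math

# Hypothetical sizing helper: how many drives are needed to satisfy both
# a capacity target and an IOPS target? All inputs are assumed figures.

def drives_needed(capacity_tb: float, iops: float,
                  drive_tb: float, drive_iops: float) -> int:
    for_capacity = math.ceil(capacity_tb / drive_tb)
    for_iops = math.ceil(iops / drive_iops)
    return max(for_capacity, for_iops)

# 20 TB and 2,000 IOPS using 2 TB drives at ~80 IOPS each:
# capacity needs 10 drives, IOPS needs 25 - the IOPS target dominates.
print(drives_needed(20, 2000, 2, 80))  # 25
```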
&lt;p&gt;This kind of situation has become less likely as SSDs have taken over the role of the performance storage layer and (larger-capacity) hard drives are pushed into the role of 'online' archival storage.&lt;/p&gt;
&lt;h3&gt;Closing words&lt;/h3&gt;
&lt;p&gt;I hope this article has given you a better understanding of storage performance. Although it is just an introduction, it may help you to better understand the challenges of storage performance. &lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:throughput"&gt;
&lt;p&gt;https://en.wikipedia.org/wiki/Hard_disk_drive_performance_characteristics#Data_transfer_rate&amp;#160;&lt;a class="footnote-backref" href="#fnref:throughput" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Storage"></category><category term="storage"></category></entry><entry><title>Difference of behavior in SATA Solid State Drives</title><link href="https://louwrentius.com/difference-of-behavior-in-sata-solid-state-drives.html" rel="alternate"></link><published>2020-01-29T00:00:00+01:00</published><updated>2020-01-29T00:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2020-01-29:/difference-of-behavior-in-sata-solid-state-drives.html</id><summary type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Update&lt;/strong&gt;: I've noticed some strange behavior of SSDs when benchmarking them with FIO. After further investigation and additional testing, I've found the reason for the strange patterns in the graphs. &lt;/p&gt;
&lt;p&gt;The 'strange' test results are due to the fact that they were obtained by connecting the SSDs to a …&lt;/p&gt;</summary><content type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Update&lt;/strong&gt;: I've noticed some strange behavior of SSDs when benchmarking them with FIO. After further investigation and additional testing, I've found the reason for the strange patterns in the graphs. &lt;/p&gt;
&lt;p&gt;The 'strange' test results are due to the fact that they were obtained by connecting the SSDs to a P420I controller. As the HBA mode of this controller performs worse than the RAID mode, I used the RAID mode of this controller. Indvidual drives were put in a RAID0 volume. But it turns out that this creates a strange interaction between RAID controller and SSD.&lt;/p&gt;
&lt;p&gt;Additional testing with a SATA 300 AHCI controller shows 'normal' patterns that look similar to the results of the Intel SSD, in contrast to the other SSDs (Samsung and Kingston). &lt;/p&gt;
&lt;p&gt;It seems I made a mistake by using the P420i controller for testing. I have included both the 'bad' and the 'good' results. &lt;/p&gt;
&lt;hr&gt;

&lt;p&gt;Regular SATA solid state drives may seem interchangeable at this point. They all show amazing IOPS and latency performance.&lt;/p&gt;
&lt;p&gt;&lt;del&gt;I have performed benchmarks on different SSDs from different vendors and it seems that they actually show very different behaviour. This behavior has come to light because I benchmarked the entire device capacity.&lt;/del&gt;&lt;/p&gt;
&lt;p&gt;The benchmark - performed with &lt;a href="https://github.com/axboe/fio"&gt;FIO&lt;/a&gt; - puts a fifty percent read/write random 4K workload on the device. The benchmark stops when all sectors of the device have been read or written to. Furthermore, all tests are performed with a queue depth of 1.&lt;/p&gt;
&lt;p&gt;I've made this post because I found the results interesting. At least the images show a very peculiar pattern for some SSDs. I can't explain them really, maybe you can. &lt;/p&gt;
&lt;p&gt;This is the test I ran against the SSDs.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;fio --filename=/dev/sdX --direct=1 --rw=randrw --refill_buffers
--norandommap --ioengine=libaio --bs=4k --rwmixread=50 --iodepth=1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
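&lt;p&gt;Such a full-device pass can take a long time. A quick estimate (the device size and sustained IOPS below are assumptions, not figures from these tests):&lt;/p&gt;

```python
# Estimate runtime of a full-device 4K random read/write pass.
# Device size and sustained IOPS are illustrative assumptions.

def full_pass_hours(device_bytes: int, block_bytes: int, iops: float) -> float:
    operations = device_bytes / block_bytes
    return operations / iops / 3600

# A 1 TB SSD sustaining 10,000 IOPS at QD=1 mixed read/write:
size = 1_000_000_000_000
print(round(full_pass_hours(size, 4096, 10_000), 1))  # ~6.8 hours
```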

&lt;h2&gt;Disclaimer&lt;/h2&gt;
&lt;p&gt;I've performed these benchmark to the best of my knowledge. The raw benchmark data is available &lt;a href="https://github.com/louwrentius/fio-plot-data/tree/master/benchmark_data/HPDL380G8/RAID/NEW/FULL"&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;It's always possible that I made a mistake, so it may be wise to run your own tests to see if you can replicate these results. &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Caveat&lt;/strong&gt;: I really don't know if these benchmark results impact real-life performance. Maybe these benchmark results show a kind of behaviour of SSDs that doesn't really matter in the end.&lt;/p&gt;
&lt;h2&gt;Benchmark Results&lt;/h2&gt;
&lt;h3&gt;Intel D3-S4610&lt;/h3&gt;
&lt;p&gt;This SSD is meant for datacenter usage. This is the test result on the P420i controller.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/louwrentius/fio-plot-data/blob/master/images/INTEL_D3-S4610_Random_RW_(50%25)_Full_Device_Read+Write_2019-11-28_144750.png?raw=true"&gt;&lt;img alt="intel" src="https://github.com/louwrentius/fio-plot-data/blob/master/images/INTEL_D3-S4610_Random_RW_(50%25)_Full_Device_Read+Write_2019-11-28_144750.png?raw=true" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;IOPS and latency are consistent during the whole benchmark. Its behavior seems predictable. &lt;/p&gt;
&lt;p&gt;This is the test result on the AHCI controller:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/louwrentius/fio-plot-data/blob/master/images/INTEL-D3-S4610-ON-AHCI-SATA-300-FULL-DISK-4K-RANDOM-READ-WRITE-50%25-2020-01-31_111850.png?raw=true"&gt;&lt;img alt="intelahci" src="https://github.com/louwrentius/fio-plot-data/blob/master/images/INTEL-D3-S4610-ON-AHCI-SATA-300-FULL-DISK-4K-RANDOM-READ-WRITE-50%25-2020-01-31_111850.png?raw=true" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Samsung 860 Pro&lt;/h3&gt;
&lt;p&gt;This SSD is meant for desktop usage. Its behavior seems quite different from the Intel SSD. I have separated the IOPS data from the latency data to make the graphs more legible. &lt;/p&gt;
&lt;p&gt;This is the test result on the P420i controller.&lt;/p&gt;
&lt;h4&gt;IOPS&lt;/h4&gt;
&lt;p&gt;&lt;a href="https://github.com/louwrentius/fio-plot-data/blob/master/images/SAMSUNG-860-PRO-RANDOM-RW-(50%25)-Full-Device-Read+Write-IOPS-2020-01-29_171841.png?raw=true"&gt;&lt;img alt="samsung860iops" src="https://github.com/louwrentius/fio-plot-data/blob/master/images/SAMSUNG-860-PRO-RANDOM-RW-(50%25)-Full-Device-Read+Write-IOPS-2020-01-29_171841.png?raw=true" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;Latency&lt;/h4&gt;
&lt;p&gt;&lt;a href="https://github.com/louwrentius/fio-plot-data/blob/master/images/SAMSUNG-860-PRO-RANDOM-RW-(50%25)-Full-Device-Read+Write-Latency-2020-01-29_171858.png?raw=true"&gt;&lt;img alt="samsung860latency" src="https://github.com/louwrentius/fio-plot-data/blob/master/images/SAMSUNG-860-PRO-RANDOM-RW-(50%25)-Full-Device-Read+Write-Latency-2020-01-29_171858.png?raw=true" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The best-case latency is almost four times better than the worst-case latency. Latency is thus less predictable. This impact also seems to be reflected in the IOPS numbers. &lt;/p&gt;
&lt;p&gt;This is the test result on the AHCI controller:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/louwrentius/fio-plot-data/blob/master/images/SAMSUNG-860-PRO-ON-AHCI-SATA-300-FULL-DISK-4K-RANDOM-READ-WRITE-50%25-2020-01-30_231329.png?raw=true"&gt;&lt;img alt="samsung860ahci" src="https://github.com/louwrentius/fio-plot-data/blob/master/images/SAMSUNG-860-PRO-ON-AHCI-SATA-300-FULL-DISK-4K-RANDOM-READ-WRITE-50%25-2020-01-30_231329.png?raw=true" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Samsung PM883&lt;/h3&gt;
&lt;p&gt;This SSD is meant for datacenter usage. This is the test result on the P420i controller.&lt;/p&gt;
&lt;h4&gt;IOPS&lt;/h4&gt;
&lt;p&gt;&lt;a href="https://github.com/louwrentius/fio-plot-data/blob/master/images/SAMSUNG-PM883-RANDOM-RW-(50%25)-Full-Device-Read+Write-IOPS-2020-01-29_174112.png?raw=true"&gt;&lt;img alt="samsungpm883iops" src="https://github.com/louwrentius/fio-plot-data/blob/master/images/SAMSUNG-PM883-RANDOM-RW-(50%25)-Full-Device-Read+Write-IOPS-2020-01-29_174112.png?raw=true" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;Latency&lt;/h4&gt;
&lt;p&gt;&lt;a href="https://github.com/louwrentius/fio-plot-data/blob/master/images/SAMSUNG-PM883-RANDOM-RW-(50%25)-Full-Device-Read+Write-Latency-2020-01-29_174135.png?raw=true"&gt;&lt;img alt="samsungpm883latency" src="https://github.com/louwrentius/fio-plot-data/blob/master/images/SAMSUNG-PM883-RANDOM-RW-(50%25)-Full-Device-Read+Write-Latency-2020-01-29_174135.png?raw=true" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This SSD seems to behave in a similar way to the 860 Pro.&lt;/p&gt;
&lt;p&gt;This is the test result on the AHCI controller:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/louwrentius/fio-plot-data/blob/master/images/SAMSUNG-PM883-ON-AHCI-SATA-300-FULL-DISK-4K-RANDOM-READ-WRITE-50%25-2020-01-31_144732.png?raw=true"&gt;&lt;img alt="samsungpm8832ahci" src="https://github.com/louwrentius/fio-plot-data/blob/master/images/SAMSUNG-PM883-ON-AHCI-SATA-300-FULL-DISK-4K-RANDOM-READ-WRITE-50%25-2020-01-31_144732.png?raw=true" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Kingston DC500M&lt;/h3&gt;
&lt;p&gt;This SSD is meant for datacenter usage. This is the test result on the P420i controller.&lt;/p&gt;
&lt;h4&gt;IOPS&lt;/h4&gt;
&lt;p&gt;&lt;a href="https://github.com/louwrentius/fio-plot-data/blob/master/images/KINGSTON-DC500M-RANDOM-RW-(50%25)-Full-Device-Read+Write-IOPS-2020-01-29_175551.png?raw=true"&gt;&lt;img alt="kingstondc500miops" src="https://github.com/louwrentius/fio-plot-data/blob/master/images/KINGSTON-DC500M-RANDOM-RW-(50%25)-Full-Device-Read+Write-IOPS-2020-01-29_175551.png?raw=true" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;Latency&lt;/h4&gt;
&lt;p&gt;&lt;a href="https://github.com/louwrentius/fio-plot-data/blob/master/images/KINGSTON-DC500M-RANDOM-RW-(50%25)-Full-Device-Read+Write-Latency-2020-01-29_175612.png?raw=true"&gt;&lt;img alt="kingstondc500mlatency" src="https://github.com/louwrentius/fio-plot-data/blob/master/images/KINGSTON-DC500M-RANDOM-RW-(50%25)-Full-Device-Read+Write-Latency-2020-01-29_175612.png?raw=true" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The behavior of this SSD seems similar to that of the Samsung SSDs, but the pattern is distinct: it appears shifted compared to theirs.&lt;/p&gt;
&lt;p&gt;This is the test result on the AHCI controller:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/louwrentius/fio-plot-data/blob/master/images/KINGSTON-DC500M-ON-AHCI-SATA-300-FULL-DISK-4K-RANDOM-READ-WRITE-50%25-2020-01-31_191351.png?raw=true"&gt;&lt;img alt="kingstondc500mahci" src="https://github.com/louwrentius/fio-plot-data/blob/master/images/KINGSTON-DC500M-ON-AHCI-SATA-300-FULL-DISK-4K-RANDOM-READ-WRITE-50%25-2020-01-31_191351.png?raw=true" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Evaluation&lt;/h2&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Updated evaluation&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We can conclude that the P420i RAID controller causes strange behavior that is not observed when we test the SSDs on a regular AHCI controller. Although this was an older SATA 300 controller, I assume it still has enough bandwidth to support a random 4K test, as most tests never went beyond 50+ MB/s of throughput.&lt;/p&gt;
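&lt;p&gt;That bandwidth assumption is easy to sanity-check: 4K random I/O translates into surprisingly little throughput.&lt;/p&gt;

```python
# Sanity check: even a generous 12,500 random 4K IOPS is only ~51 MB/s,
# well below the ~300 MB/s limit of a SATA 300 link, so the old AHCI
# controller should not be the bottleneck for this kind of test.

def mb_per_s(iops: float, block_bytes: int = 4096) -> float:
    return iops * block_bytes / 1_000_000

print(mb_per_s(12_500))  # 51.2
```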
&lt;hr&gt;

&lt;p&gt;At this point, I can only say that I observe quite different behavior between the Intel SSD and the other SSDs from Samsung and Kingston. The problem is that I can't tell if this affects real-life day-to-day application performance.&lt;/p&gt;
&lt;p&gt;It seems that although results for the Samsung and Kingston SSDs fluctuate quite a bit, it's quite possible that the fluctuations occur during a very short timespan and effectively cancel each other out. &lt;/p&gt;
&lt;p&gt;If you have comments, ideas or suggestions, leave a comment below.&lt;/p&gt;
&lt;h2&gt;How are these images generated?&lt;/h2&gt;
&lt;p&gt;All images have been generated with &lt;a href="https://github.com/louwrentius/fio-plot"&gt;fio-plot&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The github repository also contains a &lt;a href="https://github.com/louwrentius/fio-plot-data/tree/master/images"&gt;folder with a lot of example images&lt;/a&gt;.&lt;/p&gt;</content><category term="Storage"></category><category term="storage"></category></entry><entry><title>Benchmarking a 1.44 MB floppy drive</title><link href="https://louwrentius.com/benchmarking-a-144-mb-floppy-drive.html" rel="alternate"></link><published>2019-12-17T00:00:00+01:00</published><updated>2019-12-17T00:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2019-12-17:/benchmarking-a-144-mb-floppy-drive.html</id><summary type="html">&lt;p&gt;I've used &lt;a href="https://github.com/axboe/fio"&gt;fio&lt;/a&gt; - a storage benchmarking tool - to benchmark a 1.44 MB floppy drive to gauge its random I/O performance. I have no real reason for doing this. I just thought it was funny.&lt;/p&gt;
&lt;p&gt;I've run a benchmark against the floppy drives for different queue depths. The results …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I've used &lt;a href="https://github.com/axboe/fio"&gt;fio&lt;/a&gt; - a storage benchmarking tool - to benchmark a 1.44 MB floppy drive to gauge its random I/O performance. I have no real reason for doing this. I just thought it was funny.&lt;/p&gt;
&lt;p&gt;I've run a benchmark against the floppy drives for different queue depths. The results are shared below.&lt;/p&gt;
&lt;h2&gt;Random 4K Read performance&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/louwrentius/fio-plot/master/images/Random_4K_Read_I-O_performance_of_a_1.44_MB_floppy_2019-12-17_004245.png"&gt;&lt;img alt="read" src="https://raw.githubusercontent.com/louwrentius/fio-plot/master/images/Random_4K_Read_I-O_performance_of_a_1.44_MB_floppy_2019-12-17_004245.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Random 4K Write performance&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/louwrentius/fio-plot/master/images/Random_4K_Write_I-O_performance_of_a_1.44_MB_floppy_2019-12-17_004306.png"&gt;&lt;img alt="write" src="https://raw.githubusercontent.com/louwrentius/fio-plot/master/images/Random_4K_Write_I-O_performance_of_a_1.44_MB_floppy_2019-12-17_004306.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;These images show that a 1.44 MB floppy drive has atrocious random I/O performance compared to hard drives or solid state drives. Who knew!&lt;/p&gt;
&lt;p&gt;Next: benchmarking random I/O performance of a tape drive. If somebody will lend me one.&lt;/p&gt;
&lt;h2&gt;Tools used&lt;/h2&gt;
&lt;p&gt;I used &lt;a href="https://github.com/axboe/fio"&gt;fio&lt;/a&gt; to benchmark the floppy drive. I've used &lt;a href="https://github.com/louwrentius/fio-plot"&gt;fio-plot&lt;/a&gt; to generate the images. The benchmarks were run using an altered version of the benchmark script included with &lt;a href="https://github.com/louwrentius/fio-plot"&gt;fio-plot&lt;/a&gt;.&lt;/p&gt;</content><category term="Hardware"></category><category term="storage"></category></entry><entry><title>Fio-plot: creating nice charts from FIO storage benchmark data</title><link href="https://louwrentius.com/fio-plot-creating-nice-charts-from-fio-storage-benchmark-data.html" rel="alternate"></link><published>2019-11-28T00:00:00+01:00</published><updated>2019-11-28T00:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2019-11-28:/fio-plot-creating-nice-charts-from-fio-storage-benchmark-data.html</id><summary type="html">&lt;h2&gt;New release of fio-plot&lt;/h2&gt;
&lt;p&gt;I've rewritten &lt;a href="https://github.com/louwrentius/fio-plot"&gt;fio-plot&lt;/a&gt;, a tool that turns &lt;a href="https://github.com/axboe/fio"&gt;FIO&lt;/a&gt; benchmark data into nice charts. It allows you to create four types of graphs which will be discussed below. &lt;/p&gt;
&lt;p&gt;The github project page explains how to run this tool. &lt;/p&gt;
&lt;p&gt;Fio-plot also includes a &lt;a href="https://github.com/louwrentius/fio-plot/tree/master/benchmark_script"&gt;benchmark script&lt;/a&gt; that automates testing …&lt;/p&gt;</summary><content type="html">&lt;h2&gt;New release of fio-plot&lt;/h2&gt;
&lt;p&gt;I've rewritten &lt;a href="https://github.com/louwrentius/fio-plot"&gt;fio-plot&lt;/a&gt;, a tool that turns &lt;a href="https://github.com/axboe/fio"&gt;FIO&lt;/a&gt; benchmark data into nice charts. It allows you to create four types of graphs which will be discussed below. &lt;/p&gt;
&lt;p&gt;The github project page explains how to run this tool. &lt;/p&gt;
&lt;p&gt;Fio-plot also includes a &lt;a href="https://github.com/louwrentius/fio-plot/tree/master/benchmark_script"&gt;benchmark script&lt;/a&gt; that automates testing with Fio.&lt;/p&gt;
&lt;p&gt;The git repository also contains &lt;a href="https://github.com/louwrentius/fio-plot/tree/master/benchmark_data"&gt;benchmark data&lt;/a&gt;, which can be used to test fio-plot. &lt;/p&gt;
&lt;h2&gt;2D bar chart&lt;/h2&gt;
&lt;p&gt;This chart plots IOPs and latency for various queue depths. &lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/impactofqueuedepth.png"&gt;&lt;img alt="2dbar" src="https://louwrentius.com/static/images/impactofqueuedepth.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;3D bar chart&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/servermdadmraid5-3d.png"&gt;&lt;img alt="3dbar" src="https://louwrentius.com/static/images/servermdadmraid5-3d.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;2D line chart for log data&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/impactofqueuedepth02.png"&gt;&lt;img alt="2dline" src="https://louwrentius.com/static/images/impactofqueuedepth02.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;2D bar chart latency histogram&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/histogram01.png"&gt;&lt;img alt="histogram" src="https://louwrentius.com/static/images/histogram01.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Additional images&lt;/h2&gt;
&lt;p&gt;The github repository also contains a &lt;a href="https://github.com/louwrentius/fio-plot/tree/master/images"&gt;folder with a lot of example images&lt;/a&gt;.&lt;/p&gt;</content><category term="Hardware"></category><category term="storage"></category></entry><entry><title>My home lab server with 20 cores / 40 threads and 128 GB memory</title><link href="https://louwrentius.com/my-home-lab-server-with-20-cores-40-threads-and-128-gb-memory.html" rel="alternate"></link><published>2019-08-13T00:00:00+02:00</published><updated>2019-08-13T00:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2019-08-13:/my-home-lab-server-with-20-cores-40-threads-and-128-gb-memory.html</id><summary type="html">&lt;p&gt;&lt;img alt="dl380gen8" src="https://louwrentius.com/static/images/dl380g8.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;I've bought a secondhand HP DL380p Gen8 server and I think it's awesome. &lt;/p&gt;
&lt;p&gt;This machine has dual-processors, with each &lt;a href="https://ark.intel.com/content/www/us/en/ark/products/75277/intel-xeon-processor-e5-2680-v2-25m-cache-2-80-ghz.html"&gt;CPU&lt;/a&gt; having 10 physical cores and two threads per core, for a grand total of 20 cores / 40 threads for the entire machine. It's also equiped with 128 GB or memory …&lt;/p&gt;</summary><content type="html">&lt;p&gt;&lt;img alt="dl380gen8" src="https://louwrentius.com/static/images/dl380g8.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;I've bought a secondhand HP DL380p Gen8 server and I think it's awesome. &lt;/p&gt;
&lt;p&gt;This machine has dual processors, with each &lt;a href="https://ark.intel.com/content/www/us/en/ark/products/75277/intel-xeon-processor-e5-2680-v2-25m-cache-2-80-ghz.html"&gt;CPU&lt;/a&gt; having 10 physical cores and two threads per core, for a grand total of 20 cores / 40 threads for the entire machine. It's also equipped with 128 GB of memory.&lt;/p&gt;
&lt;p&gt;&lt;img alt="htop" src="https://louwrentius.com/static/images/dl380htop01.png" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Laughs in htop&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I bought this machine because I wanted to have a dedicated server on which I run my home lab environment. This is the box on which I will try out new software, run experiments, test ideas and so on. &lt;/p&gt;
&lt;p&gt;In this article I'd like to share some information about this machine and my experiences with it. Let's start with an overview of the specifications.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update&lt;/strong&gt;: this article featured on Hacker News, &lt;a href="https://news.ycombinator.com/item?id=20687521"&gt;click here&lt;/a&gt; for the discussion thread.&lt;/p&gt;
&lt;h2&gt;HP DL380p Gen8 Specifications&lt;/h2&gt;
&lt;p&gt;&lt;/p&gt;
&lt;table border="0" cellpadding="7" cellspacing="2" &gt;
&lt;tr&gt;&lt;th&gt;Part&lt;/th&gt;&lt;th&gt;Description &lt;/th&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Form factor&lt;/td&gt;&lt;td&gt;19" 2U rackmount&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Processor&lt;/td&gt;&lt;td &gt;2 x Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;RAM&lt;/td&gt;&lt;td &gt;128 GB DDR3 (16 x 8GB)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Onboard LAN&lt;/td&gt;&lt;td &gt;4 x Broadcom 1 Gigabit (Copper) &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;RAID Controller&lt;/td&gt;&lt;td &gt; HP P420i &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Storage&lt;/td&gt;&lt;td &gt; 8 x 2.5" SAS/SATA slots (no media)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;PSU&lt;/td&gt;&lt;td &gt;1 x 750 Watt&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Power usage&lt;/td&gt;&lt;td &gt;about &amp;nbsp; 110 Watt idle&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;KVM&lt;/td&gt;&lt;td &gt;HP iLO 4  with HTML5 console&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;

&lt;h2&gt;Cost&lt;/h2&gt;
&lt;p&gt;I've bought this server for around 1500 Euro ($1700) including taxes (21%) and a one year warranty. I've bought it from the company &lt;a href="https://creoserver.com"&gt;https://creoserver.com&lt;/a&gt; (The Netherlands).&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Update 2021:&lt;/strong&gt; This kind / generation of hardware seems to be dumped on the market for less than half of what I paid for it at the time. &lt;/p&gt;
&lt;hr&gt;

&lt;p&gt;I believe that the price/performance balance for this server is quite good, but I let you be the judge of that.&lt;/p&gt;
&lt;p&gt;The storage is based on some SSDs I already owned and some new SSDs I've bought separately. So the price is based on the chassis + storage caddies but without actual storage media.&lt;/p&gt;
&lt;h2&gt;Age&lt;/h2&gt;
&lt;p&gt;The HP DL380p Gen8 product generation was introduced in 2012. 
The configuration I have chosen is based on a CPU that was introduced in 2013. Based on the serial number I guess the chassis+motherboard is from 2014 (about 5 years old&lt;sup id="fnref:age"&gt;&lt;a class="footnote-ref" href="#fn:age"&gt;1&lt;/a&gt;&lt;/sup&gt; at the time I bought it). &lt;/p&gt;
&lt;h2&gt;CPU - The Intel Xeon E5-2680 v2&lt;/h2&gt;
&lt;p&gt;The Intel Xeon E5-2680 v2 CPU &lt;a href="https://www.anandtech.com/show/7285/intel-xeon-e5-2600-v2-12-core-ivy-bridge-ep"&gt;originated from late 2013&lt;/a&gt; and is based on the Ivy Bridge architecture. The base clock is 2.8 GHz but it can turbo boost up to 3.6 GHz. With 10C/20T per CPU and the dual-processor configuration, it's needless to say that you can run quite a few virtual machines and/or containers in parallel on this machine, without even oversubscribing CPU cores.&lt;/p&gt;
&lt;p&gt;You may ask what kind of performance to expect from this box and how 2013 holds up in 2019. A high number of cores is nice, but are the cores themselves fast enough? &lt;/p&gt;
&lt;p&gt;The E5-2680 v2 is a processor that sold for about $1700 (Intel Ark) when it came out in 2013. If you look at the list of Xeon processors from that timeframe, it was the top-of-the-line Intel dual-processor-compatible CPU, with only the E5-2690 v2 above it, which ran only 200 MHz faster (base clock).&lt;/p&gt;
&lt;p&gt;There was almost nothing better at that time and people ran their companies on those machines so I think performance is more than enough. Especially for a lab environment (overkill is probably an understatement).&lt;/p&gt;
&lt;p&gt;I've included some benchmark results below just for reference. You can use this information to make some comparisons to other CPUs that you are familiar with to get some sense of the performance. &lt;/p&gt;
&lt;p&gt;I've also compared scores a little bit and I've noticed that people in the past were perfectly happy gaming on desktop systems with CPUs that have equal or less single core performance.&lt;/p&gt;
&lt;h3&gt;Geekbench 4 score&lt;/h3&gt;
&lt;p&gt;I've run Geekbench on the DL380p Gen8:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Single Core:  3525
Multi Core : 48764
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Passmark&lt;/h3&gt;
&lt;p&gt;(&lt;a href="https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E5-2680+v2+%40+2.80GHz&amp;amp;id=2061"&gt;source&lt;/a&gt;)&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Single Core:  1788
Multi Core : 12612
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Sysbench&lt;/h3&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@dora:/home/ansible# sysbench --test=cpu --cpu-max-prime=20000 run
[.....]
General statistics:
    total time:                          10.0002s
    total number of events:              4358
[.....]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Some perspective: my Intel Mac mini (2018) and my game PC from 2013 also hit &lt;strong&gt;10 seconds&lt;/strong&gt; runtime. For whoever thinks they should work with 'cheap' Raspberry Pis, this is the sysbench score of a Raspberry Pi 3B+.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Test execution summary:
    total time:                          328.3016s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;It's 32x slower. I really recommend to not use Pi's except for electronic projects. The performance is so bad, it's not worth it. Even a small VM on your desktop/laptop is already way faster. &lt;/p&gt;
&lt;h3&gt;Other benchmarks&lt;/h3&gt;
&lt;p&gt;I won't promise anything, but if it's not too much work I can run a benchmark if you have any suggestions. Feel free to comment down below or email me.&lt;/p&gt;
&lt;h2&gt;Memory&lt;/h2&gt;
&lt;p&gt;I've ordered this machine with 128GB of DDR3 memory. The memory consists of 16 x 8GB 1333Mhz memory modules. The machine has 12 memory slots per CPU so there are still 8 slots left to be filled if I ever feel a need to expand memory.&lt;/p&gt;
&lt;h2&gt;Storage&lt;/h2&gt;
&lt;h3&gt;Drive slots&lt;/h3&gt;
&lt;p&gt;I've chosen a chassis with room for 8 x 2.5" drives. Depending on the chassis model, you can go up to 25 x 2.5" (Small Form Factor) or 12 x 3.5" (Large Form Factor).&lt;/p&gt;
&lt;p&gt;Both SAS and SATA HDDs/SSDs are supported. You can't just stick HDDs or SSDs in the drive slots; you first need to put your HDD/SSD in an empty caddy. I bought those with the server.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/caddy02.jpg"&gt;&lt;img alt="caddy" src="https://louwrentius.com/static/images/caddy02.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The caddy has the LED lights build-in to signal the drive status. It has some electrical contacts at the back for those lights, it's not just a plastic 'sabot'.&lt;/p&gt;
&lt;p&gt;My caddies tended to be a bit fragile: they came apart when I tried to pull a drive out of the cage, but nothing broke and it all works fine.&lt;/p&gt;
&lt;h3&gt;Drive compatibility&lt;/h3&gt;
&lt;p&gt;HP servers can be picky about which hard drives they fully support. All drives technically work, but sometimes the controller cannot read the disk temperature. In that case, the system fans will slowly spin faster and faster until they reach almost maximum speed, which is obviously not workable.&lt;/p&gt;
&lt;p&gt;Smaller hard drives worked fine and the server stayed quiet; only some 500GB hard drives caused this 'thermal runaway'. &lt;/p&gt;
&lt;p&gt;I've found &lt;a href="http://dascomputerconsultants.com/HPCompaqServerDrives.htm"&gt;this table&lt;/a&gt; of supported and unsupported drives for HP controllers.&lt;/p&gt;
&lt;h3&gt;Overview of SSDs that are compatible&lt;/h3&gt;
&lt;p&gt;I've tested these SSDs:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;- Samsung 860 Pro (consumer)
- Samsung PM833   (enterprise)
- Intel D3-S4610  (enterprise)
- Crucial MX200   (consumer)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;P420i RAID mode vs. HBA mode&lt;/h3&gt;
&lt;p&gt;Important: you can't boot from the storage controller in HBA mode. You are left with booting from internal/external USB (2.0) or internal SD card.&lt;/p&gt;
&lt;p&gt;I'm not sure if there is any significant performance difference between the two modes for SSDs. I just want to be able to boot from one of the drives, not from USB/SD card.&lt;/p&gt;
&lt;h3&gt;Storage management - ssacli tool&lt;/h3&gt;
&lt;p&gt;To configure the P420i RAID controller, it's highly recommended to install the ssacli tool from HP. Otherwise you have to reboot and wait 10 minutes (no joke) to enter the 'Smart Array Configuration Utility' to make changes to the storage configuration.&lt;/p&gt;
&lt;p&gt;I just followed these instructions to install this tool on Ubuntu:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;curl http://downloads.linux.hpe.com/SDR/hpPublicKey1024.pub | apt-key add -
curl http://downloads.linux.hpe.com/SDR/hpPublicKey2048.pub | apt-key add -
curl http://downloads.linux.hpe.com/SDR/hpPublicKey2048_key1.pub | apt-key add -
curl http://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | apt-key add -
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Contents of /etc/apt/sources.list.d/hp.list:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;deb http://downloads.linux.hpe.com/SDR/repo/mcp/Ubuntu bionic/current non-free
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;For Debian: &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;deb http://downloads.linux.hpe.com/SDR/repo/mcp/debian bookworm/current non-free
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Installing the software:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;apt-get update
apt-get install ssacli
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Some examples:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ssacli ctrl all show detail
ssacli ctrl slot=0 array all show detail
ssacli ctrl slot=0 ld all show detail
ssacli ctrl slot=0 pd all show detail
ssacli ctrl slot=0 ld 7 delete
ssacli ctrl slot=0 create type=ld raid=0 drives=2I:2:7 ss=8 ssdopo=off
ssacli ctrl slot=0 array i modify ssdsmartpath=disable
ssacli ctrl slot=0 pd 1 modify led=on
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;RAID vs non-RAID&lt;/h3&gt;
&lt;p&gt;You are forced by the RAID controller to always create a RAID array, even if you don't need the redundancy or performance.&lt;/p&gt;
&lt;p&gt;I've chosen to just put SSDs in individual RAID0 logical drives, making them effectively individual drives from the perspective of the operating system. I'm not running a mission-critical application so this is fine for me.&lt;/p&gt;
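&lt;p&gt;A sketch of what that looks like with ssacli (the slot number and the port:box:bay drive addresses are examples; use the ones reported by your own controller):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# find the port:box:bay address of each physical drive
ssacli ctrl slot=0 pd all show
# create an individual RAID0 logical drive per SSD
ssacli ctrl slot=0 create type=ld raid=0 drives=1I:1:1
ssacli ctrl slot=0 create type=ld raid=0 drives=1I:1:2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
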
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/hpfront01.jpg"&gt;&lt;img alt="frontside" src="https://louwrentius.com/static/images/hpfront01.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;A small downside is that the bright red square light on each drive permanently glows. That light truthfully signals that you cannot remove a drive without killing its 'array'. I've not seen any option to turn this off, which I can understand.&lt;/p&gt;
&lt;h2&gt;Sound / Noise level&lt;/h2&gt;
&lt;h3&gt;Idle sound level&lt;/h3&gt;
&lt;p&gt;I've taken some sound measurements with an app on my iPhone, so I can't vouch for their accuracy, but it's a rough indication.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;During boot time (full fan speed)  : 62 dB
Idle, booted into operating system : 50 dB
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;To set some expectations:&lt;/p&gt;
&lt;p&gt;My very subjective opinion is that at 50 dB the sound level is reasonable for a server like this, but it's definitely not quiet. I would not be able to work, relax or sleep with this server in the same room.&lt;/p&gt;
&lt;p&gt;Although this server is fairly quiet at idle, it does need its own dedicated room. When the door is closed, you won't hear it at idle, but under sustained load, you will hear high pitched fan noise even with a closed door.&lt;/p&gt;
&lt;h3&gt;Impact of adding a PCIe card on noise level&lt;/h3&gt;
&lt;p&gt;When you put any kind of PCIe card in the server, the two of its six fans in line with the PCIe expansion slots will run at 40%+ to cool these cards.&lt;/p&gt;
&lt;p&gt;The sound level will become a bit annoying. I've not found any option to disable this behavior. This means that if you need to expand the server with additional network ports or other components, the server really needs a room with a closed door. &lt;/p&gt;
&lt;p&gt;Please note that if you want to keep noise levels down but still want to upgrade to 10Gbit networking, you could configure the server with a 2 x 10Gbit FlexibleLOM instead of the stock 4 x 1Gbit copper FlexibleLOM. This will give you more bandwidth without the need to add a PCIe card.&lt;/p&gt;
&lt;p&gt;At this time, I have no need to add any high-speed networking as I can run the simulations all on the machine itself (at least that's the plan).&lt;/p&gt;
&lt;h2&gt;Idle power usage&lt;/h2&gt;
&lt;p&gt;The DL380p has several BIOS settings that trade performance for (idle) power consumption. I have tested the performance and idle power usage for two power profiles:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;HP Power Profile: Balanced Power and Performance
Present Power Reading: 76 Watts
Geekbench Single Core score: 2836

HP Power Profile: Static High Performance
Present Power Reading:  100 Watts
Geekbench Single Core score: 3525
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Please note:&lt;/strong&gt; actual power usage includes about 10 Watts extra for the iLO interface, so the system would draw &lt;em&gt;86 Watts&lt;/em&gt; or &lt;em&gt;110 Watts&lt;/em&gt; at the wall outlet.&lt;/p&gt;
&lt;p&gt;What we can learn from these test results is that the high-performance setting uses about 32% more power at idle and in return delivers roughly 24% more single-core performance.&lt;/p&gt;
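&lt;p&gt;Those percentages follow directly from the readings above; a quick check:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;awk 'BEGIN {
  printf "idle power increase: %.1f%%\n", (100/76 - 1) * 100
  printf "single-core gain:    %.1f%%\n", (3525/2836 - 1) * 100
}'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
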
&lt;p&gt;The server supports (obviously) dual power supplies but that option only adds cost, increases power usage and gains me nothing. I'm not running a mission-critical business with this server.&lt;/p&gt;
&lt;p&gt;By default, this server is turned off. When I want to do some work, I turn the server on using wake-on-lan. I do this with all my equipment except for my router/firewall.&lt;/p&gt;
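&lt;p&gt;Waking the server from another machine on the LAN is a one-liner. This sketch assumes the &lt;em&gt;wakeonlan&lt;/em&gt; utility (one of several options) and uses a placeholder MAC address:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# send a magic packet to the server's onboard NIC (replace with your MAC)
wakeonlan 00:11:22:33:44:55
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
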
&lt;h2&gt;Boot up time&lt;/h2&gt;
&lt;p&gt;I clocked 4 minutes and 28 seconds until I got a ping reply from the operating system. &lt;/p&gt;
&lt;iframe width="560" height="315" src="https://www.youtube.com/embed/ekSCmTgMh7k" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen&gt;&lt;/iframe&gt;

&lt;h2&gt;KVM / iLO (Remote Management)&lt;/h2&gt;
&lt;p&gt;As with almost all HP servers, this machine has a dedicated remote management port + engine (iLO) so you can do everything remotely, such as powering the server on/off. You also have access to a virtual console (KVM over IP) and virtual CD-ROM drive to remotely provision the server.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/hpback01.jpg"&gt;&lt;img alt="backside" src="https://louwrentius.com/static/images/hpback01.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;For those of you who are less familiar with this hardware: notice the network interface to the left of the VGA connector. This is the iLO remote management interface. It's a mini computer with its own operating system and IP address, and thus reachable over the network. &lt;/p&gt;
&lt;p&gt;In the past, iLO and similar solutions were a pain to use because you needed Java or .NET for the virtual console. With the &lt;a href="https://www.storagereview.com/hpe_releases_ilo_4_270_firmware_update"&gt;iLO 2.7 update&lt;/a&gt; you get an HTML5 interface, doing away with the need for both Java and .NET. This is a huge usability improvement.&lt;/p&gt;
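&lt;p&gt;iLO can also be driven from the command line over IPMI, provided IPMI-over-LAN is enabled in the iLO settings. A sketch with placeholder hostname and credentials:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# query and control power state remotely (hostname/user/password are placeholders)
ipmitool -I lanplus -H ilo.example.local -U admin -P secret chassis power status
ipmitool -I lanplus -H ilo.example.local -U admin -P secret chassis power on
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
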
&lt;h2&gt;Operating system&lt;/h2&gt;
&lt;p&gt;I'm currently running Ubuntu Linux 18.04 and using plain vanilla KVM to spawn virtual machines, which works fine. Everything is maintained and configured with Ansible.&lt;/p&gt;
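&lt;p&gt;Creating a VM on plain KVM typically goes through virt-install; a sketch where the name, sizes, ISO path and bridge name are all placeholders:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# spawn a VM with a 20 GB disk, installing from a local ISO
virt-install --name testvm --memory 4096 --vcpus 2 \
  --disk size=20 --cdrom /var/isos/ubuntu-18.04-server-amd64.iso \
  --os-variant ubuntu18.04 --network bridge=br0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
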
&lt;h2&gt;Closing words&lt;/h2&gt;
&lt;p&gt;I hope this overview was informative. Maybe a server like this is an option to consider if you ever want to set up a home lab yourself. &lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:age"&gt;
&lt;p&gt;As far as I know this is not 100% reliable as the motherboard serial number can be changed to match the chassis when it's replaced due to failure. I guess it will have to do.&amp;#160;&lt;a class="footnote-backref" href="#fnref:age" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Hardware"></category><category term="Server"></category></entry><entry><title>My Ceph test cluster based on Raspberry Pi's and HP MicroServers</title><link href="https://louwrentius.com/my-ceph-test-cluster-based-on-raspberry-pis-and-hp-microservers.html" rel="alternate"></link><published>2019-01-27T00:00:00+01:00</published><updated>2019-01-27T00:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2019-01-27:/my-ceph-test-cluster-based-on-raspberry-pis-and-hp-microservers.html</id><summary type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;To learn more about Ceph, I've built myself a Ceph cluster based on actual hardware. In this blog post I'll discuss the cluster in more detail and I've also included (&lt;a href="http://freecode.com/projects/fio"&gt;fio&lt;/a&gt;) benchmark results.&lt;/p&gt;
&lt;p&gt;This is my test Ceph cluster: &lt;/p&gt;
&lt;p&gt;&lt;img alt="picluster" src="https://louwrentius.com/static/images/piclustersmall.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;The cluster consists of the following components:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt; 3 x Raspberry Pi …&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</summary><content type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;To learn more about Ceph, I've built myself a Ceph cluster based on actual hardware. In this blog post I'll discuss the cluster in more detail and I've also included (&lt;a href="http://freecode.com/projects/fio"&gt;fio&lt;/a&gt;) benchmark results.&lt;/p&gt;
&lt;p&gt;This is my test Ceph cluster: &lt;/p&gt;
&lt;p&gt;&lt;img alt="picluster" src="https://louwrentius.com/static/images/piclustersmall.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;The cluster consists of the following components:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt; 3 x Raspberry Pi 3 Model B+ as Ceph monitors
 4 x HP MicroServer as OSD nodes (3 x Gen8 + 1 x Gen10)
 4 x 4 x 1 TB drives for storage (16 TB raw)
 3 x 1 x 250 GB SSD (750 GB raw)
 2 x 5-port Netgear switches for Ceph backend network (bonding)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Monitors: Raspberry Pi 3 Model B+&lt;/h2&gt;
&lt;p&gt;I've done some work getting &lt;a href="https://louwrentius.com/compiling-ceph-on-the-raspberry-pi-3b-armhf-using-clangllvm.html"&gt;Ceph compiled&lt;/a&gt; on a Raspberry Pi 3 Model B+ running Raspbian. I'm using three Raspberry Pi's as Ceph monitor nodes. The Pi boards don't break a sweat with this small cluster setup.&lt;/p&gt;
&lt;p&gt;Note: Raspberry Pis are not an ideal choice as monitor nodes because Ceph monitors write data (probably the cluster state) to disk every few seconds. This will eventually wear out the SD card. &lt;/p&gt;
&lt;h2&gt;Storage nodes: HP MicroServer&lt;/h2&gt;
&lt;p&gt;The storage nodes are based on four HP MicroServers. I really like these small boxes: they are sturdy, contain server-grade components including ECC memory, and have room for four internal 3.5" hard drives. You can also install 2.5" hard drives or SSDs. &lt;/p&gt;
&lt;p&gt;For more info on the &lt;a href="https://louwrentius.com/zfs-performance-on-hp-proliant-microserver-gen8-g1610t.html"&gt;Gen8&lt;/a&gt; and the &lt;a href="https://louwrentius.com/hp-proliant-microserver-gen10-as-router-or-nas.html"&gt;Gen10&lt;/a&gt; click on their links.&lt;/p&gt;
&lt;p&gt;Unfortunately the Gen8 servers are no longer made. The replacement, the Gen10 model, lacks IPMI/iLO and is also much more expensive (in Europe at least).&lt;/p&gt;
&lt;h4&gt;CPU and RAM&lt;/h4&gt;
&lt;p&gt;All HP MicroServers have a dual-core CPU. The Gen8 servers have 10GB RAM and the Gen10 server has 12GB RAM. I've simply added an 8GB ECC memory module to each server; the Gen10 comes with 4GB and the Gen8 came with only 2GB, which explains the difference.&lt;/p&gt;
&lt;h4&gt;Boot drive&lt;/h4&gt;
&lt;p&gt;The systems all have an (old) internal 2.5" laptop HDD connected to the internal USB 2.0 header through a USB enclosure.&lt;/p&gt;
&lt;h4&gt;Ceph OSD HDD&lt;/h4&gt;
&lt;p&gt;All servers are fitted with four (old) 1TB 7200 RPM 3.5" hard drives, so the entire cluster contains 16 x 1TB drives. &lt;/p&gt;
&lt;h4&gt;Ceph OSD SSD&lt;/h4&gt;
&lt;p&gt;There is a fifth SATA connector on the motherboard, meant for an optional optical drive, which I have no use for and which is not included with the servers. &lt;/p&gt;
&lt;p&gt;I use this SATA connector in the Gen8 MicroServers to attach a Crucial 250GB SSD, which is then tucked away at the top, where the optical drive would sit. So the Gen8 servers have an SSD installed which the Gen10 is lacking.&lt;/p&gt;
&lt;p&gt;The entire cluster thus has 3 x 250GB SSDs installed. &lt;/p&gt;
&lt;h2&gt;Networking&lt;/h2&gt;
&lt;p&gt;All servers have two 1Gbit network cards on-board and a third one installed in one of the half-height PCIe slots&lt;sup id="fnref:1"&gt;&lt;a class="footnote-ref" href="#fn:1"&gt;1&lt;/a&gt;&lt;/sup&gt;. &lt;/p&gt;
&lt;p&gt;&lt;img alt="backside" src="https://louwrentius.com/static/images/cephclusterback.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;The half-height PCIe NICs connect the Microservers to the &lt;em&gt;public&lt;/em&gt; network. The internal gigabit NICs are configured in a &lt;a href="https://louwrentius.com/achieving-450-mbs-network-file-transfers-using-linux-bonding.html"&gt;bond&lt;/a&gt; (round-robin) and connected to two 5-port Netgear gigabit switches. This is the &lt;em&gt;cluster&lt;/em&gt; network or the backend network Ceph uses for replicating data between the storage nodes.&lt;/p&gt;
&lt;p&gt;You may notice that the first onboard NIC of each server is connected to the top switch and the second one to the bottom switch. This is necessary because Linux round-robin bonding requires either a separate VLAN for each NIC or, as in this case, separate switches.&lt;/p&gt;
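&lt;p&gt;An ifupdown-style sketch of such a round-robin bond (the interface names, address and the ifenslave package are assumptions, not the exact configuration used here):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# /etc/network/interfaces fragment: bond two NICs in balance-rr mode
auto bond0
iface bond0 inet static
    address 10.0.0.11
    netmask 255.255.255.0
    bond-slaves eno1 eno2
    bond-mode balance-rr
    bond-miimon 100
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
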
&lt;h2&gt;Benchmarks&lt;/h2&gt;
&lt;h3&gt;Benchmark conditions&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;The tests ran on a physical Ceph client based on an older dual-core CPU and 8GB of RAM. This machine was connected to the cluster with a single gigabit network card.&lt;/li&gt;
&lt;li&gt;I've mapped RBD block devices from the HDD pool and the SSD pool on this machine for benchmarking. &lt;/li&gt;
&lt;li&gt;All tests have been performed on the raw /dev/rbd0 device, not on any file or filesystem. &lt;/li&gt;
&lt;li&gt;The pools use 3-way replication (size = 3) with a minimum copy count of 1 (min_size = 1). &lt;/li&gt;
&lt;li&gt;All benchmarks have been performed with FIO. &lt;/li&gt;
&lt;li&gt;All benchmarks used random 4K reads/writes&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;    NAME     ID     USED        %USED     MAX AVAIL     OBJECTS
    hdd      36     1.47TiB     22.64       5.03TiB      396434
    ssd      38      200GiB     90.92       20.0GiB       51204
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
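&lt;p&gt;The fio runs themselves looked roughly like this (queue depth and job count were varied per test; the values below are examples, not the exact command used):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# 4K random reads against the raw RBD device, bypassing the page cache
fio --name=randread --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --iodepth=16 --numjobs=4 \
    --runtime=60 --time_based --group_reporting
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
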

&lt;h3&gt;Benchmark SSD&lt;/h3&gt;
&lt;p&gt;Click on the images below to see a larger version. &lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/ssd-3d-iops-jobsrandread-iops.png"&gt;&lt;img alt="a" src="https://louwrentius.com/static/images/ssd-3d-iops-jobsrandread-iops.png" /&gt;&lt;/a&gt;
&lt;a href="https://louwrentius.com/static/images/ssd-3d-iops-jobsrandread-lat.png"&gt;&lt;img alt="b" src="https://louwrentius.com/static/images/ssd-3d-iops-jobsrandread-lat.png" /&gt;&lt;/a&gt;
&lt;a href="https://louwrentius.com/static/images/ssd-3d-iops-jobsrandwrite-iops.png"&gt;&lt;img alt="c" src="https://louwrentius.com/static/images/ssd-3d-iops-jobsrandwrite-iops.png" /&gt;&lt;/a&gt;
&lt;a href="https://louwrentius.com/static/images/ssd-3d-iops-jobsrandwrite-lat.png"&gt;&lt;img alt="d" src="https://louwrentius.com/static/images/ssd-3d-iops-jobsrandwrite-lat.png" /&gt;&lt;/a&gt;
&lt;a href="https://louwrentius.com/static/images/hp-ceph-2d-randread-ssd.png"&gt;&lt;img alt="e" src="https://louwrentius.com/static/images/hp-ceph-2d-randread-ssd.png" /&gt;&lt;/a&gt;
&lt;a href="https://louwrentius.com/static/images/hp-ceph-2d-randwrite-ssd.png"&gt;&lt;img alt="f" src="https://louwrentius.com/static/images/hp-ceph-2d-randwrite-ssd.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Benchmark HDD&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/hdd-3d-iops-jobsrandread-iops.png"&gt;&lt;img alt="g" src="https://louwrentius.com/static/images/hdd-3d-iops-jobsrandread-iops.png" /&gt;&lt;/a&gt;
&lt;a href="https://louwrentius.com/static/images/hdd-3d-iops-jobsrandread-lat.png"&gt;&lt;img alt="h" src="https://louwrentius.com/static/images/hdd-3d-iops-jobsrandread-lat.png" /&gt;&lt;/a&gt;
&lt;a href="https://louwrentius.com/static/images/hdd-3d-iops-jobsrandwrite-iops.png"&gt;&lt;img alt="i" src="https://louwrentius.com/static/images/hdd-3d-iops-jobsrandwrite-iops.png" /&gt;&lt;/a&gt;
&lt;a href="https://louwrentius.com/static/images/hdd-3d-iops-jobsrandwrite-lat.png"&gt;&lt;img alt="j" src="https://louwrentius.com/static/images/hdd-3d-iops-jobsrandwrite-lat.png" /&gt;&lt;/a&gt;
&lt;a href="https://louwrentius.com/static/images/hp-ceph-2d-randread-hdd.png"&gt;&lt;img alt="k" src="https://louwrentius.com/static/images/hp-ceph-2d-randread-hdd.png" /&gt;&lt;/a&gt;
&lt;a href="https://louwrentius.com/static/images/hp-ceph-2d-randwrite-hdd.png"&gt;&lt;img alt="l" src="https://louwrentius.com/static/images/hp-ceph-2d-randwrite-hdd.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Benchmark evaluation&lt;/h2&gt;
&lt;p&gt;The random read performance of the hard drives seems unrealistic at higher queue depths and job counts. This performance cannot be sustained by the spindles alone: 16 hard drives at maybe 70 random IOPS each can only sustain about 1,120 random IOPS.&lt;/p&gt;
&lt;p&gt;I cannot explain why I get these numbers; if anybody has a suggestion, feel free to comment or respond. Maybe the total of 42GB of memory across the cluster acts as some kind of cache.&lt;/p&gt;
&lt;p&gt;Another interesting observation is that a low number of threads and a small IO queue depth results in fairly poor performance, both for SSD and HDD media. &lt;/p&gt;
&lt;p&gt;Especially the performance of the SSD pool is poor with a low IO queue depth. A probable cause is that these SSDs are consumer-grade and don't perform well with low queue depth workloads. &lt;/p&gt;
&lt;p&gt;I find it interesting that even over a single 1Gbit link, the SSD-backed pool is able to sustain 20K+ IOPs at higher queue depths and larger number of threads.&lt;/p&gt;
&lt;p&gt;The small number of storage nodes and the low number of OSDs per node doesn't make this setup ideal, but it does seem to perform fairly decently, considering the hardware involved. &lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:1"&gt;
&lt;p&gt;You may notice that the Pi's are missing in the picture because this is an older picture when I was running the monitors as virtual machines on hardware not seen in the picture.&amp;#160;&lt;a class="footnote-backref" href="#fnref:1" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Storage"></category><category term="Ceph"></category></entry><entry><title>Compiling Ceph on the Raspberry Pi 3B+ (armhf) using Clang/LLVM</title><link href="https://louwrentius.com/compiling-ceph-on-the-raspberry-pi-3b-armhf-using-clangllvm.html" rel="alternate"></link><published>2018-11-10T04:00:00+01:00</published><updated>2018-11-10T04:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2018-11-10:/compiling-ceph-on-the-raspberry-pi-3b-armhf-using-clangllvm.html</id><summary type="html">&lt;h2&gt;UPDATE 2019 / 2020&lt;/h2&gt;
&lt;hr&gt;
&lt;p&gt;There are &lt;a href="https://download.ceph.com/debian-nautilus/pool/main/c/ceph/"&gt;official ARM64 binaries of Ceph&lt;/a&gt; that you can run on a &lt;a href="https://ubuntu.com/download/raspberry-pi"&gt;64-bit version of Ubuntu 18.04&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Important: I consider this page obsolete. I will keep it up for transparency's sake&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In this blog post I'll show you how to compile Ceph Luminous for …&lt;/p&gt;</summary><content type="html">&lt;h2&gt;UPDATE 2019 / 2020&lt;/h2&gt;
&lt;hr&gt;
&lt;p&gt;There are &lt;a href="https://download.ceph.com/debian-nautilus/pool/main/c/ceph/"&gt;official ARM64 binaries of Ceph&lt;/a&gt; that you can run on a &lt;a href="https://ubuntu.com/download/raspberry-pi"&gt;64-bit version of Ubuntu 18.04&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Important: I consider this page obsolete. I will keep it up for transparency's sake&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In this blog post I'll show you how to compile Ceph Luminous for the Raspberry Pi 3B+.&lt;/p&gt;
&lt;p&gt;If you follow the instructions below you can compile Ceph on Raspbian. A word of warning: we will compile Ceph on the Raspberry Pi itself, which takes a lot of time.&lt;/p&gt;
&lt;p&gt;Ubuntu has packages for Ceph on armhf but I was never able to get Ubuntu working properly on the Raspberry Pi 3B+. Maybe that's just me and I did something wrong. Using existing Ceph packages on Ubuntu would probably be the fastest way to get up and running on the Raspberry Pi if it works for you.&lt;/p&gt;
&lt;p&gt;This is my test Ceph cluster: &lt;/p&gt;
&lt;p&gt;&lt;img alt="picluster" src="https://louwrentius.com/static/images/piclustersmall.jpg" /&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;x&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;Raspberry&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;Pi&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="nv"&gt;B&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;Ceph&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;monitors&lt;/span&gt;.&lt;span class="w"&gt; &lt;/span&gt;
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;x&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;HP&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;Microserver&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;OSD&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;nodes&lt;/span&gt;.
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;x&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;x&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;TB&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;drives&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;storage&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;TB&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;raw&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;x&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;x&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;250&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;GB&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;SSD&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;750&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;GB&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;raw&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;x&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nv"&gt;port&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;Netgear&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;switches&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;Ceph&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;backend&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;network&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;bonding&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;For the impatient&lt;/h2&gt;
&lt;p&gt;If you just want the packages you can download &lt;a href="https://louwrentius.com/files/ceph-on-armhf-12.2.9.tgz"&gt;this file&lt;/a&gt; and you'll get a set of .deb files which you need to install on your Raspberry Pi. &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;SECURITY WARNING&lt;/strong&gt;: these packages are created by me, an unknown, untrusted person on the internet. As a general rule you should &lt;em&gt;not&lt;/em&gt; download and install these packages as they could be malicious for all you know. If you want to be safe, compile Ceph yourself. &lt;/p&gt;
&lt;p&gt;Skip to the section about installing the packages at the end for further installation instructions. &lt;/p&gt;
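&lt;p&gt;Installing a set of local .deb files generally comes down to something like this sketch (the archive layout is an assumption):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;tar xzf ceph-on-armhf-12.2.9.tgz
# from the directory containing the extracted .deb files:
sudo dpkg -i ./*.deb
# pull in any missing dependencies
sudo apt-get -f install
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
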
&lt;h2&gt;The problem with compiling Ceph for armhf&lt;/h2&gt;
&lt;p&gt;There are no armhf packages for Ceph because if you try to compile Ceph on armhf, the compiler (gcc) will run out of virtual memory (a 32-bit armhf process has about three gigabytes available).&lt;/p&gt;
&lt;h2&gt;The solution&lt;/h2&gt;
&lt;p&gt;&lt;a href="http://tracker.ceph.com/issues/23387"&gt;Daniel Glaser discovered&lt;/a&gt; that he could actually compile Ceph on armhf by using Clang/LLVM as the C++ compiler. This compiler seems to use less memory and thus stay within the 3 GB memory boundary. This is why he and I were able to compile Ceph.&lt;/p&gt;
&lt;h2&gt;How to compile Ceph for armhf - preparation&lt;/h2&gt;
&lt;h3&gt;The challenge: one gigabyte of memory&lt;/h3&gt;
&lt;p&gt;The Raspberry Pi 3B+ has only one gigabyte of memory but we need more. The only way to add memory is to use swap on disk, as far as I know. &lt;/p&gt;
&lt;p&gt;If you use storage as a substitute for RAM, you need fast storage, so I really recommend using an external SSD connected through USB. You also need sufficient capacity; I'd recommend 20+ GB. &lt;/p&gt;
&lt;p&gt;SD memory cards are not up to the task of being used as swap: you'll wear them out prematurely and performance is abysmal. You should really use an external SSD.&lt;/p&gt;
&lt;h3&gt;Preparing the external SSD&lt;/h3&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Attach the SSD drive to the Raspberry Pi with USB
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;The SSD will probably show up as '/dev/sda'.&lt;/li&gt;
&lt;li&gt;mkfs.xfs -f /dev/sda (&lt;strong&gt;this will erase all contents of the SSD&lt;/strong&gt;).&lt;/li&gt;
&lt;li&gt;mkdir /mnt/ssd&lt;/li&gt;
&lt;li&gt;mount /dev/sda /mnt/ssd&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Creating and activating swap&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;cd /mnt/ssd&lt;/li&gt;
&lt;li&gt;dd if=/dev/zero of=swap.dd bs=1M count=5000&lt;/li&gt;
&lt;li&gt;chmod 600 swap.dd&lt;/li&gt;
&lt;li&gt;mkswap swap.dd&lt;/li&gt;
&lt;li&gt;swapon /mnt/ssd/swap.dd&lt;/li&gt;
&lt;li&gt;swapoff /var/swap &lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;By default, Raspbian configures a 100 MB swap file on /var/swap. In order to increase performance and protect the SD card from wearing out, please don't forget this last step to disable this swap file on the SD card.&lt;/p&gt;
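&lt;p&gt;Afterwards, a quick check should show only the swap file on the SSD as active:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# list active swap areas and overall memory usage
swapon -s
free -m
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
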
&lt;h3&gt;Extra software&lt;/h3&gt;
&lt;p&gt;If you like, I would recommend installing 'htop' for real-time monitoring of CPU, memory and swap usage.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;apt-get install htop&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;How to compile Ceph for armhf - building&lt;/h2&gt;
&lt;h3&gt;Installing an alternative C++ compiler (Clang/LLVM)&lt;/h3&gt;
&lt;p&gt;As part of &lt;a href="http://tracker.ceph.com/issues/23387"&gt;Daniel's instructions&lt;/a&gt;, you need to compile and install Clang/LLVM.
I followed his instructions to the letter; I have not tested the Clang/LLVM packages made available through apt.&lt;/p&gt;
&lt;p&gt;Compiling Clang/LLVM takes a lot of time. It took 8 hours to compile LLVM/Clang on a Raspberry Pi 3B+ with make -j3 to limit memory usage.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;real    493m38.472s
user    1223m39.063s
sys 45m45.748s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I'll reproduce the steps from Daniel here:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;apt&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;update&lt;/span&gt;
&lt;span class="n"&gt;apt&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;build&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;essential&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ca&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;certificates&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;vim&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;git&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;
&lt;span class="n"&gt;apt&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;libcunit1&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;dev&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;libcurl4&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;openssl&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;dev&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;python&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;bcrypt&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;python&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;tox&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;python&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;coverage&lt;/span&gt;

&lt;span class="n"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;mnt&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;ssd&lt;/span&gt;
&lt;span class="n"&gt;mkdir&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;git&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;git&lt;/span&gt;
&lt;span class="n"&gt;git&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;clone&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;https&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="n"&gt;github&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;com&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;llvm&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;mirror&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;llvm&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;git&lt;/span&gt;
&lt;span class="n"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;llvm&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;tools&lt;/span&gt;
&lt;span class="n"&gt;git&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;clone&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;https&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="n"&gt;github&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;com&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;llvm&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;mirror&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;clang&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;git&lt;/span&gt;
&lt;span class="n"&gt;git&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;clone&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;https&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="n"&gt;github&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;com&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;llvm&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;mirror&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;lld&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;git&lt;/span&gt;
&lt;span class="n"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;tmp&lt;/span&gt;
&lt;span class="n"&gt;mkdir&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;llvm&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;build&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;llvm&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;build&lt;/span&gt;
&lt;span class="n"&gt;cmake&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;G&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;Unix Makefiles&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;DCMAKE_BUILD_TYPE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;Release&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;DLLVM_TARGETS_TO_BUILD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;ARM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;mnt&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;ssd&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;git&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;llvm&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
&lt;span class="n"&gt;make&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;j3&lt;/span&gt;
&lt;span class="n"&gt;make&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;
&lt;span class="n"&gt;update&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;alternatives&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;usr&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;bin&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;cc&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;cc&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;usr&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;local&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;bin&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;clang&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;
&lt;span class="n"&gt;update&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;alternatives&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;usr&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;bin&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;usr&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;local&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;bin&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;clang&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;
&lt;span class="n"&gt;update&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;alternatives&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;usr&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;bin&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;cpp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;cpp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;usr&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;local&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;bin&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;clang&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;cpp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You may choose to build in some other directory, maybe on the SSD itself; I'm not sure that makes a big difference. Be careful when using /tmp, as its contents are lost after a reboot.&lt;/p&gt;
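&lt;p&gt;After the update-alternatives calls, it's worth verifying that 'cc' and 'c++' now resolve to the freshly built Clang before moving on (a quick check; the exact version string will differ on your build):&lt;/p&gt;

```shell
# Both commands should report a clang version, not gcc.
cc --version | head -n 1
c++ --version | head -n 1
```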
&lt;h3&gt;Obtaining Ceph&lt;/h3&gt;
&lt;p&gt;There are two options:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Clone my &lt;a href="https://github.com/louwrentius/ceph"&gt;Luminous fork&lt;/a&gt;, which contains the branch 'ceph-on-arm' incorporating all the 'fixed' files that make Ceph build with Clang/LLVM.&lt;/li&gt;
&lt;li&gt;Clone the official Ceph repo and use the luminous branch. Next, edit all the relevant files and make the changes yourself. &lt;a href="https://github.com/louwrentius/ceph/compare/luminous...louwrentius:ceph-on-arm"&gt;Here&lt;/a&gt; you can find a list of all the files and the changes made to them.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I recommend the first option: just clone Ceph like this: &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;cd /mnt/ssd
git clone https://github.com/louwrentius/ceph
cd ceph
git checkout ceph-on-arm
git reset --hard
git clean -dxf
git submodule update --init --recursive
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
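&lt;p&gt;Before committing to a many-hour build, it's worth a quick sanity check that the right branch is actually checked out (the expected output assumes my fork as cloned above):&lt;/p&gt;

```shell
# Confirm the 'ceph-on-arm' branch is the one we are about to build.
cd /mnt/ssd/ceph
git symbolic-ref --short HEAD    # should print: ceph-on-arm
```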

&lt;p&gt;Now we first need to install a lot of dependencies on the Raspberry Pi before we can build Ceph. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;run ./install-deps.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This will take some time as a ton of packages will be installed. Once this is done we are ready to compile Ceph itself.&lt;/p&gt;
&lt;h3&gt;Building Ceph&lt;/h3&gt;
&lt;p&gt;So you know what you are getting into: it took me about 12 hours to compile Ceph on a Raspberry Pi 3B+.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;real    717m31.457s
user    1319m50.438s
sys 58m7.549s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This is the command to run:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;./make-debs.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If you want to monitor cpu and memory usage, you can use 'htop' to do so.&lt;/p&gt;
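&lt;p&gt;Since the build runs for many hours, usually over SSH, it's also wise to detach it from your terminal so a dropped connection doesn't kill it. A minimal sketch using nohup ('screen' or 'tmux' work just as well):&lt;/p&gt;

```shell
# Run the build detached and log everything for later inspection.
nohup ./make-debs.sh > build.log 2>&1 &
# Follow progress with: tail -f build.log
```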
&lt;p&gt;If the compile process fails and you have to restart it after making some adjustments, you can resume like this (adjust the folder name to match your Ceph version): &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nx"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;tmp&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;release&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;Raspbian&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;WORKDIR&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;ceph&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="m m-Double"&gt;12.2.9&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;39&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;gd51dfb14f4&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;edit&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;relevant&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;files&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;here&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="nx"&gt;dpkg&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;buildpackage&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;j3&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;us&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;us&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;nc&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Once this process is done, you will find a lot of .deb packages in your 
/tmp/release/Raspbian/WORKDIR folder. &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Warning&lt;/strong&gt; If you do use /tmp, the first thing to do is copy all .deb files to a safe location, because if you reboot your Pi you lose 12 hours of work. &lt;/p&gt;
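&lt;p&gt;Copying the packages to a safe location is a one-liner (a sketch; adjust the WORKDIR path to match your Ceph version):&lt;/p&gt;

```shell
# Rescue the freshly built packages from volatile /tmp.
mkdir -p /deb
cp /tmp/release/Raspbian/WORKDIR/*.deb /deb/
ls /deb/*.deb | wc -l    # sanity check: should be a few dozen packages
```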
&lt;p&gt;Assuming that you copied all .deb files to a folder like '/deb' you just created, this is how you install these packages:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;dpkg --install *.deb
apt-get install --fix-missing
apt --fix-broken install
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This is a bit ugly but it worked fine for me. &lt;/p&gt;
&lt;p&gt;You can now just copy over all the .deb files to other Raspberry Pis and install Ceph on them too. &lt;/p&gt;
&lt;p&gt;Now you are done and you can run Ceph on a Raspberry Pi 3B+. &lt;/p&gt;
&lt;h3&gt;Ceph monitors may wear out the SD card&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt; Running a Ceph monitor node on a Raspberry Pi is not ideal. The core issue is that the Ceph monitor process writes data every few seconds to files within /var/lib/ceph and this may wear out the SD card prematurely. The solution would be to use an external usb hard drive mounted through USB or a regular ssd which is way more resilient to writes than a regular SD card.&lt;/p&gt;</content><category term="Storage"></category><category term="Ceph"></category></entry><entry><title>Understanding Ceph: open-source scalable storage</title><link href="https://louwrentius.com/understanding-ceph-open-source-scalable-storage.html" rel="alternate"></link><published>2018-08-19T04:00:00+02:00</published><updated>2018-08-19T04:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2018-08-19:/understanding-ceph-open-source-scalable-storage.html</id><summary type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In this blog post I will try to explain why I believe &lt;a href="https://en.wikipedia.org/wiki/Ceph_(software)"&gt;Ceph&lt;/a&gt; is such an interesting storage solution. After you finished reading this blog post you should have a good high-level overview of Ceph.&lt;/p&gt;
&lt;p&gt;I've written this blog post purely because I'm a storage enthusiast and I find …&lt;/p&gt;</summary><content type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;In this blog post I will try to explain why I believe &lt;a href="https://en.wikipedia.org/wiki/Ceph_(software)"&gt;Ceph&lt;/a&gt; is such an interesting storage solution. After you finish reading this blog post, you should have a good high-level overview of Ceph.&lt;/p&gt;
&lt;p&gt;I've written this blog post purely because I'm a storage enthusiast and I find Ceph interesting technology.&lt;/p&gt;
&lt;h2&gt;What is Ceph?&lt;/h2&gt;
&lt;p&gt;Ceph is a software-defined storage solution that can scale both in performance and capacity. Ceph is used to build multi-petabyte storage clusters. &lt;/p&gt;
&lt;p&gt;For example, &lt;a href="https://home.cern/about"&gt;Cern&lt;/a&gt; has built a &lt;a href="https://ceph.com/community/new-luminous-scalability/"&gt;65 Petabyte&lt;/a&gt; Ceph storage cluster. I hope that number grabs your attention. I think it's amazing.&lt;/p&gt;
&lt;p&gt;The basic building block of a Ceph storage cluster is the storage node. These storage nodes are just commodity (&lt;a href="https://en.wikipedia.org/wiki/Commercial_off-the-shelf"&gt;COTS&lt;/a&gt;) servers containing a lot of hard drives and/or flash storage.&lt;/p&gt;
&lt;p&gt;&lt;img alt="storage chassis" src="https://louwrentius.com/static/images/sm36.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Example of a storage node&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Ceph is meant to scale. And you scale by adding additional storage nodes. You will need multiple servers to satisfy your capacity, performance and resiliency requirements. And as you expand the cluster with extra storage nodes, capacity, performance and resiliency (if needed) will all increase at the same time. &lt;/p&gt;
&lt;p&gt;It's that simple.&lt;/p&gt;
&lt;p&gt;You don't need to start with petabytes of storage. You can actually start very small, with just a few storage nodes and expand as your needs increase.&lt;/p&gt;
&lt;p&gt;I want to touch upon a technical detail because it illustrates the mindset surrounding Ceph. With Ceph, you don't even need a RAID controller anymore; a 'dumb' HBA is sufficient. This is possible because Ceph manages redundancy in software. A Ceph storage node at its core is more like a &lt;a href="https://en.wiktionary.org/wiki/JBOD"&gt;JBOD&lt;/a&gt;. The hardware is simple and 'dumb'; the intelligence all resides in software.&lt;/p&gt;
&lt;p&gt;This means that the risk of hardware vendor lock-in is largely mitigated. You are not tied to any particular proprietary hardware.&lt;/p&gt;
&lt;h2&gt;What makes Ceph special?&lt;/h2&gt;
&lt;p&gt;At the heart of the Ceph storage cluster is the &lt;a href="https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf"&gt;CRUSH&lt;/a&gt; algorithm, developed by &lt;a href="https://en.wikipedia.org/wiki/Sage_Weil"&gt;Sage Weil&lt;/a&gt;, the co-creator of Ceph. &lt;/p&gt;
&lt;p&gt;The CRUSH algorithm allows storage &lt;em&gt;clients&lt;/em&gt; to &lt;strong&gt;calculate&lt;/strong&gt; which storage node needs to be contacted for retrieving or storing data. The storage client can - &lt;em&gt;on its own&lt;/em&gt; - determine what to do with data or where to get it. &lt;/p&gt;
&lt;p&gt;So to reiterate: given a particular state of the storage cluster, the client can calculate which storage node to contact for storage or retrieval of data.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Why is this so special?&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Because there is &lt;strong&gt;no&lt;/strong&gt; centralised 'registry' that keeps track of the location of data on the cluster (metadata). Such a centralised registry can become:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;a performance bottleneck, preventing further expansion&lt;/li&gt;
&lt;li&gt;a single-point-of-failure&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Ceph does away with this concept of a centralised registry for data storage and retrieval. This is why Ceph can scale in capacity and performance while assuring availability.&lt;/p&gt;
&lt;p&gt;At the core of the CRUSH algorithm is the &lt;em&gt;CRUSH map&lt;/em&gt;. That map contains information about the storage nodes in the cluster. That map is the basis for the calculations a storage client needs to perform in order to decide which storage node to contact.&lt;/p&gt;
&lt;p&gt;This CRUSH map is distributed across the cluster from a special server: the 'monitor' node. Regardless of the size of the Ceph storage cluster, you typically need just three (3) monitor nodes for the whole cluster. Those nodes are contacted by both the storage nodes and the storage clients.&lt;/p&gt;
&lt;p&gt;&lt;img alt="cephoverview" src="https://louwrentius.com/static/images/cephsimple.png" /&gt;&lt;/p&gt;
&lt;p&gt;So Ceph does have some kind of centralised 'registry' but it serves a totally different purpose. It only keeps track of the state of the cluster, a task that is way easier to scale than running a 'registry' for data storage/retrieval itself. &lt;/p&gt;
&lt;p&gt;It's important to keep in mind that the Ceph monitor node does not store or process any metadata. It only keeps track of the CRUSH map for both clients and individual storage nodes. Data always flows directly from the storage node towards the client and vice versa.&lt;/p&gt;
&lt;h2&gt;Ceph Scalability&lt;/h2&gt;
&lt;p&gt;A storage client will contact the appropriate storage node directly to store or retrieve data. There are no components in between, except for the network, which you will need to size accordingly&lt;sup id="fnref:fn1"&gt;&lt;a class="footnote-ref" href="#fn:fn1"&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;Because there are no intermediate components or proxies that could potentially create a bottleneck, a Ceph cluster can really scale horizontally in both capacity and performance. &lt;/p&gt;
&lt;p&gt;And while scaling storage and performance, data is protected by redundancy.&lt;/p&gt;
&lt;h2&gt;Ceph redundancy&lt;/h2&gt;
&lt;h3&gt;Replication&lt;/h3&gt;
&lt;p&gt;In a nutshell, Ceph does 'network' RAID-1 (replication) or 'network' RAID-5/6 (erasure encoding). What do I mean by this? Imagine a RAID array, but now also imagine that instead of consisting of hard drives, the array consists of entire servers.&lt;/p&gt;
&lt;p&gt;That's what Ceph does: it distributes the data across multiple storage nodes and assures that the copy of a piece of data is never stored on the same storage node. &lt;/p&gt;
&lt;p&gt;This is what happens if a client writes two blocks of data:&lt;/p&gt;
&lt;p&gt;&lt;img alt="replication" src="https://louwrentius.com/static/images/cephreplication.png" /&gt;&lt;/p&gt;
&lt;p&gt;Notice how a copy of the data block is always replicated to other hardware.&lt;/p&gt;
&lt;p&gt;Ceph goes beyond the capabilities of regular RAID. You can configure more than one replica. You are not confined to RAID-1 with just one backup copy of your data&lt;sup id="fnref:fn2"&gt;&lt;a class="footnote-ref" href="#fn:fn2"&gt;2&lt;/a&gt;&lt;/sup&gt;. The only downside of storing more replicas is the storage cost. &lt;/p&gt;
&lt;p&gt;You may decide that data availability is so important that you may have to sacrifice space and absorb the cost. Because at scale, a simple RAID-1 replication scheme may not sufficiently cover the risk and impact of hardware failure anymore. What if two storage nodes in the cluster die? &lt;/p&gt;
&lt;p&gt;This example or consideration has nothing to do with Ceph, it's a reality you face when you operate at scale.&lt;/p&gt;
&lt;p&gt;RAID-1 or the Ceph equivalent 'replication' offers the best overall performance but as with 'regular' RAID-1, it is not very storage space efficient. Especially if you need more than one replica of the data to achieve the level of redundancy you need.&lt;/p&gt;
&lt;p&gt;This is why we used RAID-5 and RAID-6 in the past as an alternative to RAID-1 or RAID-10. Parity RAID assures redundancy but with much less storage overhead at the cost of storage performance (mostly write performance). Ceph uses 'erasure encoding' to achieve a similar result.&lt;/p&gt;
&lt;h3&gt;Erasure Encoding&lt;/h3&gt;
&lt;p&gt;With Ceph you are not confined to the limits of RAID-5/RAID-6 with just one or two 'redundant disks' (in Ceph's case storage nodes). Ceph allows you to use &lt;a href="https://ceph.com/community/new-luminous-erasure-coding-rbd-cephfs/"&gt;&lt;em&gt;Erasure Encoding&lt;/em&gt;&lt;/a&gt;, a technique that lets you tell Ceph this: &lt;/p&gt;
&lt;p&gt;&lt;em&gt;"I want you to chop up my data in 8 data segments and 4 parity segments"&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="erasureencodig" src="https://louwrentius.com/static/images/erasureencoding.png" /&gt;&lt;/p&gt;
&lt;p&gt;These segments are then scattered across the storage nodes, and this allows you to lose up to four entire hosts before you hit trouble. You will have only 33% storage overhead for redundancy, instead of the 50% (or even more) you may face using replication, depending on how many copies you want. &lt;/p&gt;
&lt;p&gt;This example does assume that you have at least 8 + 4 = 12 storage nodes. But any scheme will do: you could do 6 data segments + 2 parity segments (similar to RAID-6) with only 8 hosts. I think you catch the idea.&lt;/p&gt;
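&lt;p&gt;The overhead numbers are simple arithmetic: with k data segments and m parity segments, the redundancy overhead is m/(k+m). A quick illustration in shell, using the two schemes from above:&lt;/p&gt;

```shell
# Storage overhead of two erasure coding schemes.
k=8; m=4
echo "8+4 overhead: $(( 100 * m / (k + m) ))%"   # 33%
k=6; m=2
echo "6+2 overhead: $(( 100 * m / (k + m) ))%"   # 25%
```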
&lt;h2&gt;Ceph failure domains&lt;/h2&gt;
&lt;p&gt;Ceph is datacenter-aware. What do I mean by that? Well, the CRUSH map can represent your physical datacenter topology, consisting of racks, rows, rooms, floors, datacenters and so on. You can fully customise your topology.&lt;/p&gt;
&lt;p&gt;This allows you to create very clear data storage policies that Ceph will use to assure the cluster can tolerate failures across certain boundaries. &lt;/p&gt;
&lt;p&gt;An example of a topology:&lt;/p&gt;
&lt;p&gt;&lt;img alt="topology" src="https://louwrentius.com/static/images/cephtopology.png" /&gt;&lt;/p&gt;
&lt;p&gt;If you want, you can lose a whole rack. Or a whole row of racks and the cluster could still be fully operational, although with reduced performance and capacity.&lt;/p&gt;
&lt;p&gt;That much redundancy may cost so much storage that you may not want to employ it for all of your data. That's no problem. You can create multiple storage pools that each have their own protection level and thus cost.&lt;/p&gt;
&lt;h2&gt;How do you use Ceph?&lt;/h2&gt;
&lt;p&gt;Ceph at its core is an object storage solution. Librados is the library you can include within your software project to access Ceph storage natively. There are Librados implementations for the following programming languages:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;C(++)&lt;/li&gt;
&lt;li&gt;Java&lt;/li&gt;
&lt;li&gt;Python&lt;/li&gt;
&lt;li&gt;PHP&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Many people are looking for more traditional storage solutions, like &lt;a href="http://docs.ceph.com/docs/mimic/rbd/"&gt;block storage&lt;/a&gt; for storing virtual machines, a POSIX compliant &lt;a href="http://docs.ceph.com/docs/mimic/cephfs/"&gt;shared file system&lt;/a&gt; or &lt;a href="http://docs.ceph.com/docs/mimic/radosgw/"&gt;S3/OpenStack Swift compatible&lt;/a&gt; object storage. &lt;/p&gt;
&lt;p&gt;Ceph provides &lt;em&gt;all those features&lt;/em&gt; in addition to its native object storage format.&lt;/p&gt;
&lt;p&gt;I myself am mostly interested in block storage (RADOS Block Device, or RBD) for storing virtual machines. As Linux has native support for RBD, it makes total sense to use Ceph as a storage backend for OpenStack or plain KVM.&lt;/p&gt;
&lt;p&gt;With very recent versions of Ceph, native support for &lt;a href="http://docs.ceph.com/docs/mimic/rbd/iscsi-target-cli/"&gt;iSCSI&lt;/a&gt; has been added to expose block storage to non-native clients like VMware or Windows. For the record, I have no personal experience with this feature (yet).&lt;/p&gt;
&lt;h3&gt;The Object Storage Daemon (OSD)&lt;/h3&gt;
&lt;p&gt;In this section we zoom in a little bit more into the technical details of Ceph.&lt;/p&gt;
&lt;p&gt;If you read about Ceph, you read a lot about the OSD or object storage daemon. This is a service (daemon) that runs on the storage node. The OSD is the actual workhorse of Ceph; it serves the data from the hard drive or ingests it and stores it on the drive. The OSD also assures storage redundancy by replicating data to other OSDs based on the CRUSH map.&lt;/p&gt;
&lt;p&gt;To be precise: for every hard drive or solid state drive in the storage node, an OSD will be active. Does your storage node have 24 hard drives? Then it runs 24 OSDs. &lt;/p&gt;
&lt;p&gt;And when a drive goes down, the OSD will go down too, and the monitor nodes will distribute an updated CRUSH map so the clients are aware and know where to get the data. The OSDs also respond to this update: because redundancy is lost, they may start to replicate non-redundant data to make it redundant again (across fewer nodes).&lt;/p&gt;
&lt;p&gt;When the drive is replaced, the cluster will 'self-heal'. This means that the new drive will be filled with data once again to make sure data is spread evenly across all drives within the cluster.&lt;/p&gt;
&lt;p&gt;So maybe it's interesting to realise that storage clients effectively directly talk to the OSDs that in turn talk to the individual hard drives. There aren't many components between the client and the data itself. &lt;/p&gt;
&lt;p&gt;&lt;img alt="cephdiagram01" src="https://louwrentius.com/static/images/cephdiagram01.png" /&gt;&lt;/p&gt;
&lt;h2&gt;Closing words&lt;/h2&gt;
&lt;p&gt;I hope that this blog post has helped you understand how Ceph works and why it is so interesting. If you have any questions or feedback please feel free to comment or email me.&lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:fn1"&gt;
&lt;p&gt;If you have a ton of high-volume sequential data storage traffic, you should realise that a single host with a ton of drives can easily saturate 10Gbit or theoretically even 40Gbit. I'm assuming 150 MB/s per hard drive. With 36 hard drives you would face 5.4 GB/s. Even if you only would run half that speed, you would need to bond multiple 10Gbit interfaces to sustain this load. Imagine the requirements for your core network. But it really depends on your workload. You will never reach this kind of throughput with a ton of random I/O unless you are using SSDs, for instance.&amp;#160;&lt;a class="footnote-backref" href="#fnref:fn1" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:fn2"&gt;
&lt;p&gt;Please note that in production setups, it's the default to have a total of 3 instances of a data block. So that means 'the original' plus two extra copies. See also &lt;a href="http://docs.ceph.com/docs/jewel/rados/operations/pools/#set-the-number-of-object-replicas"&gt;this link&lt;/a&gt;. Thanks to sep76 from Reddit to &lt;a href="https://www.reddit.com/r/ceph/comments/98khhw/understanding_ceph_opensource_scalable_storage/e4ha7ss/"&gt;point out&lt;/a&gt; that the default is 3 instances of your data.&amp;#160;&lt;a class="footnote-backref" href="#fnref:fn2" title="Jump back to footnote 2 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Storage"></category><category term="Ceph"></category></entry><entry><title>Setup a VPN on your iPhone with OpenVPN and Linux</title><link href="https://louwrentius.com/setup-a-vpn-on-your-iphone-with-openvpn-and-linux.html" rel="alternate"></link><published>2018-06-18T04:00:00+02:00</published><updated>2018-06-18T04:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2018-06-18:/setup-a-vpn-on-your-iphone-with-openvpn-and-linux.html</id><summary type="html">&lt;p&gt;&lt;strong&gt;[Update 2018]&lt;/strong&gt; This article has been substantially updated since it was published in 2013.&lt;/p&gt;
&lt;h3&gt;Introduction&lt;/h3&gt;
&lt;p&gt;In this article, I will show you how to setup a Linux-based &lt;a href="https://openvpn.net/index.php/open-source.html"&gt;OpenVPN&lt;/a&gt; server. Once this server is up and running, I'll show you how to setup your iOS devices, such as your iPhone or …&lt;/p&gt;</summary><content type="html">&lt;p&gt;&lt;strong&gt;[Update 2018]&lt;/strong&gt; This article has been substantially updated since it was published in 2013.&lt;/p&gt;
&lt;h3&gt;Introduction&lt;/h3&gt;
&lt;p&gt;In this article, I will show you how to set up a Linux-based &lt;a href="https://openvpn.net/index.php/open-source.html"&gt;OpenVPN&lt;/a&gt; server. Once this server is up and running, I'll show you how to set up your iOS devices, such as your iPhone or iPad, so that they can connect to your new VPN server.&lt;/p&gt;
&lt;p&gt;The goal of this effort is to tunnel all internet traffic through your VPN connection, so that no matter where you are, nobody can monitor which sites you visit and what you do. This is ideal if you have to access the internet through untrusted networks, such as public Wi-Fi.&lt;/p&gt;
&lt;p&gt;Some typical scenarios would be: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;you run an OpenVPN service on your Linux-based home router directly&lt;/li&gt;
&lt;li&gt;you run an OpenVPN service on a device behind your home router using port forwarding (like a Raspberry Pi)&lt;/li&gt;
&lt;li&gt;you run an OpenVPN service on a VPS hosted by one of many cloud service providers&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Your iOS devices will be running &lt;a href="https://itunes.apple.com/us/app/openvpn-connect/id590379981"&gt;OpenVPN Connect&lt;/a&gt;, a free application found in the App Store. &lt;/p&gt;
&lt;p&gt;&lt;img alt="screenshot" src="https://louwrentius.com/static/images/openvpn.png" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;A note on other platforms:&lt;/strong&gt; Although this tutorial is focused on iOS devices, your new OpenVPN-based VPN server will support any client OS, be it Windows, MacOS, Android or Linux. Configuration of these other clients is out-of-scope for this article.&lt;/p&gt;
&lt;p&gt;This tutorial is based on &lt;a href="https://openvpn.net/index.php/open-source.html"&gt;OpenVPN&lt;/a&gt;, an open-source product. The company behind OpenVPN also offers &lt;a href="https://www.privatetunnel.com"&gt;VPN services&lt;/a&gt; for a price per month. If you find the effort of setting up your own server too much of a hassle, you could look into &lt;a href="https://www.privatetunnel.com"&gt;their service&lt;/a&gt;. Please note that I have never used this service and cannot vouch for it.&lt;/p&gt;
&lt;p&gt;This is a brief overview of all the steps you will need to take in order to have a fully functional setup, including configuration of the clients:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Install a Linux server (out-of-scope)&lt;/li&gt;
&lt;li&gt;Install the OpenVPN software&lt;/li&gt;
&lt;li&gt;Setup the Certificate Authority&lt;/li&gt;
&lt;li&gt;Generate the server certificate&lt;/li&gt;
&lt;li&gt;Configure the OpenVPN server configuration&lt;/li&gt;
&lt;li&gt;Configure the firewall on your Linux server&lt;/li&gt;
&lt;li&gt;Generate certificates for every client (iPhone, iPad, and so on)&lt;/li&gt;
&lt;li&gt;Copy the client configuration to your devices&lt;/li&gt;
&lt;li&gt;Test your clients&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;How It Works&lt;/h3&gt;
&lt;p&gt;OpenVPN is an SSL-based VPN solution. SSL-based VPNs are very reliable because, if you set them up properly, you will never be blocked by any firewall as long as TCP-port 443 is accessible. By default, OpenVPN uses UDP as a transport on port 1194, but you can switch to TCP-port 443 to increase the chance that your traffic will not be blocked, at the cost of a little bit more bandwidth usage.&lt;/p&gt;
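&lt;p&gt;Switching transports comes down to two directives in the server configuration (the example server configuration later in this article uses exactly these); the client configuration must use a matching TCP proto setting:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# the default is proto udp on port 1194
port 443
proto tcp-server
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;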
&lt;p&gt;&lt;strong&gt;Authentication&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Authentication is based on public/private key cryptography. The OpenVPN server is similar to an HTTPS server. The biggest difference is that your device doesn't use a username/password combination for authentication, but a certificate. This certificate is stored within the client configuration file.&lt;/p&gt;
&lt;p&gt;So before you can configure and start your OpenVPN service, you need to set up a Certificate Authority (CA). With the CA you can create the server certificate for your OpenVPN server and, after that's done, generate all client certificates.&lt;/p&gt;
&lt;h3&gt;OpenVPN installation&lt;/h3&gt;
&lt;p&gt;OpenVPN is available in the package repositories of most common Linux distros. On any Debian or Ubuntu version, 'apt-get install openvpn' is all you need to install OpenVPN. &lt;/p&gt;
&lt;p&gt;Or take a look &lt;a href="http://openvpn.net/index.php/open-source/documentation/howto.html#install"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I have never tried it out myself, but you could also take a look at this &lt;a href="https://github.com/Nyr/openvpn-install"&gt;OpenVPN install script&lt;/a&gt;, which seems to automate a lot of steps, like firewall configuration, certificate generation, and so on.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;It's out-of-scope for this tutorial, but you should make sure that you keep your OpenVPN software &lt;em&gt;up-to-date&lt;/em&gt;, in case security vulnerabilities are discovered in OpenVPN in the future.&lt;/p&gt;
&lt;h3&gt;Security&lt;/h3&gt;
&lt;p&gt;I'm creating this tutorial on an older system, with less secure default configuration settings for both the Certificate Authority and the OpenVPN server itself. The settings I use in this tutorial are based on the steps in &lt;a href="https://blog.g3rt.nl/openvpn-security-tips.html"&gt;this blog&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;Notable improvements:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;AES256 for encryption&lt;/li&gt;
&lt;li&gt;2048-bit keys over 1024-bit keys&lt;/li&gt;
&lt;li&gt;SHA256 over SHA1/MD5&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Performance&lt;/h3&gt;
&lt;p&gt;I did some performance tests and got around 40-50 Mbit/s per iOS client.
I believe that the bottleneck is my old HP Microserver N40L with its relatively weak CPU.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Traffic Shaping&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If you want to limit how much bandwidth a client is allowed to use, I recommend to use &lt;a href="https://serverfault.com/questions/777875/how-to-do-traffic-shaping-rate-limiting-with-tc-per-openvpn-client"&gt;this tutorial&lt;/a&gt;. I have tried it out and it works perfectly.&lt;/p&gt;
&lt;h3&gt;Creating a Certificate Authority&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;For Ubuntu&lt;/em&gt;: install the package "easy-rsa" and use the 'make-cadir' command instead of the setup instructions below. &lt;/p&gt;
&lt;p&gt;I assume that you will put your OpenVPN configuration in /etc/openvpn.
Before you can set up the server configuration, you need to create a certificate authority. I used the folder /etc/openvpn/easy-rsa as the location for my CA. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mkdir /etc/openvpn/easy-rsa
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;We start by copying the easy-rsa example files to this new directory:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;cp -R /usr/share/doc/openvpn/examples/easy-rsa/2.0* /etc/openvpn/easy-rsa
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Please note that depending on your Linux flavour, these files may be found at some other path.&lt;/p&gt;
&lt;p&gt;Next, we cd into the destination directory. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;cd /etc/openvpn/easy-rsa
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Now, open the 'vars' file with your favorite text editor.
The following instructions are straight from the &lt;a href="http://openvpn.net/index.php/open-source/documentation/howto.html#pki"&gt;OpenVPN howto&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You should change all the values to ones that apply to you (obviously).&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;export KEY_COUNTRY=&amp;quot;US&amp;quot;
export KEY_PROVINCE=&amp;quot;California&amp;quot;
export KEY_CITY=&amp;quot;San Francisco&amp;quot;
export KEY_ORG=&amp;quot;My Company&amp;quot;
export KEY_EMAIL=&amp;quot;my@mail.com&amp;quot;
export KEY_CN=server
export KEY_NAME=server
export KEY_OU=home
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Change the KEY_SIZE parameter:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;export KEY_SIZE=2048
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;How long would you like your certificates to be valid (10 years?)&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;export CA_EXPIRE=3650
export KEY_EXPIRE=3650
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Then I had to copy openssl-1.0.0.cnf to openssl.cnf because the 'vars' script complained that it couldn't find the latter file.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;cp openssl-1.0.0.cnf openssl.cnf
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Notice&lt;/strong&gt; I went through these steps on an older Linux installation. I had to edit the file &lt;em&gt;/etc/openvpn/easy-rsa/pkitool&lt;/em&gt; and changed all occurrences of &lt;em&gt;'sha1'&lt;/em&gt; to &lt;em&gt;'sha256'&lt;/em&gt;. &lt;/p&gt;
&lt;p&gt;Now we 'source' vars and run three additional commands that actually generate the certificate authority and the Diffie-Hellman parameters. Notice the dot before ./vars.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;. ./vars
./clean-all
./build-ca
./build-dh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You will have to confirm the values or change them if necessary.&lt;/p&gt;
&lt;p&gt;Now we have a certificate authority and we can create new certificates that will be signed by this authority. &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;WARNING:&lt;/strong&gt; be extremely careful with all key files, they should be kept private.&lt;/p&gt;
&lt;p&gt;I would recommend performing these commands:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;chown -R root:root /etc/openvpn 
chmod -R 700 /etc/openvpn
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;By default, OpenVPN runs as root. With these commands, only the root user will be able to access the keys. If you don't run OpenVPN as root, you must select the appropriate user for the first command. See also &lt;a href="https://community.openvpn.net/openvpn/wiki/UnprivilegedUser"&gt;this article&lt;/a&gt;.&lt;/p&gt;
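&lt;p&gt;As a quick sanity check, you can list any key or certificate files that are still readable by group or others. This is just an illustrative sketch of mine, not something from the OpenVPN tooling, and it assumes GNU find for the -perm /mode syntax:&lt;/p&gt;

```shell
# Illustrative sketch: list key/certificate files under a directory that
# are readable or writable by group or others. Requires GNU find
# (-perm /MODE matches if any of the given permission bits are set).
find_loose_keys() {
    dir="$1"
    find "$dir" -type f \( -name '*.key' -o -name '*.crt' \) -perm /077
}
```

&lt;p&gt;Running it against /etc/openvpn after the chmod commands above should print nothing.&lt;/p&gt;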
&lt;h3&gt;Creating the Server Certificate&lt;/h3&gt;
&lt;p&gt;We create the server certificate:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;./build-key-server server
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;It's up to you to come up with an alternative for 'server'. This is the file name under which the key files and certificates are stored. &lt;/p&gt;
&lt;p&gt;All files that are generated can be found in the '/etc/openvpn/easy-rsa/keys' directory. This is just a flat folder with both the server and client keys.&lt;/p&gt;
&lt;h3&gt;Creating the Optional TLS-AUTH Key&lt;/h3&gt;
&lt;p&gt;This step is optional, but it doesn't take much effort and it seems to add an additional security layer at no significant cost. In this step we create an additional secret key that is shared by the server and all clients.&lt;/p&gt;
&lt;p&gt;The following steps are based on &lt;a href="https://community.openvpn.net/openvpn/wiki/Hardening"&gt;this article&lt;/a&gt; (use of -tls-auth).&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;cd /etc/openvpn/easy-rsa/keys
openvpn --genkey --secret ta.key
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;We will reference this key file when we create the server configuration.&lt;/p&gt;
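&lt;p&gt;A detail worth knowing: the tls-auth directive takes an optional key-direction parameter. When it is omitted on both sides, as in the server configuration shown later in this article, the key is used bidirectionally; when it is used, the server takes direction 0 and every client direction 1:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# server side
tls-auth easy-rsa/keys/ta.key 0
# client side (as a directive in the .ovpn file)
key-direction 1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;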
&lt;h3&gt;Creating the Client Certificate&lt;/h3&gt;
&lt;p&gt;Now that we have a server certificate, we are going to create a certificate for our iPhone (or any other iOS device).&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;./build-key iphone
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Answer the questions with the defaults. Don't forget to answer these questions:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Sign the certificate? [y/n]:y
1 out of 1 certificate requests certified, commit? [y/n]y
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;So now we have everything in place to start creating an OpenVPN configuration.
We must create a configuration for both the server and the client. Those configurations are based on the examples that can be found in /usr/share/doc/openvpn/examples/.&lt;/p&gt;
&lt;h3&gt;Example Server configuration&lt;/h3&gt;
&lt;p&gt;This is my operational server configuration. It is stored in /etc/openvpn/openvpn.conf:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;dev tun2
tls-server
cipher AES-256-CBC
auth SHA256
remote-cert-tls client
dh easy-rsa/keys/dh2048.pem
ca easy-rsa/keys/ca.crt
cert easy-rsa/keys/server.crt
key easy-rsa/keys/server.key
tls-auth easy-rsa/keys/ta.key
server 10.0.0.0 255.255.255.0
log /var/log/openvpn.log
script-security 2
route-up &amp;quot;/sbin/ifconfig tun2 up&amp;quot;
port 443
proto tcp-server
push &amp;quot;redirect-gateway def1 bypass-dhcp&amp;quot;
push &amp;quot;dhcp-option DNS 8.8.8.8&amp;quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I believe you should be able to use this configuration as-is. Depending on your local IP-addresses within your own network, you may have to change the &lt;em&gt;server&lt;/em&gt; section.&lt;/p&gt;
&lt;p&gt;I use TCP-port 443 as this destination port is almost never blocked: blocking it would break access to most secure web sites. (The downside is that I can no longer host a secure web site on this IP-address.) &lt;/p&gt;
&lt;p&gt;The OpenVPN service will provide your client with an IP-address within the address range configured in the 'server' section. &lt;/p&gt;
&lt;p&gt;Change any parameters if required and then start or restart the OpenVPN service:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;/etc/init.d/openvpn restart
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Check /var/log/openvpn.log to make sure that the server is running properly.&lt;/p&gt;
&lt;p&gt;If you want to use your VPN to browse the internet, we still need to configure a basic firewall setup. &lt;/p&gt;
&lt;p&gt;I'm assuming that you already have some kind of IPtables-based firewall running. 
Configuring a Linux firewall is out-of-scope for this article. I will only discuss the changes you may need to make for the OpenVPN service to operate properly.&lt;/p&gt;
&lt;p&gt;You will need to accept traffic to TCP port 443 on the interface connected to the internet. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If your OpenVPN server is behind a router/firewall, you need to configure port-forwarding on that router/firewall. How to do so is out-of-scope for this article, as it is different for different devices.&lt;/p&gt;
&lt;p&gt;Assuming that you will - for example - use the 10.0.0.0/24 network for VPN clients such as your iPhone, you must also create a NAT rule so VPN clients can use the IP-address of the Linux server to access the internet. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;iptables -t nat -A POSTROUTING -s &amp;quot;10.0.0.0/24&amp;quot; -o &amp;quot;eth0&amp;quot; -j MASQUERADE
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Please note that you must replace eth0 with the name of the appropriate interface that connects to the internet. Change the IP-address range according to your own situation; it should not conflict with your existing network.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;iptables -A FORWARD -p tcp -s 10.0.0.0/24 -d 0.0.0.0/0 -j ACCEPT
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Please note that I haven't tested these rules, as I have a different setup, but this should be sufficient. Also make sure that forwarding is enabled like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;echo 1 &amp;gt; /proc/sys/net/ipv4/ip_forward
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
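&lt;p&gt;Note that writing to /proc enables forwarding only until the next reboot. To make the setting persistent, most distributions read /etc/sysctl.conf (or files under /etc/sysctl.d/) at boot:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# /etc/sysctl.conf
net.ipv4.ip_forward = 1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Run 'sysctl -p' to apply it immediately.&lt;/p&gt;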

&lt;h3&gt;Example Client configuration&lt;/h3&gt;
&lt;p&gt;Most OpenVPN clients can automatically import files with the .ovpn file extension. A typical configuration file is something like 'iphone.ovpn'. &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Warning:&lt;/strong&gt; the .ovpn files will contain the certificate used by your iPhone/iPad to authenticate against your OpenVPN server. Be very careful where you store this file. Anyone who is able to obtain a copy of this file will be able to connect to your VPN server.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://gist.github.com/renatolfc/f6c9e2a5bd6503005676"&gt;This is an example configuration file&lt;/a&gt;, but we are not going to create it by hand, it's too much work.&lt;/p&gt;
&lt;p&gt;What you will notice from this example is that the .ovpn file contains both the client configuration and all the required certificates:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;the CA root certificate&lt;/li&gt;
&lt;li&gt;the server certificate to validate the server&lt;/li&gt;
&lt;li&gt;the client private certificate &lt;/li&gt;
&lt;li&gt;the TLS-AUTH key (an optional extra security measure)&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Create a client configuration file (.ovpn) with a script&lt;/h3&gt;
&lt;p&gt;You can create your client configuration file manually, but that is a lot of work, because you need to append all the certificates to a single file that also contains the configuration settings. &lt;/p&gt;
&lt;p&gt;So we will use a script to setup the client configuration. &lt;/p&gt;
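&lt;p&gt;In essence, all the script does is append each certificate to the configuration template between OpenVPN's inline-file tags (&amp;lt;ca&amp;gt;, &amp;lt;cert&amp;gt;, &amp;lt;key&amp;gt; and &amp;lt;tls-auth&amp;gt;). The following is just an illustrative sketch of that idea, not the actual script we will download below:&lt;/p&gt;

```shell
# Illustrative sketch of what create-client-config.sh does, assuming the
# file layout used in this tutorial. printf's \074 and \076 are the octal
# escapes for the angle brackets that delimit OpenVPN inline-file sections.
make_ovpn() {
    name="$1"      # client name, e.g. 'iphone'
    keys="$2"      # easy-rsa keys directory
    template="$3"  # client configuration template file
    out="$4"       # resulting .ovpn file
    cp "$template" "$out"
    for pair in "ca:$keys/ca.crt" "cert:$keys/$name.crt" \
                "key:$keys/$name.key" "tls-auth:$keys/ta.key"; do
        tag="${pair%%:*}"                   # part before the colon
        src="${pair#*:}"                    # part after the colon
        {
            printf '\074%s\076\n' "$tag"    # opening inline tag
            cat "$src"                      # certificate contents
            printf '\074/%s\076\n' "$tag"   # closing inline tag
        } >> "$out"
    done
}
```

&lt;p&gt;The real script adds error checking on top of this, which is why we use it instead.&lt;/p&gt;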
&lt;p&gt;First we are going to create a folder where our client configuration files will be stored.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mkdir /etc/openvpn/clientconfig
chmod 700 /etc/openvpn/clientconfig
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Now we will download the script and the accompanying configuration template file. Notice that the links may wrap.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;cd /etc/openvpn
wget https://raw.githubusercontent.com/louwrentius/openvpntutorial/master/create-client-config.sh
wget https://raw.githubusercontent.com/louwrentius/openvpntutorial/master/client-config-template
chmod +x create-client-config.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Please note that you first need to create the certificates for your devices before you can generate a configuration file. So please go back to that step if you need to.&lt;/p&gt;
&lt;p&gt;Also take note of the names you have used for your devices. You can always take a look in /etc/openvpn/easy-rsa/keys to see what your devices are called.&lt;/p&gt;
&lt;p&gt;Now, edit the client-config-template file and change the appropriate values where required. You probably only need to change the first line:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;remote &amp;lt;your server DNS address or IP address&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Now you are ready to run the script and generate the config file for your device.&lt;/p&gt;
&lt;p&gt;When you run this script, a configuration file is generated and placed into the folder /etc/openvpn/clientconfig. &lt;/p&gt;
&lt;p&gt;The script just puts the client configuration template and all required certificates in one file. This is how you use it:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;./create-client-config.sh iPhone
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Some output you will notice when running the script:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;user@server:/etc/openvpn# ./create-client-config.sh iphone
Client&amp;#39;s cert found: /etc/openvpn/easy-rsa/keys/iphone
Client&amp;#39;s Private Key found: /etc/openvpn/easy-rsa/keys/iphone.key
CA public Key found: /etc/openvpn/easy-rsa/keys/ca.crt
tls-auth Private Key found: /etc/openvpn/easy-rsa/keys/ta.key
Done! /etc/openvpn/clientconfig/iphone.ovpn Successfully Created.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You should now find a file called 'iphone.ovpn' in the directory /etc/openvpn/clientconfig.&lt;/p&gt;
&lt;p&gt;We are almost there. We just need to copy this file to your iOS device. &lt;/p&gt;
&lt;p&gt;You have three options:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Use iCloud Drive&lt;/li&gt;
&lt;li&gt;Use iTunes&lt;/li&gt;
&lt;li&gt;Use email (obviously insecure and not discussed)&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Setting up your iPhone or iPad with iCloud Drive&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;First install the &lt;a href="https://itunes.apple.com/us/app/openvpn-connect/id590379981"&gt;OpenVPN Connect&lt;/a&gt; application if you haven't done so.&lt;/li&gt;
&lt;li&gt;Copy the .ovpn file from your OpenVPN server to your iCloud Drive.&lt;/li&gt;
&lt;li&gt;Take your device and use the 'files' browser to navigate within your iCloud drive to the .ovpn file you just copied.&lt;/li&gt;
&lt;li&gt;Tap on the file to download and open it.&lt;/li&gt;
&lt;li&gt;Now comes the tricky part: press the share symbol&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img alt="step 1" src="https://louwrentius.com/static/images/openvpnstep1.png" /&gt;&lt;/p&gt;
&lt;p&gt;Open the file with the OpenVPN application on your iOS device:&lt;/p&gt;
&lt;p&gt;&lt;img alt="step 2" src="https://louwrentius.com/static/images/openvpnstep2.png" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="step 3" src="https://louwrentius.com/static/images/openvpnstep3.png" /&gt;&lt;/p&gt;
&lt;p&gt;When you get the question "OpenVPN would like to Add VPN Configurations", choose 'Allow'. &lt;/p&gt;
&lt;p&gt;Continue with the step 'Test your iOS device'.&lt;/p&gt;
&lt;p&gt;If the OpenVPN Connect client doesn't import the file, remove the application from the device and re-install it. (This is what I had to do on my iPad).&lt;/p&gt;
&lt;h3&gt;Setting up your iPhone or iPad with iTunes&lt;/h3&gt;
&lt;p&gt;You can skip this step if you used iCloud Drive to copy the .ovpn profile to your device. &lt;/p&gt;
&lt;p&gt;You need to get the following files on your iOS device:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;iphone.ovpn
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Copy this file from your OpenVPN server to the computer running iTunes.
Then connect your device to iTunes with a cable.  &lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Open iTunes&lt;/li&gt;
&lt;li&gt;Select your device at the top right&lt;/li&gt;
&lt;li&gt;Go to the Apps tab&lt;/li&gt;
&lt;li&gt;Scroll to the file sharing section&lt;/li&gt;
&lt;li&gt;Select the OpenVPN application&lt;/li&gt;
&lt;li&gt;Add the iphone.ovpn &lt;/li&gt;
&lt;li&gt;Sync your device&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Test your iOS device&lt;/h3&gt;
&lt;p&gt;Open the OpenVPN client. You will see a notice that a new configuration has been
imported and you need to accept this configuration. &lt;/p&gt;
&lt;p&gt;As it might not work straight away, you need to monitor /var/log/openvpn.log on the server to watch for any errors. &lt;/p&gt;
&lt;p&gt;Now try to connect and enjoy.&lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;You should be able to keep your VPN enabled at all times because battery usage overhead should be minimal. If you are unable to connect to your VPN when you are at home behind your own firewall, you need to check your firewall settings.&lt;/p&gt;
&lt;p&gt;Updated 20130123 with keepalive option.
Updated 20130801 with extra server push options for traffic redirection and DNS configuration.
Updated 20180618 as substantial rewrite of the original outdated article.&lt;/p&gt;</content><category term="Networking"></category></entry><entry><title>HP Proliant Microserver Gen10 as router or NAS</title><link href="https://louwrentius.com/hp-proliant-microserver-gen10-as-router-or-nas.html" rel="alternate"></link><published>2017-09-14T12:00:00+02:00</published><updated>2017-09-14T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2017-09-14:/hp-proliant-microserver-gen10-as-router-or-nas.html</id><summary type="html">&lt;h3&gt;Introduction&lt;/h3&gt;
&lt;p&gt;In the summer of 2017, HP released the &lt;a href="https://www.hpe.com/h20195/v2/GetDocument.aspx?docname=a00008701enw&amp;amp;doctype=quickspecs&amp;amp;doclang=EN_US&amp;amp;searchquery=&amp;amp;cc=us&amp;amp;lc=en"&gt;Proliant Microserver Gen10&lt;/a&gt;. This machine replaces the older Gen8 model.&lt;/p&gt;
&lt;p&gt;&lt;img alt="gen10" src="https://louwrentius.com/static/images/gen10.png" /&gt;&lt;/p&gt;
&lt;p&gt;For hobbyists, the Microserver always has been an interesting device for a custom home NAS build or as a router. &lt;/p&gt;
&lt;p&gt;Let's find out if this is still the case. &lt;/p&gt;
&lt;h3&gt;Price&lt;/h3&gt;
&lt;p&gt;In …&lt;/p&gt;</summary><content type="html">&lt;h3&gt;Introduction&lt;/h3&gt;
&lt;p&gt;In the summer of 2017, HP released the &lt;a href="https://www.hpe.com/h20195/v2/GetDocument.aspx?docname=a00008701enw&amp;amp;doctype=quickspecs&amp;amp;doclang=EN_US&amp;amp;searchquery=&amp;amp;cc=us&amp;amp;lc=en"&gt;Proliant Microserver Gen10&lt;/a&gt;. This machine replaces the older Gen8 model.&lt;/p&gt;
&lt;p&gt;&lt;img alt="gen10" src="https://louwrentius.com/static/images/gen10.png" /&gt;&lt;/p&gt;
&lt;p&gt;For hobbyists, the Microserver always has been an interesting device for a custom home NAS build or as a router. &lt;/p&gt;
&lt;p&gt;Let's find out if this is still the case. &lt;/p&gt;
&lt;h3&gt;Price&lt;/h3&gt;
&lt;p&gt;In The Netherlands, the price of the entry-level model is similar to the Gen8: around €220 including taxes. &lt;/p&gt;
&lt;h3&gt;CPU&lt;/h3&gt;
&lt;p&gt;The new AMD X3216 processor has slightly better single-threaded performance &lt;a href="https://www.cpubenchmark.net/compare.php?cmp[]=2075&amp;amp;cmp[]=3069"&gt;compared to&lt;/a&gt; the older G1610t in the Gen8. Overall, both devices seem to have similar CPU performance.&lt;/p&gt;
&lt;p&gt;The biggest difference is the TDP: 35 Watt for the Celeron vs 15 Watt for the AMD CPU. &lt;/p&gt;
&lt;h3&gt;Memory&lt;/h3&gt;
&lt;p&gt;By default, it has 8 GB of unbuffered ECC memory, that's 4 GB more than the old model. Only one of the two memory slots is occupied, so you can double that amount just by adding another 8 GB stick. It seems that 32 GB is the maximum. &lt;/p&gt;
&lt;h3&gt;Storage&lt;/h3&gt;
&lt;p&gt;This machine has retained the four 3.5" drive slots, but there are no drive brackets anymore. Before inserting a hard drive, you need to remove a bunch of screws from the front of the chassis and put four of them in the mounting holes of each drive. These screws then guide the drive through grooves into the drive slot. This caddy-less design works perfectly and the drive is mounted rock-solid in its position.&lt;/p&gt;
&lt;p&gt;To pop a drive out, you have to press the appropriate blue lever, which latches on to one of the front screws mounted on your drive and pulls it out of the slot.&lt;/p&gt;
&lt;p&gt;There are two on-board SATA controllers. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="mo"&gt;00&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;11.0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SATA&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;controller&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Advanced&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Micro&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Devices&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Inc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;AMD&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;FCH&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SATA&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Controller&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;AHCI&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;mode&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rev&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;49&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="mo"&gt;01&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;00.0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SATA&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;controller&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Marvell&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Technology&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Group&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Ltd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;88&lt;/span&gt;&lt;span class="n"&gt;SE9230&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;PCIe&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;SATA&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="n"&gt;Gb&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Controller&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rev&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The Marvell controller is connected to the four drive bays. The AMD controller is probably connected to the fifth on-board SATA port.&lt;/p&gt;
&lt;p&gt;As with the Gen8, you need a floppy-power-connector-to-sata-power-connector cable if you want to use a SATA drive with the fifth onboard SATA port.&lt;/p&gt;
&lt;p&gt;Thanks to the internal SATA header and the USB 2.0 header, you could decide to run the OS from a separate device without redundancy and use all four drive bays for storage. As solid state drives tend to be very reliable, you may use a small SSD to keep the cost and power usage down and still retain reliability (although not the level of reliability RAID1 provides). &lt;/p&gt;
&lt;h3&gt;Networking&lt;/h3&gt;
&lt;p&gt;Just as the Gen8, the Gen10 has two Gigabit network cards. The brand and model is Broadcom Limited NetXtreme BCM5720.&lt;/p&gt;
&lt;p&gt;As tested with &lt;em&gt;iperf3&lt;/em&gt; I get full 1 Gbit network performance. No problems here (tested on CentOS 7).&lt;/p&gt;
&lt;h3&gt;PCIe slots&lt;/h3&gt;
&lt;p&gt;This model has two half-height PCIe slots (1x and 8x in a 4x and 8x physical slot) which is an improvement over the single PCIe slot in the Gen8. &lt;/p&gt;
&lt;h3&gt;USB&lt;/h3&gt;
&lt;p&gt;The USB configuration is similar to the Gen8, with both USB2 and USB3 ports and one internal USB2 header on the motherboard. &lt;/p&gt;
&lt;p&gt;Sidenote: the onboard micro SD card slot as found in the Gen8 is &lt;em&gt;not&lt;/em&gt; present in the Gen10.&lt;/p&gt;
&lt;h3&gt;Graphics&lt;/h3&gt;
&lt;p&gt;The Gen10 also has a built-in GPU, but I have not looked into it as I have no use for it. &lt;/p&gt;
&lt;p&gt;The Gen10 differs in output options as compared to the Gen8: it supports one VGA and two DisplayPort connections. Those DisplayPort connectors could make the Gen10 an interesting DIY HTPC build, but I have not looked into it.&lt;/p&gt;
&lt;h3&gt;iLO&lt;/h3&gt;
&lt;p&gt;The Gen10 has &lt;strong&gt;no&lt;/strong&gt; support for iLO. So no remote management, unless you have an external KVM-over-IP solution.&lt;/p&gt;
&lt;p&gt;This is a downside, but for home users, this is probably not a big deal. My old Microserver N40L didn't have iLO and it never bothered me. &lt;/p&gt;
&lt;p&gt;And most of all: iLO is a small on-board mini-computer that increases idle power consumption. So the lack of iLO support should mean lower idle power consumption.&lt;/p&gt;
&lt;h3&gt;Boot&lt;/h3&gt;
&lt;p&gt;Both Legacy and UEFI boot are supported. I have not tried UEFI booting. &lt;/p&gt;
&lt;p&gt;Booting from the 5th internal SATA header is supported and works fine (unlike on the Gen8). &lt;/p&gt;
&lt;p&gt;For those who care: booting is a lot quicker than on the Gen8, which took ages to get through the BIOS.&lt;/p&gt;
&lt;h3&gt;Power Usage&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;I have updated this segment as I have used some incorrect information in the original article.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The Gen10 seems to consume &lt;em&gt;14 Watt at idle&lt;/em&gt;, booted into Centos 7 &lt;em&gt;without any disk drives attached&lt;/em&gt; (removed all drives after booting). This 14 Watt figure is reported by my external power meter.&lt;/p&gt;
&lt;p&gt;Adding a single old 7200 RPM 1 TB drive pushes power usage up to 21 Watt (as expected). &lt;/p&gt;
&lt;p&gt;With four older 7200 RPM drives the entire system uses about 43 Watt according to the external power meter. &lt;/p&gt;
&lt;p&gt;As an experiment, I've put two old 60 GB 2.5" laptop drives in the first two slots, configured as RAID1. Then I added two 1 TB 7200 RPM drives to fill up the remaining slots. This resulted in a power usage of 32 Watt.&lt;/p&gt;
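&lt;p&gt;As a quick sanity check, the implied idle power draw per spinning drive follows directly from the wattages reported above:&lt;/p&gt;

```python
idle_no_drives = 14   # Watt, booted into CentOS 7, no drives attached
one_drive = 21        # Watt, one 7200 RPM 1 TB drive added
four_drives = 43      # Watt, four older 7200 RPM drives

per_drive_single = one_drive - idle_no_drives        # 7 Watt for the single drive
per_drive_avg = (four_drives - idle_no_drives) / 4   # 7.25 Watt average per drive
print(per_drive_single, per_drive_avg)               # prints 7 7.25
```

&lt;p&gt;So roughly 7 Watt per 3.5" 7200 RPM drive at idle, which is consistent across both measurements.&lt;/p&gt;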
&lt;h3&gt;Dimensions and exterior&lt;/h3&gt;
&lt;p&gt;Exactly the same as the Gen8, they stack perfectly. &lt;/p&gt;
&lt;p&gt;The Gen8 had a front door protecting the drive bays, connected to the chassis with two hinges. HP has been cheap on the Gen10: when you open the door, it basically falls off, as there's no hinge. It's not a big issue; the overall build quality of the Gen10 is excellent.&lt;/p&gt;
&lt;p&gt;I have no objective measurements of noise levels, but the device seems almost silent to me.&lt;/p&gt;
&lt;h3&gt;Evaluation and conclusion&lt;/h3&gt;
&lt;p&gt;At first, I was a bit disappointed about the lack of iLO, but it turned out for the best. What makes the Gen10 so interesting is the idle power consumption. The lack of iLO support probably contributes to the improved idle power consumption.&lt;/p&gt;
&lt;p&gt;The Gen8 measures between 30 and 35 Watt idle power consumption, so the Gen10 does fare much better (~18 Watt). &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Firewall/Router&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;At this level of power consumption, the Gen10 could be a formidable router/firewall solution. The only real downside is its size compared to purpose-built firewalls/routers. The two network interfaces may provide sufficient connectivity, but if you need more ports and using VLANs is not enough, it's easy to add some extra ports. &lt;/p&gt;
&lt;p&gt;If an ancient N40L with its piss-poor processor can handle a 500 Mbit internet connection, this device will have no problems with it, I'd presume. Once I've taken this device into production as a replacement for my existing router/firewall, I will share my experience.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Storage / NAS&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The Gen8 and Gen10 both have four SATA drive bays and a fifth internal SATA header. From this perspective, nothing has changed. The reduced idle power consumption could make the Gen10 an even more attractive option for a DIY home grown NAS. &lt;/p&gt;
&lt;p&gt;All things considered, I think the Gen10 is a great device and I have not really encountered any downsides. If you have no problem putting a bit of effort into a DIY solution, the Gen10 is a great platform for a NAS or router/firewall that can compete with most purpose-built devices.&lt;/p&gt;
&lt;p&gt;I may update this article as I gain more experience with this device.&lt;/p&gt;</content><category term="Hardware"></category><category term="Storage"></category><category term="Networking"></category></entry><entry><title>Using InfiniBand for cheap and fast point-to-point Networking</title><link href="https://louwrentius.com/using-infiniband-for-cheap-and-fast-point-to-point-networking.html" rel="alternate"></link><published>2017-03-25T12:00:00+01:00</published><updated>2017-03-25T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2017-03-25:/using-infiniband-for-cheap-and-fast-point-to-point-networking.html</id><summary type="html">&lt;p&gt;InfiniBand networking is quite awesome. It's mainly used for two reasons:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;low latency&lt;/li&gt;
&lt;li&gt;high bandwidth&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;As a &lt;em&gt;home user&lt;/em&gt;, I'm mainly interested in setting up a high bandwidth link between two servers. &lt;/p&gt;
&lt;p&gt;I was using quad-port network cards with &lt;a href="https://louwrentius.com/achieving-450-mbs-network-file-transfers-using-linux-bonding.html"&gt;Linux Bonding&lt;/a&gt;, but this solution has some downsides:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;you can only …&lt;/li&gt;&lt;/ol&gt;</summary><content type="html">&lt;p&gt;InfiniBand networking is quite awesome. It's mainly used for two reasons:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;low latency&lt;/li&gt;
&lt;li&gt;high bandwidth&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;As a &lt;em&gt;home user&lt;/em&gt;, I'm mainly interested in setting up a high bandwidth link between two servers. &lt;/p&gt;
&lt;p&gt;I was using quad-port network cards with &lt;a href="https://louwrentius.com/achieving-450-mbs-network-file-transfers-using-linux-bonding.html"&gt;Linux Bonding&lt;/a&gt;, but this solution has some downsides:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;you can only go to 4 Gbit with Linux bonding (or you need more ports)&lt;/li&gt;
&lt;li&gt;you need a lot of cabling&lt;/li&gt;
&lt;li&gt;it is similar in price to InfiniBand&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;So I've decided to take a gamble on some InfiniBand gear. You only need InfiniBand PCIe network cards and a cable. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;1 x SFF-8470 CX4 cable                                              $16
2 x MELLANOX DUAL-PORT INFINIBAND HOST CHANNEL ADAPTER MHGA28-XTC   $25
                                                            Total:  $66
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;img alt="view of installed infiniband card and cable" src="https://louwrentius.com/images/nano/backside.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;I find $66 quite cheap for &lt;strong&gt;20 Gbit&lt;/strong&gt; networking. Regular 10 Gbit Ethernet networking is often still more expensive than using older InfiniBand cards.&lt;/p&gt;
&lt;p&gt;InfiniBand is similar to Ethernet in this regard: you can run your own protocol over it (for lower latency), but you can also use IP over InfiniBand. The InfiniBand card will just show up as a regular network device (one per port). &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ib0 Link encap:UNSPEC HWaddr 80-00-04-04-FE-80-00-00-00-00-00-00-00-00-00-00  
      inet addr:10.0.2.3  Bcast:10.0.2.255  Mask:255.255.255.0
      inet6 addr: fe80::202:c902:29:8e01/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:65520  Metric:1
      RX packets:7988691 errors:0 dropped:0 overruns:0 frame:0
      TX packets:17853128 errors:0 dropped:10 overruns:0 carrier:0
      collisions:0 txqueuelen:256 
      RX bytes:590717840 (563.3 MiB)  TX bytes:1074521257501 (1000.7 GiB)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Configuration&lt;/h2&gt;
&lt;p&gt;I've followed &lt;a href="https://pkg-ofed.alioth.debian.org/howto/infiniband-howto-4.html"&gt;these&lt;/a&gt; instructions to get IP over InfiniBand working.&lt;/p&gt;
&lt;h3&gt;Modules&lt;/h3&gt;
&lt;p&gt;First, you need to ensure that at a minimum the following modules are loaded:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ib_mthca
ib_ipoib
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I only had to add the ib_ipoib module to /etc/modules. As soon as this module is loaded, you will notice some ibX interfaces are available, which can be configured like regular Ethernet cards.&lt;/p&gt;
&lt;h3&gt;Subnet manager&lt;/h3&gt;
&lt;p&gt;In addition to loading the modules, you also need a &lt;em&gt;subnet manager&lt;/em&gt;. You just need to install it like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;apt-get install opensm
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This service needs to run on just one of the endpoints.&lt;/p&gt;
&lt;h3&gt;Link status&lt;/h3&gt;
&lt;p&gt;If you want, you can check the link status of your InfiniBand connection like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# ibstat
CA &amp;#39;mthca0&amp;#39;
    CA type: MT25208
    Number of ports: 2
    Firmware version: 5.3.0
    Hardware version: 20
    Node GUID: 0x0002c90200298e00
    System image GUID: 0x0002c90200298e03
    Port 1:
        State: Active
        Physical state: LinkUp
        Rate: 20
        Base lid: 1
        LMC: 0
        SM lid: 2
        Capability mask: 0x02510a68
        Port GUID: 0x0002c90200298e01
        Link layer: InfiniBand
    Port 2:
        State: Down
        Physical state: Polling
        Rate: 10
        Base lid: 0
        LMC: 0
        SM lid: 0
        Capability mask: 0x02510a68
        Port GUID: 0x0002c90200298e02
        Link layer: InfiniBand
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Set mode and MTU&lt;/h3&gt;
&lt;p&gt;Since my systems run Debian Linux, I've configured /etc/network/interfaces like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;auto ib0
iface ib0 inet static
    address 10.0.2.2
    netmask 255.255.255.0
    mtu 65520
    pre-up echo connected &amp;gt; /sys/class/net/ib0/mode
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Please take note of the 'mode' setting. The 'datagram' mode gave abysmal network performance (&amp;lt; Gigabit). The 'connected' mode made everything perform acceptably. &lt;/p&gt;
&lt;p&gt;The MTU setting of 65520 improved performance by another 30 percent.&lt;/p&gt;
&lt;h2&gt;Performance&lt;/h2&gt;
&lt;p&gt;I've tested the cards on two systems based on the Supermicro X9SCM-F motherboard.
Using these systems, I was able to achieve file transfer speeds of up to 750 MB (megabytes) per second, or about 6.5 Gbit as measured with iperf.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;~# iperf -c 10.0.2.2
------------------------------------------------------------
Client connecting to 10.0.2.2, TCP port 5001
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[  3] local 10.0.2.3 port 40098 connected with 10.0.2.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  7.49 GBytes  6.43 Gbits/sec
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Similar test with netcat and dd:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;~# dd if=/dev/zero bs=1M count=100000 | nc 10.0.2.2 1234
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 128.882 s, 814 MB/s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
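&lt;p&gt;The netcat result and the iperf result agree once you convert units (dd reports decimal megabytes):&lt;/p&gt;

```python
dd_mb_per_s = 814                    # MB/s as reported by dd above (10^6 bytes each)
gbit_per_s = dd_mb_per_s * 8 / 1000  # convert megabytes/s to gigabits/s
print(round(gbit_per_s, 2))          # prints 6.51, close to iperf's 6.43 Gbits/sec
```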

&lt;p&gt;Testing was done on Debian Jessie. &lt;/p&gt;
&lt;p&gt;During earlier testing, I've also used these cards in HP Proliant MicroServer Gen8 servers. On those servers, I was running Ubuntu 16.04 LTS. &lt;/p&gt;
&lt;p&gt;As tested on Ubuntu with the HP Microserver:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;------------------------------------------------------------
Client connecting to 10.0.4.3, TCP port 5001
TCP window size: 4.00 MByte (default)
------------------------------------------------------------
[  5] local 10.0.4.1 port 52572 connected with 10.0.4.3 port 5001
[  4] local 10.0.4.1 port 5001 connected with 10.0.4.3 port 44124
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-60.0 sec  71.9 GBytes  10.3 Gbits/sec
[  4]  0.0-60.0 sec  72.2 GBytes  10.3 Gbits/sec
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Using these systems, I was eventually able to achieve 15 Gbit as measured with iperf, although I have no 'console screenshot' of it. &lt;/p&gt;
&lt;h2&gt;Closing words&lt;/h2&gt;
&lt;p&gt;IP over InfiniBand seems to be a nice way to get high-performance networking on the cheap. The main downside is that when using IP over IB, CPU usage will be high. &lt;/p&gt;
&lt;p&gt;Another thing I have not researched, but could be of interest is running NFS or other protocols directly over InfiniBand using RDMA, so you would bypass the overhead of IP.&lt;/p&gt;</content><category term="Networking"></category></entry><entry><title>Tracking down a faulty Storage Array Controller with ZFS</title><link href="https://louwrentius.com/tracking-down-a-faulty-storage-array-controller-with-zfs.html" rel="alternate"></link><published>2016-12-15T12:00:00+01:00</published><updated>2016-12-15T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2016-12-15:/tracking-down-a-faulty-storage-array-controller-with-zfs.html</id><summary type="html">&lt;p&gt;One day, I lost two virtual machines on our DR environment after a storage vMotion.&lt;/p&gt;
&lt;p&gt;Further investigation uncovered that any storage vMotion of a virtual machine residing on our DR storage array would corrupt the virtual machine's disks.&lt;/p&gt;
&lt;p&gt;I could easily restore the affected virtual machines from backup and once …&lt;/p&gt;</summary><content type="html">&lt;p&gt;One day, I lost two virtual machines on our DR environment after a storage vMotion.&lt;/p&gt;
&lt;p&gt;Further investigation uncovered that any storage vMotion of a virtual machine residing on our DR storage array would corrupt the virtual machine's disks.&lt;/p&gt;
&lt;p&gt;I could easily restore the affected virtual machines from backup and once that was done, continued my investigation. &lt;/p&gt;
&lt;p&gt;I needed a way to quickly verify whether a virtual machine's virtual hard drive was corrupted after a storage vMotion, to understand what the pattern was. &lt;/p&gt;
&lt;p&gt;First, I created a virtual machine based on Linux and installed ZFS. Then, I attached a second disk of about 50 gigabytes and formatted this drive with ZFS. Once I filled the drive using 'dd' to about 40 gigabytes I was ready to test.&lt;/p&gt;
&lt;p&gt;ZFS was chosen for testing purposes because it stores hashes of all blocks of data. This makes it very simple to quickly detect any data corruption. If the hash doesn't match the hash generated from the data, you just detected corruption. &lt;/p&gt;
&lt;p&gt;Other file systems don't store hashes and don't check for data corruption so they just trust the storage layer. It may take a while before you find out that data is corrupted. &lt;/p&gt;
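&lt;p&gt;The principle is easy to sketch in a few lines of Python. This is just an illustration of per-block checksumming, not ZFS's actual on-disk format or checksum algorithm:&lt;/p&gt;

```python
import hashlib

BLOCKSIZE = 4096

def store(data):
    # Split the data into blocks and keep a checksum per block,
    # the same idea ZFS uses to validate everything it reads back
    blocks = [bytearray(data[i:i + BLOCKSIZE]) for i in range(0, len(data), BLOCKSIZE)]
    sums = [hashlib.sha256(b).digest() for b in blocks]
    return blocks, sums

def scrub(blocks, sums):
    # Re-read every block and report the ones whose checksum no longer matches
    return [i for i, (b, s) in enumerate(zip(blocks, sums))
            if hashlib.sha256(b).digest() != s]

blocks, sums = store(b"x" * 20000)
print(scrub(blocks, sums))   # prints [] : no corruption
blocks[2][0] ^= 0xFF         # a faulty storage layer silently flips bits
print(scrub(blocks, sums))   # prints [2] : the scrub pinpoints the bad block
```

&lt;p&gt;A file system without checksums would happily return the flipped bits as valid data.&lt;/p&gt;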
&lt;p&gt;I performed a storage vMotion of this secondary disk towards different datastores and then ran a 'zfs scrub' to track down any corruption. This worked better than expected: the scrub command would hang if the drive had been corrupted by the storage vMotion. The test virtual machine then required a reboot and a reformat of the secondary hard drive with ZFS, as the previous file system, including its data, had been corrupted.&lt;/p&gt;
&lt;p&gt;After performing storage vMotions of the drive in different directions, from different datastores to other datastores, a pattern slowly emerged.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Storage vMotion corruption happened independent of the VMware ESXi host used.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;a Storage vMotion never caused any issues when the disk was residing on our production storage array.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;the corruption only happened when the virtual machine was stored on particular datastores on our DR storage array.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Now it got really 'interesting'. The thing is that our DR storage array has two separate storage controllers running in active-active mode. However, each LUN is always owned by a particular controller. Although the other controller can take over from the controller that 'owns' the LUNs in case of a failure, the owner processes the I/O when everything is fine. Particular LUNs are thus handled by a particular controller. &lt;/p&gt;
&lt;p&gt;So first I made a table listing the controllers and the LUNs each had ownership over, like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;            Owner       
Controller      a               b
            LUN001          LUN002
            LUN003          LUN004
            LUN005          LUN006
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Then I started to perform storage vMotions of the ZFS disk from one LUN to the other. After performing several tests, the pattern became quite obvious.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;            LUN001  -&amp;gt;  LUN002  =   BAD
            LUN001  -&amp;gt;  LUN004  =   BAD
            LUN004  -&amp;gt;  LUN003  =   BAD
            LUN003  -&amp;gt;  LUN005  =   GOOD
            LUN005  -&amp;gt;  LUN001  =   GOOD
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I continued to test some additional permutations but it became clear that only LUNs owned by controller b caused problems. &lt;/p&gt;
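&lt;p&gt;The deduction can be written down as a tiny script. The LUN-to-controller mapping is the table above; the hypothesis is that corruption occurs whenever controller b handles either end of the move:&lt;/p&gt;

```python
owner = {"LUN001": "a", "LUN003": "a", "LUN005": "a",
         "LUN002": "b", "LUN004": "b", "LUN006": "b"}

vmotions = [("LUN001", "LUN002", "BAD"),
            ("LUN001", "LUN004", "BAD"),
            ("LUN004", "LUN003", "BAD"),
            ("LUN003", "LUN005", "GOOD"),
            ("LUN005", "LUN001", "GOOD")]

for src, dst, result in vmotions:
    predicted = "BAD" if "b" in (owner[src], owner[dst]) else "GOOD"
    assert predicted == result  # the hypothesis matches every observed move

print("controller b is implicated in every corrupted vMotion")
```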
&lt;p&gt;With the evidence in hand, I managed to convince our vendor support to replace storage controller b and that indeed resolved the problem. Data corruption due to a Storage vMotion never occurred after the controller was replaced. &lt;/p&gt;
&lt;p&gt;There is no need to name/shame the vendor in this regard. The thing is that all equipment can fail and what can happen will happen. What really counts is: are you prepared? &lt;/p&gt;</content><category term="Storage"></category><category term="ZFS"></category></entry><entry><title>RAID 5 is perfectly fine for home usage</title><link href="https://louwrentius.com/raid-5-is-perfectly-fine-for-home-usage.html" rel="alternate"></link><published>2016-09-08T12:00:00+02:00</published><updated>2016-09-08T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2016-09-08:/raid-5-is-perfectly-fine-for-home-usage.html</id><summary type="html">&lt;p&gt;RAID 5 gets a lot of flak these days. You either run RAID 1, RAID 10 or you use RAID 6, but if you run RAID 5 you're told that you are a crazy person.&lt;/p&gt;
&lt;p&gt;Using RAID 5 is portrayed as an unreasonable risk to the availability of your data …&lt;/p&gt;</summary><content type="html">&lt;p&gt;RAID 5 gets a lot of flak these days. You either run RAID 1, RAID 10 or you use RAID 6, but if you run RAID 5 you're told that you are a crazy person.&lt;/p&gt;
&lt;p&gt;Using RAID 5 is portrayed as an unreasonable risk to the availability of your data. It is suggested that it is likely that you will lose your RAID array at some point. &lt;/p&gt;
&lt;p&gt;That's an unfair representation of the actual risk that surrounds RAID 5. As I see it, the scare about RAID 5 is totally blown out of proportion. &lt;/p&gt;
&lt;p&gt;I would argue that for small RAID arrays with a maximum of five to six drives, it's totally reasonable to use RAID 5 for your home NAS.&lt;/p&gt;
&lt;p&gt;As far as I can tell, the campaign against RAID 5 mainly started with &lt;a href="http://www.zdnet.com/article/why-raid-5-stops-working-in-2009/"&gt;this article from zdnet&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;As you know, RAID 5 can tolerate a single drive failure. If a second drive dies before the first drive has been replaced and rebuilt, you lose all contents of the array. &lt;/p&gt;
&lt;p&gt;In the article the author argues that because drives become bigger but not more reliable, the risk of losing a second drive during a rebuild is so high that running RAID 5 is becoming risky. &lt;/p&gt;
&lt;p&gt;You don't need a second drive failure for you to lose your data. A bad sector, also known as an Unrecoverable Read Error (URE), can also cause problems during a rebuild. Depending on the RAID implementation, you may lose some files or the entire array. &lt;/p&gt;
&lt;p&gt;The author calculates and argues that the risk of such a bad sector or URE is so high with modern high-capacity drives, that this risk of a second drive failure during rebuild is almost unavoidable. &lt;/p&gt;
&lt;p&gt;Most drives have a URE specification of 1 bit error per 10^14 bits read, which works out to one error in 12.5 TB of data. That number is often treated as an absolute, as if drives actually experience an error that often in daily use, but that's not true.&lt;/p&gt;
&lt;p&gt;It's a worst-case number. You will see a read error in &lt;em&gt;at most&lt;/em&gt; 10^14 bits, but in practice drives are way more reliable. &lt;/p&gt;
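&lt;p&gt;To put a number on the worst case, here is a quick calculation for a hypothetical 4-drive RAID 5 array of 2 TB drives, taking the 10^14 specification at face value:&lt;/p&gt;

```python
import math

ure_rate = 1e-14          # spec: at most 1 unrecoverable read error per 10^14 bits
drive_bytes = 2e12        # hypothetical 2 TB drives
surviving_drives = 3      # a rebuild must read every surviving drive in full

rebuild_bits = surviving_drives * drive_bytes * 8
expected_ures = rebuild_bits * ure_rate        # 0.48 expected errors per rebuild

# Poisson approximation of (1 - ure_rate) ** rebuild_bits
p_clean_rebuild = math.exp(-expected_ures)
print(round(p_clean_rebuild, 3))               # prints 0.619
```

&lt;p&gt;So even if the spec were the real-world error rate, a small array would still complete the majority of rebuilds cleanly; and in practice drives do far better than the spec.&lt;/p&gt;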
&lt;p&gt;I run ZFS on my &lt;a href="https://louwrentius.com/74tb-diy-nas-based-on-zfs-on-linux.html"&gt;71 TB ZFS NAS&lt;/a&gt; and I scrub from time to time. &lt;/p&gt;
&lt;p&gt;If that worst-case number were 'real', I would have caught some data errors by now. However, in line with my personal experience, ZFS hasn't corrected a single byte since the system came online a few years ago. &lt;/p&gt;
&lt;p&gt;And I've performed so many scrubs that my system has read over a &lt;em&gt;petabyte&lt;/em&gt; of data. No silent data corruption, no regular bad sectors.&lt;/p&gt;
&lt;p&gt;It seems to me that all those risks aren't nearly as high as they are made out to be.&lt;/p&gt;
&lt;p&gt;I would argue that choosing RAID-5/Z in the right circumstances is reasonable.
RAID-6 is clearly safer than RAID-5 as you can survive the loss of two drives instead of a single drive, but that doesn't mean that RAID-5 is unsafe.&lt;/p&gt;
&lt;p&gt;If you are going to run a RAID 5 array, make sure you run a scrub, patrol read, or whatever the name is that your RAID solution uses. A scrub is nothing more than an attempt to read all data from disk.&lt;/p&gt;
&lt;p&gt;Scrubbing allows detection of bad sectors in advance, so you can replace drives before they cause real problems (like failing during a rebuild). &lt;/p&gt;
&lt;p&gt;If you keep the number of drives in a RAID-5 array low, maybe at most 5 or 6, I think for home users, who need to find a balance between cost and capacity, RAID-5 is an acceptable option.&lt;/p&gt;
&lt;p&gt;And remember: if you care about your data, you need a backup anyway.&lt;/p&gt;
&lt;p&gt;This topic was also discussed on &lt;a href="https://www.reddit.com/r/DataHoarder/comments/515l3t/the_hate_raid5_gets_is_uncalled_for/"&gt;reddit&lt;/a&gt;.&lt;/p&gt;</content><category term="Storage"></category><category term="RAID"></category></entry><entry><title>ZFS: resilver performance of various RAID schemas</title><link href="https://louwrentius.com/zfs-resilver-performance-of-various-raid-schemas.html" rel="alternate"></link><published>2016-01-31T12:00:00+01:00</published><updated>2016-01-31T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2016-01-31:/zfs-resilver-performance-of-various-raid-schemas.html</id><summary type="html">&lt;p&gt;When building your own &lt;em&gt;DIY home&lt;/em&gt; NAS, it is important that you simulate and test drive failures before you put your important data on it. It makes sense to know what to do in case a drive needs to be replaced. I also recommend putting a substantial amount of data …&lt;/p&gt;</summary><content type="html">&lt;p&gt;When building your own &lt;em&gt;DIY home&lt;/em&gt; NAS, it is important that you simulate and test drive failures before you put your important data on it. It makes sense to know what to do in case a drive needs to be replaced. I also recommend putting a substantial amount of data on your NAS and see how long a resilver takes just so you know what to expect. &lt;/p&gt;
&lt;p&gt;There are many reports of people building their own (ZFS-based) NAS who found out after a drive failure that resilvering would take days. If the chosen redundancy level for the VDEV does not protect against a second drive failure in the same VDEV (mirror, RAID-Z), things may get scary. Especially because the drives are quite busy rebuilding data, and the extra load on the remaining drives may increase the risk of a second failure.&lt;/p&gt;
&lt;p&gt;The chosen RAID level for your VDEV has an impact on resilver performance.
You may choose to accept lower resilver performance in exchange for additional redundancy (RAID-Z2, RAID-Z3).&lt;/p&gt;
&lt;p&gt;I did wonder though how much those resilver times would differ between the various RAID levels. This is why I decided to run some tests to get some numbers.&lt;/p&gt;
&lt;h3&gt;Test hardware&lt;/h3&gt;
&lt;p&gt;I've used some &lt;a href="https://louwrentius.com/20-disk-18-tb-raid-6-storage-based-on-debian-linux.html"&gt;test equipment&lt;/a&gt; running Debian Jessie + ZFS on Linux. The hardware is rather old and the CPU may have an impact on the results. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;CPU : Intel(R) Core(TM)2 Duo CPU     E7400  @ 2.80GHz
RAM : 8 GB
HBA : HighPoint RocketRaid 2340 (each drive in a jbod)
Disk: Samsung Spinpoint F1 - 1 TB - 7200 RPM ( 12 x )
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Test method&lt;/h3&gt;
&lt;p&gt;I've created a &lt;a href="https://github.com/louwrentius/zfs-resilver-benchmark"&gt;script&lt;/a&gt; that runs all tests automatically. This is how the script works:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create pool + vdev(s).&lt;/li&gt;
&lt;li&gt;Write data on pool ( XX % of pool capacity)&lt;/li&gt;
&lt;li&gt;Replace arbitrary drive with another one.&lt;/li&gt;
&lt;li&gt;Wait for resilver to complete.&lt;/li&gt;
&lt;li&gt;Log resilver duration to a CSV file. &lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;For each test, I fill the pool up to 25% with data before I measure resilver performance. &lt;/p&gt;
&lt;h3&gt;Caveats&lt;/h3&gt;
&lt;p&gt;The problem with the pool only being filled to 25% is that drives are fast at the start, but their performance deteriorates significantly as they fill up. This means that you cannot extrapolate the results to calculate resilver times for 50% or 75% pool usage; the real numbers are likely worse than that. &lt;/p&gt;
&lt;p&gt;I should run the test again with 50% usage to see if we can demonstrate this effect. &lt;/p&gt;
&lt;p&gt;&lt;em&gt;Beware&lt;/em&gt; that this test method is probably only suitable for DIY home NAS builds. Production file systems used within businesses may be way more fragmented and I've been told that this could slow down resilver times dramatically.&lt;/p&gt;
&lt;h3&gt;Test result (lower is better)&lt;/h3&gt;
&lt;p&gt;&lt;img alt="resilver graph" src="https://louwrentius.com/static/images/zfs-resilver-benchmark01.png" /&gt;&lt;/p&gt;
&lt;p&gt;The results can only be used to demonstrate the relative resilver performance differences of the various RAID levels and disk counts per VDEV. &lt;/p&gt;
&lt;p&gt;You should not expect the same performance results for your own NAS as the hardware probably differs significantly from my test setup.&lt;/p&gt;
&lt;h3&gt;Observations&lt;/h3&gt;
&lt;p&gt;I think the following observations can be made: &lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Mirrors resilver the fastest even if the number of drives involved is increased.&lt;/li&gt;
&lt;li&gt;RAID-Z resilver performance is on par with using mirrors when using 5 disks or fewer.&lt;/li&gt;
&lt;li&gt;RAID-Zx resilver performance deteriorates as the number of drives in a VDEV increases.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I find it interesting that with a smaller number of drives in a RAID-Z VDEV, rebuild performance is roughly on par with a mirror setup. If long rebuild times would scare you away from using RAID-Z, maybe they should not. There may be other reasons why you might shy away from RAID-Z, but this doesn't seem to be one of them.&lt;/p&gt;
&lt;p&gt;RAID-Z2 is very popular amongst home NAS builders, as it offers a very nice balance between capacity and redundancy. Wider RAID-Z2 VDEVs are more space efficient, but it is also clear that resilver operations take longer. Because RAID-Z2 can tolerate the loss of two drives, I think longer resilver times are a reasonable tradeoff. &lt;/p&gt;
&lt;p&gt;It is clear that as you put more disks in a single RAID-Zx VDEV, rebuild times increase. This can be used as an argument to keep the number of drives per VDEV 'reasonable' or to switch to RAID-Z3. &lt;/p&gt;
&lt;h4&gt;25% vs 50% pool usage&lt;/h4&gt;
&lt;p&gt;To me, there's nothing special to see here. The resilver times are on average slightly worse than double the 25% resilver durations. Disk performance deteriorates as drives fill up (inner tracks are shorter/slower), so sequential performance drops. That would explain why the results are slightly worse than perfect linear scaling. &lt;/p&gt;
&lt;h4&gt;Final words&lt;/h4&gt;
&lt;p&gt;I hope this benchmark is of interest to anyone and more importantly, you can run your own by using the aforementioned &lt;a href="https://github.com/louwrentius/zfs-resilver-benchmark"&gt;script&lt;/a&gt;. If you ever want to run your own benchmarks, expect the script to run for days. Leave a comment if you have questions or remarks about these test results or the way testing is done. &lt;/p&gt;</content><category term="Storage"></category><category term="ZFS"></category></entry><entry><title>The 'hidden' cost of using ZFS for your home NAS</title><link href="https://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html" rel="alternate"></link><published>2016-01-02T12:00:00+01:00</published><updated>2016-01-02T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2016-01-02:/the-hidden-cost-of-using-zfs-for-your-home-nas.html</id><summary type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Update December 2023:&lt;/strong&gt;
In June, it was &lt;a href="https://github.com/openzfs/zfs/pull/12225#issuecomment-1610169213"&gt;announced&lt;/a&gt; that iXsystems would sponsor implementing the VDEV expansion feature. A new &lt;a href="https://github.com/openzfs/zfs/pull/15022"&gt;pr&lt;/a&gt; has been created for this effort. The feature was merged into the code base, but may not be available to the general public before the &lt;a href="https://github.com/openzfs/zfs/pull/15022#issuecomment-1802428899"&gt;end of 2024&lt;/a&gt;.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Many …&lt;/p&gt;</summary><content type="html">&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Update December 2023:&lt;/strong&gt;
In June, it was &lt;a href="https://github.com/openzfs/zfs/pull/12225#issuecomment-1610169213"&gt;announced&lt;/a&gt; that iXsystems would sponsor implementing the VDEV expansion feature. A new &lt;a href="https://github.com/openzfs/zfs/pull/15022"&gt;pr&lt;/a&gt; has been created for this effort. The feature was merged into the code base, but may not be available to the general public before the &lt;a href="https://github.com/openzfs/zfs/pull/15022#issuecomment-1802428899"&gt;end of 2024&lt;/a&gt;.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Many home NAS builders consider using ZFS for their file system. But there is a caveat with ZFS that people should be aware of.&lt;/p&gt;
&lt;p&gt;Although ZFS is free software, implementing ZFS is not free. The key issue is that expanding capacity with ZFS is more expensive than with legacy RAID solutions.&lt;/p&gt;
&lt;p&gt;With ZFS, you either have to buy all the storage you expect to need upfront, or you will waste a few hard drives on redundancy you don't need.&lt;/p&gt;
&lt;p&gt;This fact is often overlooked, but it's very important to take it into consideration when planning a NAS build.&lt;/p&gt;
&lt;p&gt;Other software RAID solutions, like Linux MDADM, let you grow an existing RAID array one disk at a time. This is also true for many hardware-based RAID solutions&lt;sup id="fnref:dead"&gt;&lt;a class="footnote-ref" href="#fn:dead"&gt;1&lt;/a&gt;&lt;/sup&gt;. This is ideal for home users because you can expand on a per-need basis. &lt;/p&gt;
&lt;p&gt;ZFS does &lt;strong&gt;not&lt;/strong&gt; allow this!&lt;/p&gt;
&lt;p&gt;To understand why using ZFS may cost you extra money, we will dig a little bit into ZFS itself.&lt;/p&gt;
&lt;h2&gt;Quick recap of ZFS&lt;/h2&gt;
&lt;p&gt;The diagram below illustrates the architecture of ZFS. There are a few things you should take away from it. &lt;/p&gt;
&lt;p&gt;&lt;img alt="zfs" src="https://louwrentius.com/static/images/zfs-overview.png" /&gt;&lt;/p&gt;
&lt;p&gt;The main takeaway of this picture is that your ZFS pool and thus your file system is based on one or more VDEVs. And those VDEVs contain the actual hard drives.&lt;/p&gt;
&lt;p&gt;Fault-tolerance or redundancy is addressed within a VDEV. A VDEV is either a mirror (RAID-1),  RAIDZ (RAID-5) or RAIDZ2 (RAID-6)&lt;sup id="fnref:z3"&gt;&lt;a class="footnote-ref" href="#fn:z3"&gt;2&lt;/a&gt;&lt;/sup&gt;. &lt;/p&gt;
&lt;p&gt;So it's important to understand that a ZFS &lt;em&gt;pool&lt;/em&gt; itself is &lt;em&gt;not fault-tolerant&lt;/em&gt;. If you lose a single VDEV within a pool, you lose the whole pool, and with it all your data.&lt;/p&gt;
&lt;h2&gt;You can't add hard drives to a VDEV&lt;/h2&gt;
&lt;p&gt;Now it's very important to understand that you &lt;em&gt;cannot add hard drives to a VDEV&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;This is the key limitation of ZFS as seen from the perspective of home NAS builders. &lt;/p&gt;
&lt;p&gt;To expand the storage capacity of your pool, you need to add extra VDEVs. And because each VDEV needs to take care of its own redundancy, you also need to buy extra drives for parity.&lt;/p&gt;
&lt;p&gt;I will quickly add that there is a way out: replace every hard drive in the VDEV, one by one, with a higher capacity hard drive. You will have to 'rebuild' or 'resilver' the VDEV after each replacement, but it will work, although it's a bit cumbersome and quite expensive.&lt;/p&gt;
&lt;p&gt;So back to the topic at hand: what does this limitation mean in real life? I'll give an example. &lt;/p&gt;
&lt;p&gt;Let's say you plan on building a &lt;a href="https://louwrentius.com/zfs-performance-on-hp-proliant-microserver-gen8-g1610t.html"&gt;small NAS with a capacity of four drives&lt;/a&gt;. Please don't create a three-drive RAID-Z thinking you can just add the fourth drive when you need to, because that's &lt;em&gt;not&lt;/em&gt; possible.&lt;/p&gt;
&lt;p&gt;In this example, you would be better off buying the fourth drive upfront and create a four-drive RAID-Z. This is an example where you are forced to buy the extra space you don't need yet upfront because expanding is otherwise not possible.&lt;/p&gt;
&lt;p&gt;You could have expanded your pool with another VDEV consisting of a minimum of three drives (if you run RAID-Z), but the chassis only has room for one extra drive, so that doesn't work.&lt;/p&gt;
&lt;h2&gt;Planning your ZFS Build with the VDEV limitation in mind&lt;/h2&gt;
&lt;p&gt;Many home NAS builders use RAID-6 (RAID-Z2) for their builds, because of the extra redundancy. This makes sense because a double drive failure is not something unheard of, especially during rebuilds where all drives are being taxed quite heavily for many hours. &lt;/p&gt;
&lt;p&gt;I personally would recommend running RAID-Z2 over RAID-Z1 if you go over five to six drives, and spending the extra money on the additional hard drive it requires. Actually, with RAID-Z2 or RAID-6, I think it's perfectly reasonable to run a single VDEV at home with up to 12 drives&lt;sup id="fnref:san"&gt;&lt;a class="footnote-ref" href="#fn:san"&gt;3&lt;/a&gt;&lt;/sup&gt;. &lt;/p&gt;
&lt;p&gt;With RAID-Z2 however, the 'ZFS tax' is even more clearly visible. By having to add an additional VDEV, you will also lose two drives due to parity overhead.&lt;/p&gt;
&lt;p&gt;&lt;img alt="zfs2" src="https://louwrentius.com/static/images/zfs-2vdev.png" /&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Please note that the 'yellow' drives mark the parity/redundancy overhead. It does not mark where parity data lives (it's striped across all drives).&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Let's illustrate the above picture with an example. Your NAS chassis can hold a maximum of twelve drives. You start out with six drives in a RAID-Z2. At some point you want to expand. The cheapest option is to expand with another RAID-Z2 consisting of four drives (minimum size of a RAID-Z2 VDEV). &lt;/p&gt;
&lt;p&gt;With a cost of $150 per hard drive&lt;sup id="fnref:example"&gt;&lt;a class="footnote-ref" href="#fn:example"&gt;5&lt;/a&gt;&lt;/sup&gt;, expanding the capacity of your pool will cost you $600 instead of $150 (single drive), and $300 of that $600 (50%) is wasted on redundancy you don't really need.&lt;/p&gt;
&lt;p&gt;Furthermore, you can no longer expand your pool, so the remaining two drive slots are 'wasted'&lt;sup id="fnref:no"&gt;&lt;a class="footnote-ref" href="#fn:no"&gt;4&lt;/a&gt;&lt;/sup&gt;. You end up with a maximum of ten drives.&lt;/p&gt;
&lt;p&gt;In this example, to make use of the drive capacity of your NAS chassis, you should expand with another six hard drives. That would cost you $900 and $300 of that $900 (33%) is wasted on redundancy. This is illustrated above. &lt;/p&gt;
&lt;p&gt;Storage-wise it's more efficient to expand with six drives instead of four. But it will cost you another $300 to expand, paying for storage you may not immediately need. &lt;/p&gt;
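&lt;p&gt;As a sanity check on the arithmetic above (the $150 price is just this article's example figure), the numbers can be reproduced in a few lines of shell:&lt;/p&gt;

```shell
drive_cost=150

# Option 1: expand with a minimal four-drive RAID-Z2 vdev (2 parity drives)
echo $(( 4 * drive_cost ))        # total cost: 600
echo $(( 2 * drive_cost ))        # spent on parity: 300
echo $(( 2 * 100 / 4 ))           # parity share: 50 percent

# Option 2: expand with a six-drive RAID-Z2 vdev (2 parity drives)
echo $(( 6 * drive_cost ))        # total cost: 900
echo $(( 2 * 100 / 6 ))           # parity share: 33 percent
```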
&lt;p&gt;But both options aren't that efficient, because you end up using four drives for parity where two would, in my view, be sufficient. &lt;/p&gt;
&lt;p&gt;So, if you want to get the most capacity out of that chassis, and the most space per dollar, your only option is to buy all twelve drives upfront and create a single RAID-Z2 consisting of twelve drives. &lt;/p&gt;
&lt;p&gt;&lt;img alt="zfs1" src="https://louwrentius.com/static/images/zfs-1vdev.png" /&gt;&lt;/p&gt;
&lt;p&gt;Buying all drives upfront is expensive and you may only benefit from that extra space years down the road. &lt;/p&gt;
&lt;h2&gt;Summary&lt;/h2&gt;
&lt;p&gt;So I hope this example clearly illustrates the issue at hand. With ZFS, you either need to buy all storage upfront or you will lose hard drives to redundancy you don't need, reducing the maximum storage capacity of your NAS.&lt;/p&gt;
&lt;p&gt;You have to decide what your needs are. ZFS is an awesome file system that offers way better data integrity protection than other file system + RAID solution combinations. &lt;/p&gt;
&lt;p&gt;But implementing ZFS has a certain 'cost'. You must decide if ZFS is worth it for you.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update April 2023:&lt;/strong&gt;
It has been fairly quiet since the announcement of RAIDZ expansion.
The Github &lt;a href="https://github.com/openzfs/zfs/pull/12225"&gt;PR&lt;/a&gt; about this feature is rather stale and people are wondering what the status is and what the plans are. Meanwhile, FreeBSD announced in February 2023 that they expect to integrate RAIDZ expansion by Q3. &lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Update June 2021&lt;/strong&gt; |
It seems that &lt;a href="https://github.com/openzfs/zfs/pull/12225"&gt;RAIDZ expansion is now being worked on&lt;/a&gt;. It will probably be available somewhere around &lt;a href="https://arstechnica.com/gadgets/2021/06/raidz-expansion-code-lands-in-openzfs-master/"&gt;August 2022&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;I have written a &lt;a href="https://louwrentius.com/zfs-raidz-expansion-is-awesome-but-has-a-small-caveat.html"&gt;blogpost&lt;/a&gt; about this new feature. The bad news is that adding drives to an existing vdev may incur some overhead, but the good news is that this overhead can be recovered. &lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Update October 2017&lt;/strong&gt; |
Please note that RAIDZ expansion is &lt;a href="https://twitter.com/OpenZFS/status/921042446275944448?s=09"&gt;under development&lt;/a&gt;.&lt;/p&gt;
&lt;blockquote class="twitter-tweet" data-lang="en"&gt;&lt;p lang="en" dir="ltr"&gt;RAIDZ expansion (most requested ZFS feature ever?) is coming, courtesy of &lt;a href="https://twitter.com/freebsdfndation?ref_src=twsrc%5Etfw"&gt;@freebsdfndation&lt;/a&gt;. Sneak preview at OpenZFS DevSummit!&lt;/p&gt;&amp;mdash; OpenZFS (@OpenZFS) &lt;a href="https://twitter.com/OpenZFS/status/921042446275944448?ref_src=twsrc%5Etfw"&gt;October 19, 2017&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async src="https://platform.twitter.com/widgets.js" charset="utf-8"&gt;&lt;/script&gt;

&lt;hr&gt;

&lt;h2&gt;Addressing some feedback&lt;/h2&gt;
&lt;p&gt;I found out that my article was &lt;a href="https://youtu.be/B_OEUfOmU8w?t=11m55s"&gt;discussed on a vodcast of BSDNOW&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This article also got some attention on &lt;a href="https://news.ycombinator.com/item?id=10886068"&gt;hacker news&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;To me, some of the feedback is not 'wrong' but feels rather disingenuous or not relevant for the intended audience of this article. I have provided the links so you can make up your own mind.&lt;/p&gt;
&lt;p&gt;This article has a particular user group in mind so you really should think about how much their needs align with yours.&lt;/p&gt;
&lt;h2&gt;You are steering people away from ZFS&lt;/h2&gt;
&lt;p&gt;No, I don't, and that is not my intention. I run ZFS myself on two servers. I do feel that the downsides of ZFS are sometimes swept under the rug, and we should be very open and clear about them towards people seeking advice.&lt;/p&gt;
&lt;h3&gt;Use mirrors not RAID-Z(2/3)!&lt;/h3&gt;
&lt;p&gt;Doesn't make much sense to me for home NAS builders.&lt;/p&gt;
&lt;h4&gt;Using mirrors is wasting space&lt;/h4&gt;
&lt;p&gt;I do find advising people to use mirrors instead of RAID-Z(2/3) a little bit disingenuous, because you are throwing away 50% of your disk capacity. With RAIDZ you 'lose' 33% for three drives and 25% for four drives. With RAIDZ2, you 'lose' 33% for six drives, 25% for eight drives and only 20% for ten drives. &lt;/p&gt;
&lt;p&gt;In the end, with mirrors you are wasting multiple drives' worth of storage capacity, depending on the number of drives in your pool. &lt;/p&gt;
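&lt;p&gt;The percentages above follow from a simple ratio: parity drives divided by total drives. A throwaway shell helper (the function name is mine, not an existing tool) reproduces them with integer math:&lt;/p&gt;

```shell
# Percentage of raw capacity lost to redundancy for a vdev of
# $1 drives total, of which $2 drives' worth goes to redundancy.
overhead_pct() { echo $(( $2 * 100 / $1 )); }

overhead_pct 2 1    # mirror: 50
overhead_pct 3 1    # 3-drive RAID-Z: 33
overhead_pct 4 1    # 4-drive RAID-Z: 25
overhead_pct 6 2    # 6-drive RAID-Z2: 33
overhead_pct 8 2    # 8-drive RAID-Z2: 25
overhead_pct 10 2   # 10-drive RAID-Z2: 20
```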
&lt;h4&gt;Adding mirrors with larger drives&lt;/h4&gt;
&lt;p&gt;As time goes by, larger disks become cheaper. So it could make sense to expand your pool with mirrors based on bigger drives than the original drives you started out on. The size of your pool would increase. However, it's still only 50% space efficient. &lt;/p&gt;
&lt;h4&gt;Random I/O performance is better&lt;/h4&gt;
&lt;p&gt;Using mirrors amounts to running RAID 10. Yes, you can expand your pool two drives at a time, and you gain better random I/O performance. However, the large majority of home NAS builders don't care about random I/O performance. You just care whether you can saturate gigabit Ethernet and have one big pool of storage. In that case, you don't need the random IOPS. &lt;/p&gt;
&lt;p&gt;If you run some VMs from your storage that require high storage performance, it's an entirely different matter. But I expect that most DIY NAS builders just want some storage to put a ton of data on and nothing more.&lt;/p&gt;
&lt;h4&gt;RAIDZ2 is more reliable than using mirrors&lt;/h4&gt;
&lt;p&gt;The redundancy of RAIDZ2 beats using mirrors. If, during a rebuild, the surviving member of a mirror fails (the one disk in the pool that is taxed the most during the rebuild), you lose your pool. With RAIDZ2, any second drive can fail and you are still OK.&lt;/p&gt;
&lt;p&gt;There is only one 'upside' regarding mirrors that is discussed in the next section.&lt;/p&gt;
&lt;h4&gt;Mirror rebuild times are better&lt;/h4&gt;
&lt;p&gt;The only upside of using mirrors is that when a failed disk is replaced and the new disk is being 'resilvered', those rebuilds are reported to be faster than with RAID-Z(2/3). I think this is no different from legacy RAID; the main difference with ZFS is that ZFS only rebuilds actual data, not the entire disk.&lt;/p&gt;
&lt;h3&gt;ZFS rebuilds are faster&lt;/h3&gt;
&lt;p&gt;This is indeed a benefit of ZFS. The question is how relevant it is for you.&lt;/p&gt;
&lt;p&gt;ZFS only rebuilds data. Legacy RAID just rebuilds every 'bit' on a drive. The latter takes longer than the former. So with legacy RAID, rebuild times depend on the size of a single drive, not on the number of drives in the array, no matter how much data you have stored on your array.&lt;/p&gt;
&lt;p&gt;My old 18 TB server was based on a single twenty-drive RAID 6 using MDADM. It took 5 hours to rebuild a 1 TB drive. Extrapolating, 4 TB drives would have taken 20 hours. With ZFS, if the pool had been only 50% full, those rebuild times would have been cut in half.  &lt;/p&gt;
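&lt;p&gt;That extrapolation written out explicitly, with the 5 hours per terabyte taken from my measured MDADM rebuild:&lt;/p&gt;

```shell
hours_per_tb=5                     # measured: 5 hours to rebuild a 1 TB drive
echo $(( 4 * hours_per_tb ))       # 4 TB drive, full-disk rebuild: 20 hours
echo $(( 4 * hours_per_tb / 2 ))   # ZFS resilver with pool 50% full: 10 hours
```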
&lt;p&gt;Personally, with RAID6 or RAIDZ2, rebuild times aren't that big of a deal, as you can lose a second drive and still be safe.&lt;/p&gt;
&lt;h3&gt;Just replace existing drives with bigger ones!&lt;/h3&gt;
&lt;p&gt;I did briefly touch on this option in the article above, but I will address it again. The problem with this approach is twofold. First, you can't expand storage capacity as you need it: you need to replace &lt;em&gt;all&lt;/em&gt; existing drives with larger ones. &lt;/p&gt;
&lt;p&gt;The procedure itself is also a bit cumbersome and time-intensive. You need to replace each drive one by one, and every time, you need to 'resilver' your VDEV. Only when all drives have been replaced will you be able to grow the size of your pool.&lt;/p&gt;
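&lt;p&gt;For reference, the drive-swap route looks roughly like this (pool and device names are placeholders; always wait for each resilver to finish before replacing the next drive):&lt;/p&gt;

```shell
zpool set autoexpand=on tank           # let the pool grow once all drives are bigger
zpool replace tank old-disk1 new-disk1
zpool status tank                      # wait until the resilver completes
# ...repeat the replace/resilver cycle for every drive in the vdev...
# with autoexpand=on, the extra capacity appears after the last replacement
```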
&lt;p&gt;If you are OK with this approach - and people have used it - it is a way to work around the 'ZFS-tax'. &lt;/p&gt;
&lt;h3&gt;Not using ZFS is putting your data at great risk!&lt;/h3&gt;
&lt;p&gt;The BSDNOW podcast seems to agree with me that if you want true data safety, this 'ZFS-tax' is just the price you have to pay. Either you go with mirrors or you accept the extra parity redundancy. &lt;/p&gt;
&lt;p&gt;It is not my goal to steer you away from ZFS. The above is true. ZFS offers something no other (stable) file system currently offers to home NAS builders. But at a cost.&lt;/p&gt;
&lt;p&gt;The thing is that I find it perfectly reasonable for home NAS users to just buy a Synology, QNAP or some ready-made NAS from another quality brand. That's what the majority of people do and I think it's a reasonable option. I don't think you are taking crazy risks if you would do so.&lt;/p&gt;
&lt;p&gt;If you do build your own &lt;em&gt;home&lt;/em&gt; NAS, it's &lt;em&gt;reasonable&lt;/em&gt; to accept the 'risk' of using Windows with storage spaces or hardware RAID. Or using Linux with MDADM or hardware RAID. I would say: &lt;em&gt;ZFS is clearly technically the better option&lt;/em&gt;, but those 'legacy' options are not so bad that you are taking unreasonable risks with your data. &lt;/p&gt;
&lt;p&gt;So using ZFS is the &lt;em&gt;better&lt;/em&gt; option, it's up to you and your particular needs and circumstances to decide if using ZFS is worth it for you.&lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:dead"&gt;
&lt;p&gt;I believe hardware-based RAID is 100% dead, especially with SSDs, but historically speaking hardware RAID allowed for flexible expansion.&amp;#160;&lt;a class="footnote-backref" href="#fnref:dead" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:z3"&gt;
&lt;p&gt;It can even use triple parity (RAID-Z3) but I doubt many of you will ever need that.&amp;#160;&lt;a class="footnote-backref" href="#fnref:z3" title="Jump back to footnote 2 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:san"&gt;
&lt;p&gt;For my own 71 TB storage NAS I decided at that time to run with an eighteen-disk VDEV plus a six-disk VDEV. Not standard, but I decided that I accept the risk.&amp;#160;&lt;a class="footnote-backref" href="#fnref:san" title="Jump back to footnote 3 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:no"&gt;
&lt;p&gt;Expanding with a VDEV consisting of a mirrored pair is technically possible but it breaks the RAID-Z2 redundancy. It doesn't make much sense to me.&amp;#160;&lt;a class="footnote-backref" href="#fnref:no" title="Jump back to footnote 4 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:example"&gt;
&lt;p&gt;Just an example for illustration purposes.&amp;#160;&lt;a class="footnote-backref" href="#fnref:example" title="Jump back to footnote 5 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Storage"></category><category term="ZFS"></category></entry><entry><title>ZFS performance on HP Proliant Microserver Gen8 G1610T</title><link href="https://louwrentius.com/zfs-performance-on-hp-proliant-microserver-gen8-g1610t.html" rel="alternate"></link><published>2015-08-14T12:00:00+02:00</published><updated>2015-08-14T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2015-08-14:/zfs-performance-on-hp-proliant-microserver-gen8-g1610t.html</id><summary type="html">&lt;p&gt;I think the HP Proliant Microserver Gen8 is a very interesting little box if you want to build your own ZFS-based NAS. The benchmarks I've performed seem to confirm this. &lt;/p&gt;
&lt;p&gt;The Microserver Gen8 has nice features such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;iLO (KVM over IP with dedicated network interface)&lt;/li&gt;
&lt;li&gt;support for ECC memory …&lt;/li&gt;&lt;/ul&gt;</summary><content type="html">&lt;p&gt;I think the HP Proliant Microserver Gen8 is a very interesting little box if you want to build your own ZFS-based NAS. The benchmarks I've performed seem to confirm this. &lt;/p&gt;
&lt;p&gt;The Microserver Gen8 has nice features such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;iLO (KVM over IP with dedicated network interface)&lt;/li&gt;
&lt;li&gt;support for ECC memory&lt;/li&gt;
&lt;li&gt;2 x Gigabit network ports&lt;/li&gt;
&lt;li&gt;Free PCIe slot (half-height)&lt;/li&gt;
&lt;li&gt;Small footprint&lt;/li&gt;
&lt;li&gt;Fairly silent&lt;/li&gt;
&lt;li&gt;good build quality&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The Microserver Gen8 can be a better solution than the offerings of - for example - Synology or QNAP because you can create a more reliable system based on ECC-memory and ZFS. &lt;/p&gt;
&lt;p&gt;&lt;img alt="gen8" src="https://louwrentius.com/static/images/gen8server.png" /&gt;&lt;/p&gt;
&lt;p&gt;Please note that the G1610T version of the Microserver Gen8 does not ship with a DVD/CD drive as depicted in the image above.&lt;/p&gt;
&lt;p&gt;The Gen8 can be found fairly cheap on the European market at around 240 Euro including taxes, and if you put in an extra 8 GB of memory on top of the 2 GB installed you have a total of 10 GB, which is more than enough to support ZFS.&lt;/p&gt;
&lt;p&gt;The Gen8 has room for 4 x 3.5" hard drives, so with today's large disk sizes you can pack quite a bit of storage inside this compact machine. &lt;/p&gt;
&lt;p&gt;&lt;img alt="gen82" src="https://louwrentius.com/static/images/gen8server2.png" /&gt;&lt;/p&gt;
&lt;h3&gt;Net storage capacity&lt;/h3&gt;
&lt;p&gt;This table gives you a quick overview of the net storage capacity you would get depending on the chosen drive size and redundancy.&lt;/p&gt;
&lt;table border="0" cellpadding="10" cellspacing="2"&gt;
&lt;tr&gt;&lt;th&gt;Drive size&lt;/th&gt;&lt;th align='right'&gt;RAIDZ&lt;/th&gt;&lt;th align='right'&gt;RAIDZ2 or Mirror&lt;/th&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;3 TB&lt;/td&gt;&lt;td align='right'&gt; 9 TB&lt;/td&gt;&lt;td align='right'&gt; 6 TB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;4 TB&lt;/td&gt;&lt;td align='right'&gt;12 TB&lt;/td&gt;&lt;td align='right'&gt; 8 TB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;6 TB&lt;/td&gt;&lt;td align='right'&gt;18 TB&lt;/td&gt;&lt;td align='right'&gt;12 TB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;8 TB&lt;/td&gt;&lt;td align='right'&gt;24 TB&lt;/td&gt;&lt;td align='right'&gt;16 TB&lt;/td&gt;&lt;/tr&gt;

&lt;/table&gt;
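&lt;p&gt;The table follows directly from the vdev math for four drives: RAID-Z keeps three drives' worth of data, while RAID-Z2 or striped mirrors keep two. A small shell loop regenerates it:&lt;/p&gt;

```shell
# Four-drive vdev: RAIDZ usable = 3 x drive size, RAIDZ2/mirror = 2 x drive size
for size in 3 4 6 8; do
  echo "$size TB drives: RAIDZ $(( 3 * size )) TB, RAIDZ2/mirror $(( 2 * size )) TB"
done
```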

&lt;h3&gt;Boot device&lt;/h3&gt;
&lt;p&gt;If you want to use all four drive slots for storage, you need to boot this machine from either the fifth internal SATA port, the internal USB 2.0 port or the microSD card slot.&lt;/p&gt;
&lt;p&gt;The fifth SATA port is not bootable if you disable the on-board RAID controller and run in pure AHCI mode. This mode is probably the best mode for ZFS as there seems to be no RAID controller firmware active between the disks and ZFS. However, only the four 3.5" drive bays are bootable.&lt;/p&gt;
&lt;p&gt;The fifth SATA port is bootable if you configure SATA to operate in Legacy mode. This is not recommended as you lose the benefits of AHCI such as hot-swap of disks and there are probably also performance penalties. &lt;/p&gt;
&lt;p&gt;The fifth SATA port is also bootable if you &lt;em&gt;enable&lt;/em&gt; the on-board RAID controller, but do &lt;em&gt;not&lt;/em&gt; configure any RAID arrays with the drives you plan to use with ZFS (Thanks Mikko Rytilahti). You do need to put the boot drive in a RAID volume in order to be able to boot from the fifth SATA port.&lt;/p&gt;
&lt;p&gt;The unconfigured drives will just be passed as AHCI devices to the OS and thus can be used in your ZFS array. The big question here is what happens if you encounter read errors or other drive problems that ZFS could handle, but would be a reason for the RAID controller to kick a drive off the SATA bus. I have no information on that.&lt;/p&gt;
&lt;p&gt;I myself used an old 2.5" hard drive with a SATA-to-USB converter, which I stuck in the case (use double-sided tape or velcro to mount it to the PSU). Booting from a USB stick is also an option, although a regular 2.5" hard drive or SSD is probably more reliable (flash wear) and faster.&lt;/p&gt;
&lt;h3&gt;Boot performance&lt;/h3&gt;
&lt;p&gt;The Microserver Gen8 takes about 1 minute and 50 seconds just to pass the BIOS boot process and start booting the operating system (you will hear a beep).&lt;/p&gt;
&lt;h3&gt;Test method and equipment&lt;/h3&gt;
&lt;p&gt;I'm running Debian Jessie with the latest stable ZFS-on-Linux 0.6.4.
Please note that reportedly FreeNAS also runs perfectly fine on this box.&lt;/p&gt;
&lt;p&gt;I had to run my tests with the disk I had available: &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@debian:~# show disk -sm
-----------------------------------
| Dev | Model              | GB   |   
-----------------------------------
| sda | SAMSUNG HD103UJ    | 1000 |   
| sdb | ST2000DM001-1CH164 | 2000 |   
| sdc | ST2000DM001-1ER164 | 2000 |   
| sdd | SAMSUNG HM250HI    | 250  |   
| sde | ST2000DM001-1ER164 | 2000 |   
-----------------------------------
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The 250 GB is a portable disk connected to the internal USB port. It is used as the OS boot device. The other disks, 1 x 1 TB and 3 x 2 TB are put together in a single RAIDZ pool, which results in 3 TB of storage. &lt;/p&gt;
&lt;h3&gt;Tests with 4-disk RAIDZ VDEV&lt;/h3&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@debian:~# zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
testpool  48.8G  2.54T  48.8G  /testpool
root@debian:~# zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

    NAME                        STATE     READ WRITE CKSUM
    testpool                    ONLINE       0     0     0
      raidz1-0                  ONLINE       0     0     0
        wwn-0x50000f0008064806  ONLINE       0     0     0
        wwn-0x5000c5006518af8f  ONLINE       0     0     0
        wwn-0x5000c5007cebaf42  ONLINE       0     0     0
        wwn-0x5000c5007ceba5a5  ONLINE       0     0     0

errors: No known data errors
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Because a NAS will face data transfers that are sequential in nature, I've done some tests with 'dd' to measure this performance. &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Read performance:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@debian:~# dd if=/testpool/test.bin of=/dev/null bs=1M
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 162.429 s, 323 MB/s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Write performance:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@debian:~# dd if=/dev/zero of=/testpool/test.bin bs=1M count=50000 conv=sync
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 169.572 s, 309 MB/s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;h3&gt;Test with 3-disk RAIDZ VDEV&lt;/h3&gt;
&lt;p&gt;After the previous test I wondered what would happen if I would exclude the older 1 TB disk and create a pool with just the 3 x 2 TB drives. This is the result:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Read performance:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@debian:~# dd if=/testpool/test.bin of=/dev/null bs=1M conv=sync
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 149.509 s, 351 MB/s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Write performance:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@debian:~# dd if=/dev/zero of=/testpool/test.bin bs=1M count=50000 conv=sync
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 144.832 s, 362 MB/s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The performance is clearly better, even though there's one disk fewer in the VDEV.
I would have liked to test what performance four 2 TB drives would achieve, but I only have three. &lt;/p&gt;
&lt;p&gt;The result does show that the pool is more than capable of sustaining gigabit network transfer speeds. &lt;/p&gt;
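&lt;p&gt;For reference, gigabit Ethernet tops out at 125 MB/s raw (1000 megabits divided by 8 bits per byte), and after protocol overhead roughly 110-118 MB/s is the practical ceiling, so the pool has bandwidth to spare:&lt;/p&gt;

```shell
echo $(( 1000 / 8 ))   # raw gigabit ceiling: 125 MB/s
# NFS/SMB transfers typically land around 110-118 MB/s after overhead
```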
&lt;p&gt;This is confirmed when performing actual network file transfers. In the example below, I simulate a copy of a 50 GB test file from the Gen8 to a test system using NFS. Tests are performed using the 3-disk pool.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;NFS read performance:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@nano:~# dd if=/mnt/server/test2.bin of=/dev/null bs=1M
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 443.085 s, 118 MB/s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;NFS write performance:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@nano:~# dd if=/dev/zero of=/mnt/server/test2.bin bs=1M count=50000 conv=sync 
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 453.233 s, 116 MB/s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I think these results are excellent. Tests with the 'cp' command give the same results.&lt;/p&gt;
&lt;p&gt;I've also done some tests with the SMB/CIFS protocol, using a second Linux box as a CIFS client to connect to the Gen8. &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;CIFS read performance:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@nano:~# dd if=/mnt/test/test.bin of=/dev/null bs=1M
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 527.778 s, 99.3 MB/s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;CIFS write performance:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@nano:~# dd if=/dev/zero of=/mnt/test/test3.bin bs=1M count=50000 conv=sync
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 448.677 s, 117 MB/s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Hot-swap support&lt;/h3&gt;
&lt;p&gt;Although it's even printed on the hard drive caddies that hot-swap is not supported, it does seem to work perfectly fine if you run the SATA controller in AHCI mode. &lt;/p&gt;
&lt;h3&gt;Fifth SATA port for SSD SLOG/L2ARC?&lt;/h3&gt;
&lt;p&gt;If you buy a converter cable that converts a floppy power connector to a SATA power connector, you could install an SSD. This SSD can then be used as a dedicated SLOG device and/or L2ARC cache if you have a need for this.&lt;/p&gt;
&lt;h3&gt;RAIDZ, is that OK?&lt;/h3&gt;
&lt;p&gt;If you want maximum storage capacity with redundancy, RAIDZ is the only option. RAIDZ2 (RAID6) or two mirrored VDEVs is more reliable, but will reduce available storage space by a third. &lt;/p&gt;
&lt;p&gt;The main risk of RAIDZ is a double-drive failure, because with larger drive sizes a resilver of a VDEV will take quite some time. It could take more than a day before the pool is resilvered, during which you run without redundancy.&lt;/p&gt;
&lt;p&gt;With the low number of drives in the VDEV the risk of a second drive failure may be low enough to be acceptable. That's up to you.&lt;/p&gt;
&lt;h3&gt;Noise levels&lt;/h3&gt;
&lt;p&gt;In the past, there have been &lt;a href="http://h30499.www3.hp.com/t5/ProLiant-Servers-Netservers/MicroServer-Gen8-is-noisy/td-p/6171563/page/3#.Vc31JLQbaS0"&gt;reports&lt;/a&gt; about the Gen8 making tons of noise because the rear chassis fan spins at a high RPM if the RAID card is set to AHCI mode.&lt;/p&gt;
&lt;p&gt;I myself have not encountered this problem. The machine is almost silent.&lt;/p&gt;
&lt;h3&gt;Power consumption&lt;/h3&gt;
&lt;p&gt;With drives spinning: 50-55 Watt.
With drives standby: 30-35 Watt.&lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;I think my benchmarks show that the Microserver Gen8 could be an interesting platform if you want to create your own ZFS-based NAS.&lt;/p&gt;
&lt;p&gt;Please note that since the Gen9 server platform has been out for some time, HP may release a Gen9 version of the Microserver in the near future. However, as of August 2015, there is no information on this yet and it is not clear if a successor is going to be released.&lt;/p&gt;</content><category term="Storage"></category><category term="ZFS"></category><category term="microserver"></category></entry><entry><title>The sorry state of CoW file systems</title><link href="https://louwrentius.com/the-sorry-state-of-cow-file-systems.html" rel="alternate"></link><published>2015-03-01T12:00:00+01:00</published><updated>2015-03-01T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2015-03-01:/the-sorry-state-of-cow-file-systems.html</id><summary type="html">&lt;p&gt;I'd like to argue that both ZFS and BTRFS are incomplete file systems with their own drawbacks and that it may still be a long way off before we have something truly great.&lt;/p&gt;
&lt;p&gt;Both ZFS and BTRFS are heroic feats of engineering, created by people who are probably …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I'd like to argue that both ZFS and BTRFS are incomplete file systems with their own drawbacks and that it may still be a long way off before we have something truly great.&lt;/p&gt;
&lt;p&gt;Both ZFS and BTRFS are heroic feats of engineering, created by people who are probably ten times more capable and smarter than I am. There is no question about my appreciation for these file systems and what they accomplish. &lt;/p&gt;
&lt;p&gt;Still, as an end-user, I would like to see some features that are often either missing or incomplete. Make no mistake: I believe that both ZFS and BTRFS are probably the best file systems we have today. But they can be much better.&lt;/p&gt;
&lt;p&gt;I want to start with a quick overview of why both ZFS and BTRFS are such great file systems and why you should take an interest in them. &lt;/p&gt;
&lt;p&gt;Then I'd like to discuss their individual drawbacks and explain my argument.&lt;/p&gt;
&lt;h3&gt;Why ZFS and BTRFS are so great&lt;/h3&gt;
&lt;p&gt;Both ZFS and BTRFS are great for two reasons:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;They focus on preserving data integrity&lt;/li&gt;
&lt;li&gt;They simplify storage management&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Data integrity&lt;/h3&gt;
&lt;p&gt;ZFS and BTRFS implement two important techniques that help preserve data. &lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Data is checksummed and its checksum is verified to guard against &lt;a href="https://indico.desy.de/contributionDisplay.py?contribId=65&amp;amp;sessionId=42&amp;amp;confId=257"&gt;bit rot&lt;/a&gt; due to broken hard drives or flaky storage controllers. If redundancy is available (RAID), errors can even be corrected. &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;With Copy-on-Write (CoW), existing data is never overwritten in place, so a calamity like sudden power loss cannot leave existing data in an inconsistent state.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
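&lt;p&gt;The checksum idea can be illustrated with ordinary shell tools. This is only a toy sketch of what a scrub detects, not how ZFS or BTRFS actually implement it:&lt;/p&gt;

```shell
# Toy illustration of checksum-based corruption detection: store a
# checksum next to the data, silently corrupt one byte, then verify
# that the checksum no longer matches.
tmp=$(mktemp -d)
printf 'important data' > "$tmp/block"
sha256sum "$tmp/block" > "$tmp/block.sum"
# simulate bit rot: overwrite a single byte in place
printf 'X' | dd of="$tmp/block" bs=1 seek=3 conv=notrunc status=none
if sha256sum --status -c "$tmp/block.sum"; then
  result="data intact"
else
  result="corruption detected"
fi
echo "$result"    # prints: corruption detected
rm -r "$tmp"
```

&lt;p&gt;A real CoW file system does this per record, and with redundancy available it can also repair the damaged copy from a good one.&lt;/p&gt;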
&lt;h3&gt;Simplified storage management&lt;/h3&gt;
&lt;p&gt;In the old days, we had MDADM or hardware RAID for redundancy, LVM for logical volume management, and on top of that the file system of choice (EXT3/4, XFS, ReiserFS, etc.). &lt;/p&gt;
&lt;p&gt;The main problem with this approach is that the layers are not aware of each other, which makes things inefficient and more difficult to administer. Each layer needs its own attention. &lt;/p&gt;
&lt;p&gt;For example, if you simply want to expand storage capacity, you need to add drives to your RAID array and expand it. Then, you have to alert the LVM layer of the extra storage and as a last step, grow the file system. &lt;/p&gt;
&lt;p&gt;Both ZFS and BTRFS make capacity expansion a simple one line command that addresses all three steps above. &lt;/p&gt;
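&lt;p&gt;As a sketch, with hypothetical device, array and volume names, the difference looks like this:&lt;/p&gt;

```shell
# Legacy stack: grow each layer separately (example device names).
mdadm /dev/md0 --add /dev/sdh              # add the new drive to the array
mdadm --grow /dev/md0 --raid-devices=7     # grow the RAID array
pvresize /dev/md0                          # tell LVM about the new space
lvextend -l +100%FREE /dev/vg0/data        # grow the logical volume
resize2fs /dev/vg0/data                    # finally, grow the file system

# ZFS: one command adds a whole new vdev to the pool.
zpool add tank raidz2 sdh sdi sdj sdk sdl sdm
```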
&lt;p&gt;Why are ZFS and BTRFS capable of doing this? Because they incorporate RAID, LVM and the file system in one single integrated solution. Each 'layer' is aware of the others; they are tightly integrated. Because of this integration, rebuilds after a drive failure are often faster than with 'legacy RAID' solutions, because only the actual data needs to be rebuilt, not the entire drive. &lt;/p&gt;
&lt;p&gt;And I'm not even talking about the joy of snapshots here. &lt;/p&gt;
&lt;h3&gt;The inflexibility of ZFS&lt;/h3&gt;
&lt;p&gt;The storage building block of ZFS is a VDEV. A VDEV is either a single disk (not so interesting) or some RAID scheme, such as mirroring, single-parity (RAIDZ), dual-parity (RAIDZ2) and even triple-parity (RAIDZ3).&lt;/p&gt;
&lt;p&gt;To me, a big downside to ZFS is the fact that you &lt;em&gt;cannot expand&lt;/em&gt; a VDEV. Well, the only way you can expand a VDEV is quite convoluted: you have to replace all of the existing drives, one by one, with bigger ones, rebuilding the VDEV each time you replace a drive. Only when all drives are of the higher capacity can you expand the VDEV. This is quite impractical and time-consuming, if you ask me.&lt;/p&gt;
&lt;p&gt;ZFS expects you to just add extra VDEVs. So if you start with a single 6-drive RAIDZ2 (RAID6), you are expected to add another 6-drive RAIDZ2 if you want to expand capacity. &lt;/p&gt;
&lt;p&gt;What I would want is to just add one or two more drives and grow the VDEV, as has been possible with many hardware RAID solutions and with "mdadm --grow" for ages.&lt;/p&gt;
&lt;p&gt;Why do I prefer this over adding VDEVs? Because it's quite evident that this is far more economical. If I could just expand my RAIDZ2 from 6 drives to 12 drives, I would only sacrifice two drives for parity. If I instead end up with two 6-drive RAIDZ2 VDEVs, I sacrifice four drives (16% vs 33% capacity loss). &lt;/p&gt;
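&lt;p&gt;The arithmetic behind those percentages, as a quick sketch:&lt;/p&gt;

```shell
# Parity overhead at 12 drives, RAIDZ2 either way.
drives=12
grown_parity=2    # one 12-drive RAIDZ2 vdev (if growing a vdev were possible)
added_parity=4    # two 6-drive RAIDZ2 vdevs, as ZFS expects you to expand
grown_loss=$(( 100 * grown_parity / drives ))
added_loss=$(( 100 * added_parity / drives ))
echo "grown vdev: ${grown_loss}% of raw capacity lost to parity"   # 16%
echo "two vdevs:  ${added_loss}% of raw capacity lost to parity"   # 33%
```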
&lt;p&gt;I can imagine that in the enterprise world, this is just not that big of a deal, a bunch of drives are a rounding error on the total budget and availability and performance are more important. Still, I'd like to have this option.&lt;/p&gt;
&lt;p&gt;Either you are forced to buy and implement the storage you expect to need in the future upfront, or you must add it later on, wasting drives on parity you otherwise would not have. &lt;/p&gt;
&lt;p&gt;Maybe my wish for a zpool grow option is more geared towards hobbyist or home usage of ZFS, and ZFS was always focused on enterprise needs, not the needs of hobbyists. So I'm aware of the context here.&lt;/p&gt;
&lt;p&gt;I'm not done with ZFS, however, because there is another great inflexibility in the way ZFS works. If you don't put the 'right' number of drives in a VDEV, you may lose significant portions of storage as a side-effect of how ZFS works. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;The following ZFS pool configurations are optimal for modern 4K sector harddrives:
RAID-Z: 3, 5, 9, 17, 33 drives
RAID-Z2: 4, 6, 10, 18, 34 drives
RAID-Z3: 5, 7, 11, 19, 35 drives
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I've seen first-hand with my 71 TiB NAS that if you don't use the optimal number of drives in a VDEV, you may lose whole drives' worth of net storage capacity. In that regard, my 24-drive chassis is very suboptimal. &lt;/p&gt;
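&lt;p&gt;The pattern behind those 'optimal' numbers is that the number of &lt;em&gt;data&lt;/em&gt; drives (total minus parity) should be a power of two, so that records divide evenly over the drives. A sketch that regenerates the table above:&lt;/p&gt;

```shell
# Optimal RAID-Z vdev widths: 2^k data drives plus p parity drives.
for p in 1 2 3; do
  sizes=""
  for k in 1 2 3 4 5; do
    sizes="$sizes $(( (1 << k) + p ))"
  done
  echo "RAID-Z$p:$sizes"
done
```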
&lt;h3&gt;The sad state of RAID on BTRFS&lt;/h3&gt;
&lt;p&gt;BTRFS has none of the downsides of ZFS described in the previous section, as far as I'm aware. It has plenty of its own, though. First of all: BTRFS is still not stable; in particular, &lt;a href="http://kernelnewbies.org/Linux_3.19"&gt;the RAID 5/6 part is unstable&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;The RAID 5 and RAID 6 implementations are so new that the ink they were written with is still wet (February 8th, 2015). Not something you want to trust your important data to, I suppose. &lt;/p&gt;
&lt;p&gt;I did set up a test environment with this new Linux kernel (3.19.0) and BTRFS to see how it works, and although it is not production-ready yet, I really like what I see. &lt;/p&gt;
&lt;p&gt;With BTRFS you can just add drives to or remove drives from a RAID6 array as you see fit. Add two? Remove three? Whatever; the only thing you have to wait for is BTRFS rebalancing the data over the new or remaining drives. &lt;/p&gt;
&lt;p&gt;This is friggin' awesome. &lt;/p&gt;
&lt;p&gt;If you want to remove a drive, just wait for BTRFS to copy the data from that drive to the other remaining drives and you can remove it. You want to expand storage? Just add the drives to your storage pool and have BTRFS rebalance the data (which may take a while, but it works).&lt;/p&gt;
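&lt;p&gt;In terms of commands, growing and shrinking boil down to something like this (the mount point and device names are examples):&lt;/p&gt;

```shell
# Grow: add drives to the mounted BTRFS file system, then rebalance.
btrfs device add /dev/sdh /dev/sdi /mnt/data
btrfs balance start /mnt/data

# Shrink: BTRFS migrates the data off the drive before removing it.
btrfs device delete /dev/sdg /mnt/data
```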
&lt;p&gt;But I'm still a bit sad, because BTRFS does not support anything beyond RAID6. No multiple RAID6 (RAID60) arrays or triple-parity, which ZFS has supported for ages. With my 24-drive file server, putting 24 drives in a single RAID6 starts to feel like asking for trouble. Triple-parity or RAID 60 would probably be more reasonable. But no luck with BTRFS. &lt;/p&gt;
&lt;p&gt;However, what really frustrates me is &lt;a href="http://blog.ronnyegner-consulting.de/2014/12/10/parity-based-redundancy-raid56triple-parity-and-beyond-on-btrfs-and-mdadm-dec-2014/comment-page-1/#comment-784446"&gt;this article&lt;/a&gt; by Ronny Egner. The author of SnapRAID, Andrea Mazzoleni, has written a functional patch for BTRFS that implements not only triple-parity RAID, but even up to &lt;em&gt;six&lt;/em&gt; parity disks for a volume. &lt;/p&gt;
&lt;p&gt;The maddening thing is that the BTRFS maintainers are not planning to include this patch in the BTRFS code base. Please read Ronny's blog. The people working on BTRFS work for enterprises that want enterprise features. They don't care about triple-parity or features like that, because they have access to something presumably better: distributed file systems, which may do away with the need for larger disk arrays and thus for triple-parity. &lt;/p&gt;
&lt;p&gt;BTRFS has been in development for a very long time and only recently has RAID 5/6 support been introduced. The risk of the write hole, something ZFS addressed ages ago, is still an open issue. Considering all of this, BTRFS is still a very long way from being the file system of choice for larger storage arrays.&lt;/p&gt;
&lt;p&gt;BTRFS seems to be much more flexible in terms of storage expansion and shrinking, but its slow pace of development makes it unusable for anything serious for at least the next year, I guess. &lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;BTRFS addresses all the inflexibilities of ZFS, but its immaturity and lack of more advanced RAID schemes make it unusable for larger storage solutions. This is sad, because by design it seems to be the better, far more flexible option compared to ZFS.&lt;/p&gt;
&lt;p&gt;I do understand the view of the BTRFS developers. With enterprise data sets, at scale, it's better to handle storage and redundancy with distributed file systems than at the scale of a single system. But this kind of environment is not within reach for many. &lt;/p&gt;
&lt;p&gt;So at the moment, compared to BTRFS, ZFS is still the better option for people who want to setup large, reliable storage arrays.&lt;/p&gt;</content><category term="Storage"></category><category term="ZFS"></category><category term="BTRFS"></category></entry><entry><title>Configuring SCST iSCSI target on Debian Linux (Wheezy)</title><link href="https://louwrentius.com/configuring-scst-iscsi-target-on-debian-linux-wheezy.html" rel="alternate"></link><published>2015-02-01T12:00:00+01:00</published><updated>2015-02-01T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2015-02-01:/configuring-scst-iscsi-target-on-debian-linux-wheezy.html</id><summary type="html">&lt;p&gt;My goal is to export ZFS zvol volumes through iSCSI to other machines. The platform I'm using is Debian Wheezy. &lt;/p&gt;
&lt;p&gt;There are three iSCSI target solutions available for Linux:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href="http://linux-iscsi.org/wiki/Main_Page"&gt;LIO&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="http://iscsitarget.sourceforge.net"&gt;IET&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://scst.sourceforge.net"&gt;SCST&lt;/a&gt; &lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I've briefly played with &lt;a href="http://linux-iscsi.org/wiki/Main_Page"&gt;LIO&lt;/a&gt; but the targetcli tool is interactive only. If you want to automate and …&lt;/p&gt;</summary><content type="html">&lt;p&gt;My goal is to export ZFS zvol volumes through iSCSI to other machines. The platform I'm using is Debian Wheezy. &lt;/p&gt;
&lt;p&gt;There are three iSCSI target solutions available for Linux:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href="http://linux-iscsi.org/wiki/Main_Page"&gt;LIO&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="http://iscsitarget.sourceforge.net"&gt;IET&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://scst.sourceforge.net"&gt;SCST&lt;/a&gt; &lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I've briefly played with &lt;a href="http://linux-iscsi.org/wiki/Main_Page"&gt;LIO&lt;/a&gt; but the targetcli tool is interactive only. If you want to automate and use scripts, you need to learn the Python API. I wonder what's wrong with a plain old text-based configuration file. &lt;/p&gt;
&lt;p&gt;iscsitarget or &lt;a href="http://iscsitarget.sourceforge.net"&gt;IET&lt;/a&gt; is broken on Debian Wheezy. If you just 'apt-get install iscsitarget', the iSCSI service will crash as soon as you connect to it. This has been the case for years; I wonder why they don't just drop this package. It is true that you can manually download the "latest" version of IET, but don't bother, it seems abandoned: the &lt;a href="http://sourceforge.net/projects/iscsitarget/files/iscsitarget/"&gt;latest release&lt;/a&gt; dates from 2010.&lt;/p&gt;
&lt;p&gt;It seems that &lt;a href="http://scst.sourceforge.net"&gt;SCST&lt;/a&gt; is at least maintained and uses plain old text-based configuration files. So it has that going for it, which is nice. SCST does not require kernel patches to run, but a patch enabling "CONFIG_TCP_ZERO_COPY_TRANSFER_COMPLETION_NOTIFICATION" is said to improve performance. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;To use full power of TCP zero-copy transmit functions, especially
dealing with user space supplied via scst_user module memory, iSCSI-SCST
needs to be notified when Linux networking finished data transmission.
For that you should enable CONFIG_TCP_ZERO_COPY_TRANSFER_COMPLETION_NOTIFICATION
kernel config option. This is highly recommended, but not required.
Basically, iSCSI-SCST works fine with an unpatched Linux kernel with the
same or better speed as other open source iSCSI targets, including IET,
but if you want even better performance you have to patch and rebuild
the kernel.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;So in general, patching your kernel is not always required, but an example will be given anyway.&lt;/p&gt;
&lt;h3&gt;Getting the source&lt;/h3&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;cd /usr/src
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;We need the following files:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;wget http://heanet.dl.sourceforge.net/project/scst/scst/scst-3.0.0.tar.bz2
wget http://heanet.dl.sourceforge.net/project/scst/iscsi-scst/iscsi-scst-3.0.0.tar.bz2
wget http://heanet.dl.sourceforge.net/project/scst/scstadmin/scstadmin-3.0.0.tar.bz2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;We extract them with:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;tar xjf scst-3.0.0.tar.bz2
tar xjf iscsi-scst-3.0.0.tar.bz2
tar xjf scstadmin-3.0.0.tar.bz2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Patching the kernel&lt;/h3&gt;
&lt;p&gt;You can skip this part if you don't feel like you need or want to patch your kernel. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;apt-get install linux-source kernel-package
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;We need to extract the kernel source:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;cd /usr/src
tar xjf linux-source-3.2.tar.bz2
cd linux-source-3.2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Now we first copy the kernel configuration from the current system:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;cp /boot/config-3.2.0-4-amd64 .config
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;We patch the kernel with two patches:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;patch -p1 &amp;lt; /usr/src/scst-3.0.0/kernel/scst_exec_req_fifo-3.2.patch
patch -p1 &amp;lt; /usr/src/iscsi-scst-3.0.0/kernel/patches/put_page_callback-3.2.57.patch
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;It seems that for many different kernel versions, separate patches can be found in the above paths. If you follow these steps at a later date, please check the version numbers.&lt;/p&gt;
&lt;p&gt;The patches are based on stock kernels from kernel.org. I've applied the patches against the Debian-patched kernel and faced no problems, but your mileage may vary. &lt;/p&gt;
&lt;p&gt;Let's build the kernel (will take a while):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;yes | make-kpkg -j $(nproc) --initrd --revision=1.0.custom.scst kernel_image
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The 'yes' is piped into the make-kpkg command to answer some questions with 'yes' during compilation. You could also add the appropriate value in the .config file.&lt;/p&gt;
&lt;p&gt;The end-result of this command is a kernel package in .deb format in /usr/src.
Install it like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;dpkg -i /usr/src/&amp;lt;custom kernel image&amp;gt;.deb
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Now reboot into the new kernel:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;reboot
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Compiling SCST, ISCSI-SCST and SCSTADMIN&lt;/h3&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;cd /usr/src/scst-3.0.0
make install

cd /usr/src/iscsi-scst-3.0.0
make install

cd /usr/src/scstadmin-3.0.0
make install
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Make SCST start at boot&lt;/h3&gt;
&lt;p&gt;On Debian Jessie:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;systemctl enable scst.service
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Configure SCST&lt;/h3&gt;
&lt;p&gt;Copy the example configuration file to /etc:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;cp /usr/src/iscsi-scst-3.0.0/etc/scst.conf /etc
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Edit /etc/scst.conf to your liking. This is an example:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;HANDLER vdisk_fileio {
        DEVICE disk01 {
                filename /dev/sdb
                nv_cache 1
        }
}

TARGET_DRIVER iscsi {
        enabled 1

        TARGET iqn.2015-10.net.vlnb:tgt {
                IncomingUser &amp;quot;someuser somepasswordof12+chars&amp;quot;
                HeaderDigest   &amp;quot;CRC32C,None&amp;quot;
                DataDigest   &amp;quot;CRC32C,None&amp;quot;
                LUN 0 disk01

                enabled 1
        }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Please note that the &lt;strong&gt;password must be at least 12 characters&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;After this, you can start the SCST module and connect your initiator to the appropriate LUN.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;/etc/init.d/scst start
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
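&lt;p&gt;On the client side, a Linux initiator can then connect with open-iscsi. A hypothetical session (the IP address is an example; the CHAP username and password configured above go into /etc/iscsi/iscsid.conf):&lt;/p&gt;

```shell
# Hypothetical initiator session using open-iscsi; 192.0.2.10 stands
# in for the address of the SCST server.
apt-get install open-iscsi
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
iscsiadm -m node -T iqn.2015-10.net.vlnb:tgt -p 192.0.2.10 --login
# the exported LUN now shows up as a new block device (check dmesg)
```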

&lt;h3&gt;Closing words&lt;/h3&gt;
&lt;p&gt;It turned out that setting up SCST and compiling a kernel wasn't that much of a hassle. The main issue with patching kernels is that you have to repeat the procedure every time a new kernel version is released. And there is always a risk that a new kernel version breaks the SCST patches. &lt;/p&gt;
&lt;p&gt;However, the whole process can be easily automated and thus run as a test in a virtual environment. &lt;/p&gt;</content><category term="Storage"></category><category term="iSCSI"></category><category term="SCST"></category></entry><entry><title>Why I Do Use ZFS as a File System for My NAS</title><link href="https://louwrentius.com/why-i-do-use-zfs-as-a-file-system-for-my-nas.html" rel="alternate"></link><published>2015-01-29T12:00:00+01:00</published><updated>2015-01-29T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2015-01-29:/why-i-do-use-zfs-as-a-file-system-for-my-nas.html</id><summary type="html">&lt;p&gt;In February 2011, I posted an article about my motivations &lt;a href="https://louwrentius.com/why-i-do-not-use-zfs-as-a-file-system-for-my-nas.html"&gt;why I did &lt;em&gt;not&lt;/em&gt; use ZFS&lt;/a&gt; as a file system for my &lt;a href="https://louwrentius.com/20-disk-18-tb-raid-6-storage-based-on-debian-linux.html"&gt;18 TB NAS&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;You have to understand that &lt;em&gt;at the time&lt;/em&gt;, I believed the arguments in the article were relevant, but much has changed since then, and I …&lt;/p&gt;</summary><content type="html">&lt;p&gt;In February 2011, I posted an article about my motivations &lt;a href="https://louwrentius.com/why-i-do-not-use-zfs-as-a-file-system-for-my-nas.html"&gt;why I did &lt;em&gt;not&lt;/em&gt; use ZFS&lt;/a&gt; as a file system for my &lt;a href="https://louwrentius.com/20-disk-18-tb-raid-6-storage-based-on-debian-linux.html"&gt;18 TB NAS&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;You have to understand that &lt;em&gt;at the time&lt;/em&gt;, I believed the arguments in the article were relevant, but much has changed since then, and I no longer believe that article is relevant.&lt;/p&gt;
&lt;p&gt;My stance on ZFS is in the context of a &lt;em&gt;home NAS build&lt;/em&gt;. &lt;/p&gt;
&lt;p&gt;I really recommend giving ZFS a serious consideration if you are building your own NAS. It's probably the best file system you can use if you care about data integrity.&lt;/p&gt;
&lt;p&gt;ZFS may only be available for non-Windows operating systems, but there are quite a few easy-to-use NAS distros available that turn your hardware into a full-featured home NAS box, that can be managed through your web browser. A few examples:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="http://www.freenas.org/"&gt;FreeNAS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://www.nas4free.org"&gt;NAS4free&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://zfsguru.com"&gt;ZFSguru&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I also want to add this: I don't think it's &lt;em&gt;wrong&lt;/em&gt; or particularly risky if you - as a home NAS builder - decide &lt;strong&gt;not&lt;/strong&gt; to use ZFS and select a 'legacy' solution if that better suits your needs. I think that proponents of ZFS often somewhat overstate the risks ZFS mitigates, maybe to promote ZFS. I do think those risks are relevant, but it all depends on your circumstances. So you decide.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;May 2016:&lt;/strong&gt; I have also written a &lt;a href="https://louwrentius.com/should-i-use-zfs-for-my-home-nas.html"&gt;separate article&lt;/a&gt;  on how I feel about using ZFS for DIY home NAS builds.&lt;/p&gt;
&lt;p&gt;Arstechnica article about &lt;a href="http://arstechnica.com/information-technology/2014/06/the-ars-nas-distribution-shootout-freenas-vs-nas4free/1/"&gt;FreeNAS vs NAS4free&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you are quite familiar with FreeBSD or Linux, I do recommend this &lt;a href="http://arstechnica.com/information-technology/2014/02/ars-walkthrough-using-the-zfs-next-gen-filesystem-on-linux/"&gt;ZFS how-to article&lt;/a&gt; from Arstechnica. It offers a very nice introduction to ZFS and explains terms like 'pool' and 'vdev'. &lt;/p&gt;
&lt;p&gt;If you are planning on using ZFS for your own home NAS, I would recommend reading the following articles:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html"&gt;Things you should consider when building a ZFS NAS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://louwrentius.com/things-you-should-consider-when-building-a-zfs-nas.html"&gt;The 'hidden' cost of ZFS for your home NAS build&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;My historical reasons for not using ZFS at the time&lt;/h3&gt;
&lt;p&gt;When I started with my 18 TB NAS in 2009, there was no such thing as ZFS for Linux. ZFS was only available in a stable version for Open Solaris. We all know what happened to Open Solaris &lt;a href="http://en.wikipedia.org/wiki/OpenSolaris"&gt;(it's gone)&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;So you might ask: "Why not use ZFS on FreeBSD then?". Good question, but it was bad timing:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;The FreeBSD implementation of ZFS became only stable [sic] in January 2010, 6 months after I build my NAS (summer 2009). So FreeBSD was not an option at that time.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;One of the other objections against ZFS is the fact that you cannot expand your storage by adding single drives and growing the array as your data set grows. &lt;/p&gt;
&lt;p&gt;A ZFS pool consists of one or more VDEVs. A VDEV is a traditional RAID-array. You expand storage capacity by expanding the ZFS pool, not the VDEVS. You cannot expand the VDEV itself. You can only add VDEVS to a pool. &lt;/p&gt;
&lt;p&gt;So ZFS either forces you to invest upfront in storage you don't need yet, or it forces you to invest later on, because you may waste quite a few extra drives on parity. For example, if you start with a 6-drive RAID6 (RAIDZ2) configuration, you will probably expand with another 6 drives, so the pool has 4 parity drives on 12 total drives (33% loss). Investing upfront in 10 drives instead of 6 would have been more efficient, because you only lose 2 drives out of 10 to parity (20% loss).&lt;/p&gt;
&lt;p&gt;So at the time, I found it reasonable to stick with what I knew: Linux &amp;amp; MDADM.&lt;/p&gt;
&lt;p&gt;But my &lt;a href="https://louwrentius.com/71-tib-diy-nas-based-on-zfs-on-linux.html"&gt;new 71 TiB NAS&lt;/a&gt; is based on ZFS.&lt;/p&gt;
&lt;p&gt;I wrote &lt;a href="https://louwrentius.com/the-future-of-zfs-now-that-opensolaris-is-dead.html"&gt;an article&lt;/a&gt; about my worry that ZFS might die with FreeBSD as its sole backing, but fortunately, I've been proven very, very wrong.&lt;/p&gt;
&lt;p&gt;ZFS is now supported on FreeBSD and &lt;a href="http://zfsonlinux.org"&gt;Linux&lt;/a&gt;. Despite some licensing issues that prevent ZFS from being integrated into the Linux kernel itself, it can still be used as a regular kernel module and it works perfectly. &lt;/p&gt;
&lt;p&gt;There is even an &lt;a href="http://open-zfs.org/wiki/Main_Page"&gt;open-source ZFS consortium&lt;/a&gt; that brings together all the developers for the different operating systems supporting ZFS.&lt;/p&gt;
&lt;p&gt;ZFS is here to stay for a very long time. &lt;/p&gt;</content><category term="ZFS"></category><category term="ZFS"></category></entry><entry><title>FreeBSD 10.1 unattended install over PXE &amp; HTTP (no NFS)</title><link href="https://louwrentius.com/freebsd-101-unattended-install-over-pxe-http-no-nfs.html" rel="alternate"></link><published>2015-01-16T12:00:00+01:00</published><updated>2015-01-16T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2015-01-16:/freebsd-101-unattended-install-over-pxe-http-no-nfs.html</id><summary type="html">&lt;p&gt;To gain some more experience with FreeBSD, I decided to make a PXE-based unattended installation of FreeBSD 10.1. &lt;/p&gt;
&lt;p&gt;My goal is to set something up similar to Debian/Ubuntu + preseeding or Redhat/CentOS + kickstart.&lt;/p&gt;
&lt;p&gt;Getting a PXE-based unattended installation of FreeBSD 10.1 was not easy and I was …&lt;/p&gt;</summary><content type="html">&lt;p&gt;To gain some more experience with FreeBSD, I decided to make a PXE-based unattended installation of FreeBSD 10.1. &lt;/p&gt;
&lt;p&gt;My goal is to set something up similar to Debian/Ubuntu + preseeding or Redhat/CentOS + kickstart.&lt;/p&gt;
&lt;p&gt;Getting a PXE-based unattended installation of FreeBSD 10.1 was not easy and I was unable to automate a ZFS-based install using bsdinstall.&lt;/p&gt;
&lt;p&gt;I would have expected something like a netboot install to be available. &lt;/p&gt;
&lt;p&gt;Below, I've documented what I've done to perform a basic installation of FreeBSD using only DHCP and TFTP; no NFS required. &lt;/p&gt;
&lt;h3&gt;Overview of all the steps:&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;have a working DHCP with PXE boot options &lt;/li&gt;
&lt;li&gt;have a working TFTP server&lt;/li&gt;
&lt;li&gt;customise your pxelinux boot menu&lt;/li&gt;
&lt;li&gt;install a FreeBSD box manually, or use an existing one&lt;/li&gt;
&lt;li&gt;download and install &lt;a href="http://mfsbsd.vx.sk"&gt;mfsbsd&lt;/a&gt; on the FreeBSD system&lt;/li&gt;
&lt;li&gt;download a FreeBSD release iso image on the FreeBSD system &lt;/li&gt;
&lt;li&gt;configure and customise your FreeBSD PXE boot image settings&lt;/li&gt;
&lt;li&gt;build the PXE boot image and copy it to your TFTP server&lt;/li&gt;
&lt;li&gt;PXE boot your system and boot the FreeBSD image &lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Setting up a DHCP server + TFTP server&lt;/h3&gt;
&lt;p&gt;Please take a look at &lt;a href="https://louwrentius.com/automated-install-of-debian-linux-based-on-pxe-net-booting.html"&gt;another article&lt;/a&gt; I wrote on setting up PXE booting.&lt;/p&gt;
&lt;h3&gt;Configuring the PXE boot menu&lt;/h3&gt;
&lt;p&gt;Add these lines to your PXE Menu:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;LABEL FreeBSD10
kernel memdisk
append initrd=BSD/FreeBSD/10.1/mfsbsd-10.1-RC3-amd64.img harddisk raw
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Setup or gain access to a FreeBSD host&lt;/h3&gt;
&lt;p&gt;You need to setup or gain access to a FreeBSD system, because the mfsbsd tool only works on FreeBSD. You will use this system to generate a FreeBSD PXE boot image.&lt;/p&gt;
&lt;h3&gt;Installing mfsbsd&lt;/h3&gt;
&lt;p&gt;First we download &lt;a href="http://mfsbsd.vx.sk"&gt;mfsbsd&lt;/a&gt;. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;fetch http://mfsbsd.vx.sk/release/mfsbsd-2.1.tar.gz
tar xzf mfsbsd-2.1.tar.gz
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Then we get a FreeBSD ISO:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;fetch http://ftp.freebsd.org/pub/FreeBSD/releases/ISO-IMAGES/10.1/FreeBSD-10.1-RELEASE-amd64-disc1.iso
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Mount the ISO:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mdconfig -a -t vnode -f /root/FreeBSD-10.1-RELEASE-amd64-disc1.iso
mount_cd9660 /dev/md0 /cdrom/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;setup rc.local&lt;/h3&gt;
&lt;p&gt;Enter the mfsbsd-2.1 directory. Put the following content in the conf/rc.local file.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;fetch http://&amp;lt;yourwebserver&amp;gt;/pxe/installerconfig -o /etc/installerconfig
tail -n 7 /etc/rc.local &amp;gt; /tmp/start.sh
chmod +x /tmp/start.sh
/tmp/start.sh 
exit 0

#!/bin/csh
setenv DISTRIBUTIONS &amp;quot;kernel.txz base.txz&amp;quot;
setenv BSDINSTALL_DISTDIR /tmp
setenv BSDINSTALL_DISTSITE
ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/10.1-RELEASE

bsdinstall distfetch 
bsdinstall script /etc/installerconfig
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;As you can see there is a script within a script that is executed separately by rc.local. That's a bit ugly but it does work.&lt;/p&gt;
&lt;h3&gt;setup installerconfig (FreeBSD unattended install)&lt;/h3&gt;
&lt;p&gt;The 'installerconfig' script is a script in a special format used by the bsdinstall tool to automate the installation. The top part sets variables that control the unattended installation. The bottom part is a script that is executed post-install, chrooted into the new system. &lt;/p&gt;
&lt;p&gt;Put this in 'installerconfig'&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;PARTITIONS=da0
DISTRIBUTIONS=&amp;quot;kernel.txz base.txz&amp;quot;
BSDINSTALL_DISTDIR=/tmp
BSDINSTALL_DISTSITE=ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/10.1-RELEASE

#!/bin/sh
echo &amp;quot;Installation complete, running in host system&amp;quot;
echo &amp;quot;hostname=\&amp;quot;FreeBSD\&amp;quot;&amp;quot; &amp;gt;&amp;gt; /etc/rc.conf
echo &amp;quot;autoboot_delay=\&amp;quot;5\&amp;quot;&amp;quot; &amp;gt;&amp;gt; /boot/loader.conf
echo &amp;quot;sshd_enable=YES&amp;quot; &amp;gt;&amp;gt; /etc/rc.conf
echo &amp;quot;Setup done&amp;quot; &amp;gt;&amp;gt; /tmp/log.txt
echo &amp;quot;Setup done.&amp;quot;
poweroff
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;As you can see, the post-install script enables SSH, sets the hostname and reduces the autoboot delay.&lt;/p&gt;
&lt;p&gt;Please note that I faced an issue where the bsdinstall program would not interpret the options set in the installerconfig script. This is why I exported them with 'setenv' in the rc.local script.&lt;/p&gt;
&lt;p&gt;With Debian preseeding or Redhat kickstarting, you can host the preseed or kickstart file on a webserver. Changing the PXE-based installation is just a matter of edditing the preseed or kickstart file on the webserver. &lt;/p&gt;
&lt;p&gt;Because it's not fun having to generate a new image every time you want to update your unattended installation, it's recommended to host the installerconfig file on a webserver, as if it is a preseed or kickstart file. &lt;/p&gt;
&lt;p&gt;This saves you from having to regenerate the PXE-boot image file every time.&lt;/p&gt;
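&lt;p&gt;As a sketch of that approach (the paths, contents and checks below are examples of mine, not part of any official procedure), you can generate the installerconfig and sanity-check it before copying it to the webserver:&lt;/p&gt;

```shell
# Generate a minimal installerconfig and sanity-check it before
# publishing it on the webserver (all paths/contents are examples).
{
  printf '%s\n' 'PARTITIONS=da0'
  printf '%s\n' 'DISTRIBUTIONS="kernel.txz base.txz"'
  printf '%s\n' 'BSDINSTALL_DISTDIR=/tmp'
  printf '%s\n' 'BSDINSTALL_DISTSITE=ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/10.1-RELEASE'
  printf '%s\n' ''
  printf '%s\n' '#!/bin/sh'
  printf '%s\n' 'echo "Installation complete"'
  printf '%s\n' 'poweroff'
} > installerconfig

# The unattended install misbehaves if the variables are missing,
# so check for them before publishing the file.
grep -q '^PARTITIONS=' installerconfig
grep -q '^DISTRIBUTIONS=' installerconfig
echo 'installerconfig OK'
```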
&lt;p&gt;You can still put the installer config in the image itself. If you want a fixed 'installerconfig' file containing the bsdinstall instructions, put this file also in the 'conf' directory. Next, edit the Makefile. Search for this string:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;.for FILE in rc.conf ttys
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;For me, it was at line 315. Change it to:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;.for FILE in rc.conf ttys installerconfig
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Building the PXE boot image&lt;/h3&gt;
&lt;p&gt;Now that everything is configured, we can generate the boot image with mfsbsd. 
Run 'make'. If it fails with this error:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Creating image file ...
/root/mfsbsd-2.1/tmp/mnt: write failed, filesystem is full
*** Error code 1

Stop.
make: stopped in /root/mfsbsd-2.1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;just run 'make' again. In my experience, make worked the second time, consistently. I'm not sure why this happens. &lt;/p&gt;
&lt;p&gt;The end result of this whole process is a file like 'mfsbsd-se-10.1-RC3-amd64.img'. &lt;/p&gt;
&lt;p&gt;You can copy this image to the appropriate folder on your TFTP server. In my example it would be:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;/srv/tftp/BSD/FreeBSD/10.1/mfsbsd-10.1-RC3-amd64.img
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Test the PXE installation&lt;/h3&gt;
&lt;p&gt;Boot a test machine from PXE and boot your custom generated image. &lt;/p&gt;
&lt;h3&gt;Final words&lt;/h3&gt;
&lt;p&gt;I'm a bit unhappy about how difficult it was to create a PXE-based unattended FreeBSD installation. The bsdinstall installation software seems buggy to me. However, it could be just me: maybe I have misunderstood how it all works. Still, I can't seem to find any documentation on how to properly use bsdinstall for an unattended installation. &lt;/p&gt;
&lt;p&gt;If anyone has suggestions or ideas to implement an unattended bsdinstall script 'properly', with ZFS support, I'm all ears.&lt;/p&gt;
&lt;p&gt;This is the recipe I tried to use to get a root-on-zfs install:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ZFSBOOT_POOL_NAME=TEST_ROOT
ZFSBOOT_VDEV_TYPE=mirror
ZFSBOOT_POOL_SIZE=10g
ZFSBOOT_DISKS=&amp;quot;da0 da1&amp;quot;
ZFSBOOT_SWAP_SIZE=2g
ZFSBOOT_CONFIRM_LAYOUT=1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The installer would never recognise the second disk and the script would get stuck. &lt;/p&gt;
&lt;p&gt;I'm aware that mfsbsd has an option to use a custom root-on-zfs script, but I wanted to use the 'official' FreeBSD tools. &lt;/p&gt;</content><category term="Uncategorized"></category><category term="PXE"></category></entry><entry><title>Creating configuration backups of HP procurve switches</title><link href="https://louwrentius.com/creating-configuration-backups-of-hp-procurve-switches.html" rel="alternate"></link><published>2015-01-12T12:00:00+01:00</published><updated>2015-01-12T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2015-01-12:/creating-configuration-backups-of-hp-procurve-switches.html</id><summary type="html">&lt;p&gt;I've created a tool called &lt;a href="https://github.com/louwrentius/procurve-watch"&gt;procurve-watch&lt;/a&gt;. It creates a backup of the running switch configuration through secure shell (using scp). &lt;/p&gt;
&lt;p&gt;It also diffs backed up configurations against older versions, in order to keep track of changes. If you run the script from cron every hour or so, you will be …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I've created a tool called &lt;a href="https://github.com/louwrentius/procurve-watch"&gt;procurve-watch&lt;/a&gt;. It creates a backup of the running switch configuration through secure shell (using scp). &lt;/p&gt;
&lt;p&gt;It also diffs backed up configurations against older versions, in order to keep track of changes. If you run the script from cron every hour or so, you will be notified by email of any (running) configuration changes.&lt;/p&gt;
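&lt;p&gt;An hourly run from cron is a one-line crontab entry (the path to the script is just an example):&lt;/p&gt;

```
# m h dom mon dow  command
0 * * * *  /usr/local/bin/procurve-watch
```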
&lt;p&gt;The tool can backup hundreds of switches in seconds as it is running the configuration copy in parallel. &lt;/p&gt;
&lt;p&gt;A tool like &lt;a href="http://www.shrubbery.net/rancid/"&gt;Rancid&lt;/a&gt; may actually be the best choice for this task, but it didn't work for me. The latest version of Rancid doesn't support HP Procurve switches (yet) and older versions created backups containing garbled characters.&lt;/p&gt;
&lt;p&gt;I've &lt;a href="https://github.com/louwrentius/procurve-watch"&gt;released it on github&lt;/a&gt;, check it out and let me know if it works for you and you have suggestions to improve it further.&lt;/p&gt;</content><category term="Networking"></category><category term="Networking"></category></entry><entry><title>Configuring, attacking and securing VRRP on Linux</title><link href="https://louwrentius.com/configuring-attacking-and-securing-vrrp-on-linux.html" rel="alternate"></link><published>2015-01-02T12:00:00+01:00</published><updated>2015-01-02T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2015-01-02:/configuring-attacking-and-securing-vrrp-on-linux.html</id><summary type="html">&lt;p&gt;The VRRP or Virtual Router Redundancy Protocol helps you create a reliable network by using multiple routers in an active/passive configuration. If the primary router fails, the backup router takes over almost seamlessly. &lt;/p&gt;
&lt;p&gt;This is how VRRP works:&lt;/p&gt;
&lt;p&gt;&lt;img alt="vrrp" src="https://louwrentius.com/static/images/vrrp.png" /&gt;&lt;/p&gt;
&lt;p&gt;Clients connect to a virtual IP-address. It is called virtual because …&lt;/p&gt;</summary><content type="html">&lt;p&gt;The VRRP or Virtual Router Redundancy Protocol helps you create a reliable network by using multiple routers in an active/passive configuration. If the primary router fails, the backup router takes over almost seamlessly. &lt;/p&gt;
&lt;p&gt;This is how VRRP works:&lt;/p&gt;
&lt;p&gt;&lt;img alt="vrrp" src="https://louwrentius.com/static/images/vrrp.png" /&gt;&lt;/p&gt;
&lt;p&gt;Clients connect to a virtual IP-address. It is called virtual because the IP-address is not hard-coded to a particular interface on any of the routers. &lt;/p&gt;
&lt;p&gt;If a client asks for the MAC-address that is tied to the virtual IP, the master will respond with its MAC-address. If the master dies, the backup router will notice and start responding to ARP-requests.&lt;/p&gt;
&lt;p&gt;Let's take a look at the ARP table on the client to illustrate what is happening.&lt;/p&gt;
&lt;p&gt;Master is active:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;(10.0.1.140) at 0:c:29:a7:7d:f2 on en0 ifscope [ethernet]
(10.0.1.141) at 0:c:29:a7:7d:f2 on en0 ifscope [ethernet]
(10.0.1.142) at 0:c:29:b2:5b:7c on en0 ifscope [ethernet]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Master has failed and backup has taken over:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;(10.0.1.140) at 0:c:29:b2:5b:7c on en0 ifscope [ethernet]
(10.0.1.141) at 0:c:29:a7:7d:f2 on en0 ifscope [ethernet]
(10.0.1.142) at 0:c:29:b2:5b:7c on en0 ifscope [ethernet]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Notice how the MAC-address of the virtual IP (.140) is now that of the backup router.&lt;/p&gt;
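&lt;p&gt;You can watch this from the client side by checking which MAC-address the virtual IP currently resolves to. A minimal sketch, using the sample output above in place of live 'arp -an' output:&lt;/p&gt;

```shell
# Extract the MAC-address behind the virtual IP (10.0.1.140);
# the sample lines from above stand in for live 'arp -an' output.
arp_output='(10.0.1.140) at 0:c:29:b2:5b:7c on en0 ifscope [ethernet]
(10.0.1.141) at 0:c:29:a7:7d:f2 on en0 ifscope [ethernet]'

vip_mac=$(printf '%s\n' "$arp_output" | awk '$1 == "(10.0.1.140)" {print $3}')
echo "virtual IP is currently served by $vip_mac"
# prints: virtual IP is currently served by 0:c:29:b2:5b:7c
```

&lt;p&gt;Run periodically, a change in that MAC-address is exactly the failover shown above.&lt;/p&gt;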
&lt;h3&gt;Configuring VRRP on Linux&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Configure static IP-addresses on the primary and backup router. Do not configure the virtual IP on any of the interfaces. In my test environment, I used 10.0.1.141 for the master and 10.0.1.142 for the backup router.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Because the virtual IP-address is not configured on any of the interfaces, Linux will not reply to any packets destined for this IP. This behaviour needs to be changed or VRRP will not work. Edit /etc/sysctl.conf and add this line:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;net.ipv4.ip_nonlocal_bind=1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run this command to activate this setting:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sysctl -p
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install Keepalived&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;apt-get install keepalived
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Sample configuration of /etc/keepalived/keepalived.conf for MASTER&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;vrrp_instance VI_1 {
    interface eth0
    state MASTER
    virtual_router_id 51
    priority 101

    authentication {
        auth_type AH
        auth_pass monkey
    }

    virtual_ipaddress {
        10.0.1.140
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Sample configuration of /etc/keepalived/keepalived.conf for SLAVE&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;vrrp_instance VI_1 {
    interface eth0
    state BACKUP
    virtual_router_id 51
    priority 100

    authentication {
        auth_type AH
        auth_pass monkey
    }

    virtual_ipaddress {
        10.0.1.140
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Start keepalived:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;service keepalived start
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The only configuration difference regarding keepalived between the master and the standby router is the 'priority' setting. The master server should have a higher priority than the backup router (101 vs. 100).&lt;/p&gt;
&lt;p&gt;As there can be multiple VRRP configurations active within the same subnet, it is important that you make sure that you set a unique virtual_router_id. &lt;/p&gt;
&lt;p&gt;&lt;em&gt;Please do not forget to set your own password in case you enable authentication.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;VRRP failover example&lt;/h3&gt;
&lt;p&gt;This is what happens if the master is shutdown:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;64 bytes from 10.0.1.140: icmp_seq=148 ttl=64 time=0.583 ms
64 bytes from 10.0.1.140: icmp_seq=149 ttl=64 time=0.469 ms
64 bytes from 10.0.1.140: icmp_seq=150 ttl=64 time=0.267 ms
Request timeout for icmp_seq 151
Request timeout for icmp_seq 152
Request timeout for icmp_seq 153
Request timeout for icmp_seq 154
64 bytes from 10.0.1.140: icmp_seq=155 ttl=64 time=0.668 ms
64 bytes from 10.0.1.140: icmp_seq=156 ttl=64 time=0.444 ms
64 bytes from 10.0.1.140: icmp_seq=157 ttl=64 time=0.510 ms
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;After about five seconds (default) the standby router takes over and starts responding to the virtual IP. &lt;/p&gt;
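&lt;p&gt;The failover time is governed by the advertisement interval: keepalived sends a VRRP advertisement every 'advert_int' seconds (1 by default) and the backup takes over after roughly three missed advertisements. Lowering it speeds up failover at the cost of more multicast traffic. A sketch of the relevant setting inside the vrrp_instance block shown earlier:&lt;/p&gt;

```
vrrp_instance VI_1 {
    # seconds between advertisements; the default is 1
    advert_int 1
    ...
}
```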
&lt;h3&gt;Security&lt;/h3&gt;
&lt;p&gt;A host within the same subnet could just spoof VRRP packets and disrupt service. &lt;/p&gt;
&lt;p&gt;An attack on VRRP is not just theoretical. A tool called &lt;a href="https://www.youtube.com/watch?v=Eq91ETxeJeQ"&gt;Loki&lt;/a&gt; allows you to take over the virtual IP-address and become the master router. This will allow you to create a DoS or sniff all traffic.  &lt;/p&gt;
&lt;p&gt;VRRP security is also discussed in &lt;a href="https://media.blackhat.com/bh-us-10/whitepapers/Rey_Mende/BlackHat-USA-2010-Mende-Graf-Rey-loki_v09-wp.pdf"&gt;this document&lt;/a&gt; from the Loki developers.&lt;/p&gt;
&lt;p&gt;According to &lt;a href="https://tools.ietf.org/html/rfc3768"&gt;rfc3768&lt;/a&gt; authentication and security has been deliberately omitted (see section 10 Security Considerations) from newer versions of the VRRP protocol RFC. &lt;/p&gt;
&lt;p&gt;The main argument is that any malicious device in a layer 2 network can stage similar attacks based on ARP-spoofing and ARP-poisoning, so with the foundation already insecure, why care about VRRP? &lt;/p&gt;
&lt;p&gt;I understand the reasoning but I disagree. If you do have a secure Layer 2 environment, VRRP becomes the weakest link. Either you really need to filter out VRRP traffic originating from untrusted ports/devices, or implement security on VRRP itself.&lt;/p&gt;
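&lt;p&gt;On the routers themselves, filtering can be as simple as accepting VRRP (IP protocol 112) only from the peer router and dropping it from everyone else. A sketch for the master, using the addresses from the test setup above:&lt;/p&gt;

```
# Accept VRRP advertisements only from the backup router (10.0.1.142),
# drop IP protocol 112 from any other host.
iptables -A INPUT -p 112 -s 10.0.1.142 -j ACCEPT
iptables -A INPUT -p 112 -j DROP
```

&lt;p&gt;On a managed switch, the equivalent would be filtering protocol 112 on untrusted ports.&lt;/p&gt;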
&lt;h3&gt;Attacking VRRP with Loki&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;I have actually used Loki on VRRP and I can confirm it works (at least) as a Denial-of-Service tool.&lt;/em&gt; &lt;/p&gt;
&lt;p&gt;I used Kali (Formerly known as Back-Track) and installed Loki according to &lt;a href="https://forums.kali.org/showthread.php?4768-Installing-loki-on-kali-linux-amd64"&gt;these instructions&lt;/a&gt;. Please note the bottom of the page.&lt;/p&gt;
&lt;p&gt;What I did on Kali Linux: &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;apt-get install python-dpkt python-dumbnet
wget http://c0decafe.de/svn/codename_loki/packages/kali-1/pylibpcap_0.6.2-1_amd64.deb
wget http://c0decafe.de/svn/codename_loki/packages/kali-1/loki_0.2.7-1_amd64.deb
dpkg -i pylibpcap_0.6.2-1_amd64.deb
dpkg -i loki_0.2.7-1_amd64.deb
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Then just run:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;loki.py
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;img alt="vrrp attack" src="https://louwrentius.com/static/images/vrrp-attack.png" /&gt;&lt;/p&gt;
&lt;p&gt;This is only an issue if you already protected yourself against ARP- and IP-spoofing attacks.&lt;/p&gt;
&lt;h3&gt;Protecting VRRP against attacks&lt;/h3&gt;
&lt;p&gt;Keepalived offers two authentication types regarding VRRP: &lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;PASS (plain-text password)&lt;/li&gt;
&lt;li&gt;AH (IPSEC-AH (authentication header))&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The PASS option is totally useless from a security perspective.&lt;/p&gt;
&lt;p&gt;&lt;img alt="pass authentication" src="https://louwrentius.com/static/images/vrrp-auth-pass.png" /&gt;&lt;/p&gt;
&lt;p&gt;As you can see, the password 'monkey' is visible and easily obtained from the VRRP multicast advertisements. So to me, it does not make sense to use this option. Loki just replayed the packets and could still create a DoS.&lt;/p&gt;
&lt;p&gt;So we are left with IPSEC-AH, which is more promising as it actually does some cryptography using the IPSEC protocol, so there is no clear-text password to be captured. I'm not a crypto expert, so I'm not sure how secure this implementation is. Here is &lt;a href="http://www.keepalived.org/draft-ietf-vrrp-ipsecah-spec-00.txt"&gt;some more info on IPSEC-AH&lt;/a&gt; as implemented in Keepalived.&lt;/p&gt;
&lt;p&gt;&lt;img alt="AH authentication" src="https://louwrentius.com/static/images/vrrp-auth-ah.png" /&gt;&lt;/p&gt;
&lt;p&gt;If I configure AH authentication, the Loki tool does not recognise the VRRP traffic anymore and it's no longer possible to use this simple script-kiddie-friendly tool to attack your VRRP setup.&lt;/p&gt;
&lt;p&gt;IPSEC-AH actually introduces an IPSEC-AH header between the IP section and the VRRP section of a packet, so it changes the packet format, which probably makes it unrecognisable for Loki.&lt;/p&gt;
&lt;h3&gt;Running VRRP multicast traffic on different network segments&lt;/h3&gt;
&lt;p&gt;It has been &lt;a href="http://www.reddit.com/r/sysadmin/comments/2r1qho/configuring_attacking_and_securing_vrrp_on_linux/"&gt;pointed out to me by XANi_&lt;/a&gt; that it is possible with Keepalived to keep the virtual IP-address and the VRRP multicast traffic in different networks. Clients will therefore not be able to attack the VRRP traffic. &lt;/p&gt;
&lt;p&gt;In this case, security on the VRRP traffic is not relevant anymore and you don't really need to worry about authentication, assuming that untrusted devices don't have access to that 'VRRP' VLAN. &lt;/p&gt;
&lt;p&gt;The first step is that both routers should have their physical interface in the same (untagged) VLAN. The trick is then to specify the virtual IP-addresses in the appropriate VLANs like this example:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;virtual_ipaddress {

    10.0.1.1/24 dev eth0.100
    10.0.2.1/24 dev eth0.200
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;In this example, virtual IP 10.0.1.1 is tied to VLAN 100 and 10.0.2.1 is tied to VLAN 200. &lt;/p&gt;
&lt;p&gt;If the physical router interfaces are present in the untagged VLAN 50 (example), the VRRP multicast traffic will only be observed in this VLAN.&lt;/p&gt;
&lt;p&gt;Some &lt;a href="http://comments.gmane.org/gmane.linux.keepalived.devel/3001"&gt;background information&lt;/a&gt; on working with VLANs and Keepalived.&lt;/p&gt;
&lt;h3&gt;Firewall configuration&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Update August 2018&lt;/em&gt;:&lt;/p&gt;
&lt;p&gt;I had problems running VRRP on Red Hat / CentOS. Since I use AH authentication, the protocol is not seen as VRRP but (as TCPDUMP shows) "AH". This is why you need to create a service for Firewalld and enable it for the appropriate zone.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create a file called "VRRP.xml" in /etc/firewalld/services&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="cp"&gt;&amp;lt;?xml version=&amp;quot;1.0&amp;quot; encoding=&amp;quot;utf-8&amp;quot;?&amp;gt;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;&amp;lt;service&amp;gt;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nt"&gt;&amp;lt;short&amp;gt;&lt;/span&gt;VRRP&lt;span class="nt"&gt;&amp;lt;/short&amp;gt;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nt"&gt;&amp;lt;description&amp;gt;&lt;/span&gt;Virtual&lt;span class="w"&gt; &lt;/span&gt;Router&lt;span class="w"&gt; &lt;/span&gt;Redundancy&lt;span class="w"&gt; &lt;/span&gt;Protocol&lt;span class="nt"&gt;&amp;lt;/description&amp;gt;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="nt"&gt;&amp;lt;port&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="na"&gt;protocol=&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;ah&amp;quot;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="na"&gt;port=&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;&amp;quot;&lt;/span&gt;&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nt"&gt;&amp;lt;/service&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Enable VRRP (select the appropriate zone for your interface):&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;    sudo firewall-cmd --zone=public --permanent --add-service=VRRP
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Reload the configuration&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;    sudo firewall-cmd --reload
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;ol start="4"&gt;
&lt;li&gt;check that the service is active with:&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;    sudo firewall-cmd --zone=public --list-services
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Closing words&lt;/h3&gt;
&lt;p&gt;VRRP can provide a very simple solution to setup a high-availability router configuration. Security can be a real issue if untrusted devices reside in the same layer 2 network so implementing security with IPSEC-AH or network segmentation is recommended.  &lt;/p&gt;</content><category term="Networking"></category><category term="VRRP"></category></entry><entry><title>Systemd Forward Secure Sealing of system logs makes little sense</title><link href="https://louwrentius.com/systemd-forward-secure-sealing-of-system-logs-makes-little-sense.html" rel="alternate"></link><published>2014-11-22T12:00:00+01:00</published><updated>2014-11-22T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2014-11-22:/systemd-forward-secure-sealing-of-system-logs-makes-little-sense.html</id><summary type="html">&lt;p&gt;Systemd is a more modern replacement of sysvinit and its in the process of being integrated into most mainstream Linux distributions. I'm a bit troubled by one of it's features.&lt;/p&gt;
&lt;p&gt;I'd like to discuss the &lt;a href="https://plus.google.com/+LennartPoetteringTheOneAndOnly/posts/g1E6AxVKtyc"&gt;Forward Secure Sealing (FSS)&lt;/a&gt; feature for log files that is part of systemd. FSS cryptographically …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Systemd is a more modern replacement of sysvinit and it's in the process of being integrated into most mainstream Linux distributions. I'm a bit troubled by one of its features.&lt;/p&gt;
&lt;p&gt;I'd like to discuss the &lt;a href="https://plus.google.com/+LennartPoetteringTheOneAndOnly/posts/g1E6AxVKtyc"&gt;Forward Secure Sealing (FSS)&lt;/a&gt; feature  for log files that is part of systemd. FSS cryptographically signs the local system logs, so you can check if log files have been altered. This should make it more difficult for an attacker to hide his or her tracks. &lt;/p&gt;
&lt;p&gt;Regarding log files, an attacker can do two things:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;delete them&lt;/li&gt;
&lt;li&gt;alter them (remove / change incriminating lines)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The FSS feature does not prevent any of these risks. But it does help you detect that something fishy is going on, provided you verify the signatures regularly. So basically FSS acts a bit like Tripwire. &lt;/p&gt;
&lt;p&gt;FSS can only tell you whether or not a log file has been changed. It cannot tell you anything else. More specifically, it cannot tell you the reason why. So I wonder how valuable this feature is.&lt;/p&gt;
&lt;p&gt;There is also something else. Signing (sealing) a log file is done &lt;a href="http://lwn.net/Articles/512895/"&gt;every 15 minutes by default&lt;/a&gt;. This gives an attacker ample time to alter or delete the most recent log events, often exactly those events that need to be altered/deleted. Even lowering this number to 10 seconds would allow an attacker to delete (some) initial activities using automation. So how useful is this?&lt;/p&gt;
&lt;p&gt;What may help in determining what happened to a system is the unaltered log contents themselves. What FSS cannot do by principle is protect the actual contents of the log file. If you want to preserve log events the only secure option is to send them to an external log host (assumed not accessible by an attacker).&lt;/p&gt;
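&lt;p&gt;For the traditional syslog daemons, forwarding everything to a remote host is a one-line change in /etc/rsyslog.conf (the hostname below is a placeholder):&lt;/p&gt;

```
# Forward all messages to the log host over TCP (a single @ means UDP):
*.* @@loghost.example.com:514
```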
&lt;p&gt;However, to my surprise, &lt;a href="https://plus.google.com/+LennartPoetteringTheOneAndOnly/posts/g1E6AxVKtyc"&gt;FSS is presented as an alternative to external logging&lt;/a&gt;. Quote from Lennart Poettering:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Traditionally this problem has been dealt with by having an external secured log server 
to instantly log to, or even a local line printer directly connected to the log system. 
But these solutions are more complex to set up, require external infrastructure and have 
certain scalability problems. With FSS we now have a simple alternative that works without 
any external infrastructure.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This quote is quite troubling because it fails to acknowledge one of the raisons d'être of external log hosts. It seems to suggest that FSS provides an alternative to external logging, where in fact it does not and cannot do so on principle. It can never address the fact that an attacker can alter or delete logs, whereas external logging can mitigate this risk. &lt;/p&gt;
&lt;p&gt;It seems to me that systemd now also wants to play the role of some crude intrusion detection system. It feels a bit like scope creep to me. &lt;/p&gt;
&lt;p&gt;Personally I just wonder what more useful features could have been implemented instead of allowing you to transfer a log file verification key using a QR code to your smartphone (What the hell?). &lt;/p&gt;
&lt;p&gt;This whole observation is not original, in the comments of the systemd author's blogpost, the same argument is made by Andrew Wyatt (two years earlier). The response from the systemd author was to block him. (see the comments of Lennart Poettering's blogpost I linked to earlier). &lt;/p&gt;
&lt;p&gt;Update: Andrew Wyatt behaved a bit immaturely towards Lennart Poettering at first, so I understand some resentment from his side, but Andrew's criticism was valid and never addressed by him.&lt;/p&gt;
&lt;p&gt;If the systemd author would just have implemented sending log events to an external log server, that would have been way more useful security-wise, I think. Until then, &lt;a href="http://stackoverflow.com/questions/23082512/coreos-systemd-journal-remote-logging"&gt;this may do&lt;/a&gt;...&lt;/p&gt;</content><category term="Security"></category><category term="Logging"></category></entry><entry><title>Getting the Sitecom AC600 Wi-Fi adapter running on Linux</title><link href="https://louwrentius.com/getting-the-sitecom-ac600-wi-fi-adapter-running-on-linux.html" rel="alternate"></link><published>2014-11-01T12:00:00+01:00</published><updated>2014-11-01T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2014-11-01:/getting-the-sitecom-ac600-wi-fi-adapter-running-on-linux.html</id><summary type="html">&lt;p&gt;&lt;strong&gt;TL;DR Yes it works with some modifications of the driver source.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;A USB Wi-Fi adapter I used with a Raspberry Pi broke as I dropped it on the floor, so I had to replace it. I just went to a local shop and bought the &lt;a href="https://www.sitecom.com/en/wi-fi-usb-adapter-ac600/wla-3100/p/1635"&gt;Sitecom AC600&lt;/a&gt; adapter as …&lt;/p&gt;</summary><content type="html">&lt;p&gt;&lt;strong&gt;TL;DR Yes it works with some modifications of the driver source.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;A USB Wi-Fi adapter I used with a Raspberry Pi broke as I dropped it on the floor, so I had to replace it. I just went to a local shop and bought the &lt;a href="https://www.sitecom.com/en/wi-fi-usb-adapter-ac600/wla-3100/p/1635"&gt;Sitecom AC600&lt;/a&gt; adapter as that's what they had available (with support for 5Ghz networking). &lt;/p&gt;
&lt;p&gt;I had some hope that I would just plug it in and it would 'just work™'. But no. Linux. In the end, the device cost me 30 euros including taxes, but the time spent getting it to work may have made this a &lt;em&gt;very&lt;/em&gt; expensive USB Wi-Fi dongle. And it's funny to think that the Wi-Fi dongle costs almost the same as the Raspberry Pi board itself.&lt;/p&gt;
&lt;p&gt;But I did get it working and I'd like to show you how.&lt;/p&gt;
&lt;p&gt;It started with a google for 'sitecom ac600 linux' which landed me on &lt;a href="https://wikidevi.com/wiki/Sitecom_WLA-3100"&gt;this page&lt;/a&gt;. This page told me the device uses a MediaTek chipset (MT7610U). &lt;/p&gt;
&lt;p&gt;So you need to download the &lt;a href="http://www.mediatek.com/en/downloads/"&gt;driver from MediaTek&lt;/a&gt;. Here is a &lt;a href="http://s3.amazonaws.com/mtk.cfs/Downloads/linux/mt7610u_wifi_sta_v3002_dpo_20130916.tar.bz2"&gt;direct link&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;So you may do something like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;cd /usr/src
wget http://s3.amazonaws.com/mtk.cfs/Downloads/linux/mt7610u_wifi_sta_v3002_dpo_20130916.tar.bz2
tar xjf mt7610u_wifi_sta_v3002_dpo_20130916.tar.bz2
cd mt7610u_wifi_sta_v3002_dpo_20130916
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Now you would hope that it's just like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;make
make install
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;And we're happy right? Linux FTW! Well, NO! We're using Linux so we have to work for stuff that works right out of the box on Windows and Mac OS.&lt;/p&gt;
&lt;p&gt;So we first start with editing "include/os/rt_linux.h" and go to line ~279. There we make sure that we edit the struct like this: &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;    typedef struct _OS_FS_INFO_
 {
    kuid_t              fsuid;
    kgid_t              fsgid;
    mm_segment_t    fs;
 } OS_FS_INFO;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Basically, the int types are replaced by kuid_t and kgid_t; otherwise, compilation will abort with an error. &lt;/p&gt;
&lt;p&gt;Of course, the Sitecom AC600 has a USB identifier that is unknown to the driver, so after compilation, it still doesn't work. &lt;/p&gt;
&lt;p&gt;lsusb output: &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Bus 001 Device 004: ID 0df6:0075 Sitecom Europe B.V.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;So google landed me on &lt;a href="http://ubuntuforums.org/showthread.php?t=2228244&amp;amp;page=2&amp;amp;p=13100663#post13100663"&gt;this&lt;/a&gt; nice thread by 'praseodym' that explained the remaining steps. I stole the info below from this thread.&lt;/p&gt;
&lt;p&gt;So while we are in the source directory of the module, we are going to edit "common/rtusb_dev_id.c" and add &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;{USB_DEVICE(0x0DF6,0x0075)}, /* MT7610U */
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
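&lt;p&gt;The vendor and product IDs come straight from the lsusb output above; a small sketch of the transformation (the script is my own illustration, not part of the driver):&lt;/p&gt;

```shell
# Turn the 'ID 0df6:0075' pair from lsusb into a USB_DEVICE() entry.
lsusb_line='Bus 001 Device 004: ID 0df6:0075 Sitecom Europe B.V.'
vendor=$(printf '%s\n' "$lsusb_line" | sed -n 's/.*ID \([0-9a-f]*\):.*/\1/p' | tr 'a-f' 'A-F')
product=$(printf '%s\n' "$lsusb_line" | sed -n 's/.*ID [0-9a-f]*:\([0-9a-f]*\).*/\1/p' | tr 'a-f' 'A-F')
printf '{USB_DEVICE(0x%s,0x%s)}, /* MT7610U */\n' "$vendor" "$product"
# prints: {USB_DEVICE(0x0DF6,0x0075)}, /* MT7610U */
```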

&lt;p&gt;So this will make the AC600 get recognised by the driver. Now we also need to edit "os/linux/config.mk" and change these lines like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;HAS_WPA_SUPPLICANT=y
HAS_NATIVE_WPA_SUPPLICANT_SUPPORT=y
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;So no, we are not ready yet. I'm not 100 percent sure that this is still required, but I found &lt;a href="http://superuser.com/questions/738096/how-to-install-mediatek-mt7610u-rt2860-driver"&gt;this nice thread in Italian&lt;/a&gt; with a very small comment by 'shoe rat' tucked away at the end that may make the difference between a working device and a broken one.&lt;/p&gt;
&lt;p&gt;We need to edit the file "os/linux/config.mk" and go to line ~663. Then, around that line, change&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;CHIPSET_DAT = 2860
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;to:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;CHIPSET_DAT = 2870
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Yes. Finally! Now you can do:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;make
make install
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Imagine that such a 'make' takes about 20 minutes on a Raspberry Pi. No joke.&lt;/p&gt;
&lt;p&gt;Now you can load the module:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;modprobe mt7650u_sta
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You should see something like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@raspberrypi:/usr/src# lsmod
Module                  Size  Used by
snd_bcm2835            16181  0 
snd_pcm                63684  1 snd_bcm2835
snd_page_alloc          3604  1 snd_pcm
snd_seq                43926  0 
snd_seq_device          4981  1 snd_seq
snd_timer              15936  2 snd_pcm,snd_seq
snd                    44915  5 snd_bcm2835,snd_timer,snd_pcm,snd_seq,snd_seq_device
soundcore               4827  1 snd
mt7650u_sta           895786  1 
pl2303                  7951  0 
usbserial              19536  1 pl2303
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You should now see a 'ra0' device when entering ifconfig -a or iwconfig, and you can configure it like any other wireless device (out of scope for this article). &lt;/p&gt;
&lt;p&gt;So once up-and-running, the Sitecom AC600 works fine under Linux and even sees and connects to 5 GHz networks. But not without a caveat, of course. I needed to configure a 5 GHz channel below 100 (I chose 48) on my Apple Airport Extreme, or the Wi-Fi dongle would not see the 5 GHz network and could not connect to it.&lt;/p&gt;
&lt;p&gt;I hope this information helps somebody else.&lt;/p&gt;
&lt;p&gt;With the release of &lt;a href="https://groups.google.com/a/zfsonlinux.org/forum/#!topic/zfs-announce/Lj7xHtRVOM4"&gt;ZoL 0.6.3&lt;/a&gt;, a brand new 'ZFS Event Daemon' or &lt;a href="https://www.youtube.com/watch?v=y7Yp2L6c2KM"&gt;ZED&lt;/a&gt; has been introduced. &lt;/p&gt;
&lt;p&gt;I could not find much information …&lt;/p&gt;</summary><content type="html">&lt;p&gt;If something goes wrong with my zpool, I'd like to be notified by email. On Linux using MDADM, the MDADM daemon took care of that.&lt;/p&gt;
&lt;p&gt;With the release of &lt;a href="https://groups.google.com/a/zfsonlinux.org/forum/#!topic/zfs-announce/Lj7xHtRVOM4"&gt;ZoL 0.6.3&lt;/a&gt;, a brand new 'ZFS Event Daemon' or &lt;a href="https://www.youtube.com/watch?v=y7Yp2L6c2KM"&gt;ZED&lt;/a&gt; has been introduced. &lt;/p&gt;
&lt;p&gt;I could not find much information about it, so consider this article my notes on this new service.&lt;/p&gt;
&lt;p&gt;If you want to receive alerts, there is only one requirement: you must set up an MTA on your machine, which is outside the scope of this article.&lt;/p&gt;
&lt;p&gt;When you install ZoL, the ZED daemon is installed automatically and will start on boot. &lt;/p&gt;
&lt;p&gt;The configuration file for ZED can be found here: &lt;em&gt;/etc/zfs/zed.d/zed.rc&lt;/em&gt;. Just uncomment the "ZED_EMAIL=" section and fill out your email address. Don't forget to restart the service.&lt;/p&gt;
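&lt;p&gt;For reference, the end result in zed.rc looks something like this (the address is of course a placeholder):&lt;/p&gt;

```shell
# /etc/zfs/zed.d/zed.rc (excerpt), after uncommenting and filling out:
ZED_EMAIL="root@example.com"
```

&lt;p&gt;Restart ZED afterwards, for example with &lt;em&gt;service zed restart&lt;/em&gt; (the exact command depends on your distribution).&lt;/p&gt;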
&lt;p&gt;ZED seems to hook into the zpool event log that is kept in the kernel and monitors these events in real-time.&lt;/p&gt;
&lt;p&gt;You can see those events yourself:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@debian:/etc/zfs/zed.d# zpool events
TIME                           CLASS
Aug 29 2014 16:53:01.872269662 resource.fs.zfs.statechange
Aug 29 2014 16:53:01.873291940 resource.fs.zfs.statechange
Aug 29 2014 16:53:01.962528911 ereport.fs.zfs.config.sync
Aug 29 2014 16:58:40.662619739 ereport.fs.zfs.scrub.start
Aug 29 2014 16:58:40.670865689 ereport.fs.zfs.checksum
Aug 29 2014 16:58:40.671888655 ereport.fs.zfs.checksum
Aug 29 2014 16:58:40.671905612 ereport.fs.zfs.checksum
...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You can see that a scrub was started and that checksum errors were discovered. A few seconds later I received the first email:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;A ZFS checksum error has been detected:

  eid: 5
 host: debian
 time: 2014-08-29 16:58:40+0200
 pool: storage
 vdev: disk:/dev/sdc1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;And soon thereafter:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;A ZFS pool has finished scrubbing:

  eid: 908
 host: debian
 time: 2014-08-29 16:58:51+0200
 pool: storage
state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
    attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
    using &amp;#39;zpool clear&amp;#39; or replace the device with &amp;#39;zpool replace&amp;#39;.
  see: http://zfsonlinux.org/msg/ZFS-8000-9P
 scan: scrub repaired 100M in 0h0m with 0 errors on Fri Aug 29 16:58:51 2014
config:

    NAME        STATE     READ WRITE CKSUM
    storage     ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        sdb     ONLINE       0     0     0
        sdc     ONLINE       0     0   903

errors: No known data errors
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Awesome!&lt;/p&gt;
&lt;p&gt;The ZED daemon executes commands based on the &lt;em&gt;event class&lt;/em&gt;. So it can do more than just send emails, you can customise different actions based on the event class. The event class can be seen in the &lt;em&gt;zpool events&lt;/em&gt; output.&lt;/p&gt;
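&lt;p&gt;To illustrate the idea, here is a minimal sketch of a custom event handler script (a 'zedlet'). The details are based on the stock scripts shipped in /etc/zfs/zed.d: ZED matches scripts by event class prefix and passes event details through ZEVENT_* environment variables.&lt;/p&gt;

```shell
#!/bin/sh
# Sketch of a custom zedlet: ZED invokes scripts in /etc/zfs/zed.d and
# passes event details via ZEVENT_* environment variables.
handle_event() {
    printf 'zfs event: eid=%s class=%s pool=%s\n' \
        "${ZEVENT_EID:-?}" "${ZEVENT_CLASS:-?}" "${ZEVENT_POOL:-?}"
}

# Demo with the variables ZED would normally set:
ZEVENT_EID=5
ZEVENT_CLASS=ereport.fs.zfs.checksum
ZEVENT_POOL=storage
handle_event
```

&lt;p&gt;A real zedlet would do something more useful than printing, such as logging to syslog or kicking off a hot spare replacement.&lt;/p&gt;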
&lt;p&gt;One of the more interesting features is automatic replacement of a defective drive with a hot spare, so full fault tolerance is restored as soon as possible. &lt;/p&gt;
&lt;p&gt;I've not been able to get this to work. The ZED scripts would not automatically replace a failed/faulted drive. &lt;/p&gt;
&lt;p&gt;There seem to be some &lt;a href="https://github.com/zfsonlinux/zfs/pull/2085"&gt;known issues&lt;/a&gt;. The fixes seem to be in a pending pull request.&lt;/p&gt;
&lt;p&gt;Just to make sure I got alerted, I've simulated the ZED configuration for my production environment in a VM. &lt;/p&gt;
&lt;p&gt;I simulated a drive failure by overwriting part of a disk with dd, but the result was that I received one email for every checksum error. With thousands of checksum errors, I had to clear 1000+ emails from my inbox. &lt;/p&gt;
&lt;p&gt;It turned out that this option, which is commented out by default, was not enabled:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ZED_EMAIL_INTERVAL_SECS=&amp;quot;3600&amp;quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This option implements a cool-down period where an event is just reported once and suppressed afterwards until the interval expires. &lt;/p&gt;
&lt;p&gt;It would be best if this option were enabled by default.&lt;/p&gt;
&lt;p&gt;The ZED authors acknowledge that ZED is a bit rough around the edges, but it sends out alerts consistently and that's what I was looking for, so I'm happy.&lt;/p&gt;</content><category term="ZFS"></category><category term="ZFS event daemon"></category></entry><entry><title>Installation of ZFS on Linux hangs on Debian Wheezy</title><link href="https://louwrentius.com/installation-of-zfs-on-linux-hangs-on-debian-wheezy.html" rel="alternate"></link><published>2014-08-29T12:00:00+02:00</published><updated>2014-08-29T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2014-08-29:/installation-of-zfs-on-linux-hangs-on-debian-wheezy.html</id><summary type="html">&lt;p&gt;&lt;strong&gt;This article is no longer relevant.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;After a fresh net-install of Debian Wheezy, I was unable to compile the ZFS on Linux kernel module. I had installed &lt;em&gt;build-essential&lt;/em&gt; with &lt;em&gt;apt-get install build-essential&lt;/em&gt;, but that wasn't enough.&lt;/p&gt;
&lt;p&gt;The &lt;em&gt;apt-get install debian-zfs&lt;/em&gt; command would just hang. &lt;/p&gt;
&lt;p&gt;I noticed a 'configure' process and I killed it …&lt;/p&gt;</summary><content type="html">&lt;p&gt;&lt;strong&gt;This article is no longer relevant.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;After a fresh net-install of Debian Wheezy, I was unable to compile the ZFS on Linux kernel module. I had installed &lt;em&gt;build-essential&lt;/em&gt; with &lt;em&gt;apt-get install build-essential&lt;/em&gt;, but that wasn't enough.&lt;/p&gt;
&lt;p&gt;The &lt;em&gt;apt-get install debian-zfs&lt;/em&gt; command would just hang. &lt;/p&gt;
&lt;p&gt;I noticed a 'configure' process and I killed it, and after a few seconds, the installer continued after spewing out this error:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Building initial module for 3.2.0-4-amd64
Error! Bad return status for module build on kernel: 3.2.0-4-amd64 (x86_64)
Consult /var/lib/dkms/zfs/0.6.3/build/make.log for more information.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;So I ran ./configure manually inside the mentioned directory and then I got this error:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;checking for zlib.h... no
configure: error: in `/var/lib/dkms/zfs/0.6.3/build&amp;#39;:
configure: error: 
    *** zlib.h missing, zlib-devel package required
See `config.log&amp;#39; for more details
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;So I ran &lt;em&gt;apt-get install zlib1g-dev&lt;/em&gt; and no luck:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;checking for uuid/uuid.h... no
configure: error: in `/var/lib/dkms/zfs/0.6.3/build&amp;#39;:
configure: error: 
    *** uuid/uuid.h missing, libuuid-devel package required
See `config.log&amp;#39; for more details
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I searched a bit online and then I found &lt;a href="http://osdir.com/ml/zfs-discuss/2014-04/msg00330.html"&gt;this link&lt;/a&gt; that listed some additional packages that may be missing and I installed them all with:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;apt-get install zlib1g-dev uuid-dev libblkid-dev libselinux-dev parted
lsscsi wget
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This time the ./configure went fine and I could manually &lt;em&gt;make install&lt;/em&gt; the kernel module and import my existing pool.&lt;/p&gt;</content><category term="ZFS"></category><category term="ZFS Wheezy"></category></entry><entry><title>Please use ZFS with ECC Memory</title><link href="https://louwrentius.com/please-use-zfs-with-ecc-memory.html" rel="alternate"></link><published>2014-08-27T12:00:00+02:00</published><updated>2014-08-27T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2014-08-27:/please-use-zfs-with-ecc-memory.html</id><summary type="html">&lt;p&gt;In this blogpost I argue why it's strongly recommended to use ZFS with ECC memory when building a NAS. I would argue that if you do not use ECC memory, it's reasonable to also forgo on ZFS altogether and use any (legacy) file system that suits your needs.&lt;/p&gt;
&lt;h3&gt;Why ZFS …&lt;/h3&gt;</summary><content type="html">&lt;p&gt;In this blogpost I argue why it's strongly recommended to use ZFS with ECC memory when building a NAS. I would argue that if you do not use ECC memory, it's reasonable to forgo ZFS altogether and use any (legacy) file system that suits your needs.&lt;/p&gt;
&lt;h3&gt;Why ZFS?&lt;/h3&gt;
&lt;p&gt;Many people consider using ZFS when they are planning to build their own NAS.
This is for good reason: ZFS is an excellent choice for a NAS file system. There are many reasons why ZFS is such a fine choice, but the most important one is probably 'data integrity'. Data integrity was one of the &lt;a href="http://www.oracle.com/technetwork/server-storage/solaris11/documentation/oraclesolariszfsstoragemanagement-360232.pdf"&gt;primary design goals&lt;/a&gt; of ZFS.&lt;/p&gt;
&lt;p&gt;ZFS assures that any corrupt data served by the underlying storage system is either detected or - if possible - corrected by using checksums and parity. This is why ZFS is so interesting for NAS builders: it's OK to use inexpensive (consumer) hard drives and solid state drives and not worry about data integrity. &lt;/p&gt;
&lt;p&gt;I will not go into the details, but for completeness I will also state that ZFS can make the difference between losing an entire RAID array and losing just a few files, because of the way it handles read errors as compared to 'legacy' hardware/software RAID solutions.&lt;/p&gt;
&lt;h3&gt;Understanding ECC memory&lt;/h3&gt;
&lt;p&gt;&lt;a href="http://en.wikipedia.org/wiki/ECC_memory"&gt;ECC memory&lt;/a&gt; or Error Correcting Code memory, contains extra parity data so the integrity of the data in memory can be verified and even corrected. ECC memory can correct single bit errors and detect multiple bit errors per word&lt;sup id="fnref:word"&gt;&lt;a class="footnote-ref" href="#fn:word"&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;What's most interesting is how a system with ECC memory reacts to bit errors that cannot be corrected. Because it's how a system with ECC memory responds to uncorrectable bit errors that makes all the difference in the world. &lt;/p&gt;
&lt;p&gt;If multiple bits are corrupted within a single word, the CPU will detect the errors, but will not be able to correct them. When the CPU notices that there are uncorrectable bit errors in memory, it will 
generate an &lt;a href="http://en.wikipedia.org/wiki/Machine-check_exception"&gt;MCE&lt;/a&gt; that will be handled by the operating system. In most cases, this will result in a &lt;em&gt;halt&lt;/em&gt;&lt;sup id="fnref:halt"&gt;&lt;a class="footnote-ref" href="#fn:halt"&gt;2&lt;/a&gt;&lt;/sup&gt; of the system.&lt;/p&gt;
&lt;p&gt;This behaviour will lead to a system crash, but it prevents data corruption. It prevents the bad bits from being processed by the operating system and/or applications where it may wreak havoc.&lt;/p&gt;
&lt;p&gt;ECC memory is standard on all server hardware sold by all major vendors like HP, Dell, IBM, Supermicro and so on. This is for good reason, because &lt;a href="http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf"&gt;memory errors are the norm, not the exception&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;The question is really why not &lt;em&gt;all&lt;/em&gt; computers, including desktop and laptops, use ECC memory instead of non-ECC memory. The most important reason seems to be 'cost'. &lt;/p&gt;
&lt;p&gt;It is more expensive to use ECC memory than non-ECC memory. This is not only because ECC memory itself is more expensive. ECC memory requires a motherboard with support for ECC memory, and these motherboards tend to be more expensive as well. &lt;/p&gt;
&lt;p&gt;Non-ECC memory is reliable enough that you won't have an issue most of the time. And when it does go wrong, you just blame Microsoft or Apple&lt;sup id="fnref:desktopecc"&gt;&lt;a class="footnote-ref" href="#fn:desktopecc"&gt;3&lt;/a&gt;&lt;/sup&gt;. For desktops, the impact of a memory failure is less of an issue than on servers. But remember, your NAS is your own (home) server. There is some &lt;a href="http://research.microsoft.com/pubs/144888/eurosys84-nightingale.pdf"&gt;evidence&lt;/a&gt; that memory errors are abundant&lt;sup id="fnref:mstudy"&gt;&lt;a class="footnote-ref" href="#fn:mstudy"&gt;4&lt;/a&gt;&lt;/sup&gt; on desktop systems. &lt;/p&gt;
&lt;p&gt;The price difference is small enough not to be relevant for businesses, but for the price-conscious consumer, it is a factor. A system based on ECC memory may cost in the range of $150 - $200 more than a system based on non-ECC memory.&lt;/p&gt;
&lt;p&gt;It's up to you if you want to spend this extra money. Why you are advised to do so will be discussed in the next paragraphs.&lt;/p&gt;
&lt;h3&gt;Why ECC memory is important to ZFS&lt;/h3&gt;
&lt;p&gt;ZFS trusts the contents of memory blindly. Please note that ZFS has no mechanisms to cope with bad memory. It is similar to every other file system in this regard. &lt;a href="http://research.cs.wisc.edu/wind/Publications/zfs-corruption-fast10.pdf"&gt;Here is a nice paper&lt;/a&gt; about ZFS and how it handles corrupt memory (it doesn't!).&lt;/p&gt;
&lt;p&gt;In the best case, bad memory corrupts file data and causes a few garbled files. In the worst case, bad memory mangles in-memory ZFS file system (meta) data structures, which may lead to corruption and thus loss of the entire zpool.&lt;/p&gt;
&lt;p&gt;It is important to put this into perspective. There is only a &lt;em&gt;practical&lt;/em&gt; reason why ECC memory is &lt;em&gt;more important&lt;/em&gt; for ZFS as compared to other file systems. Conceptually, ZFS does not require ECC memory any more than any other file system. &lt;/p&gt;
&lt;p&gt;Or let &lt;a href="http://www.open-zfs.org/wiki/User:Mahrens"&gt;Matthew Ahrens&lt;/a&gt;, the co-founder of the ZFS project &lt;a href="http://arstechnica.com/civis/viewtopic.php?f=2&amp;amp;t=1235679&amp;amp;p=26303271#p26303271"&gt;phrase it&lt;/a&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;There&amp;#39;s nothing special about ZFS that requires/encourages the use of ECC RAM more so than 
any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. I would simply say: if you love your data, use ECC RAM. Additionally, use a filesystem that checksums your data, such as ZFS.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Now this is the important part. File systems such as NTFS, EXT4, etc &lt;em&gt;have (data recovery) tools&lt;/em&gt; that may allow you to rescue your files when things go bad due to bad memory. ZFS does not have such tools, if the pool is corrupt, all data must be considered lost, there is no option for recovery. &lt;/p&gt;
&lt;p&gt;So the impact of bad memory can be more devastating on a system with ZFS than on a system with NTFS, EXT4, XFS, etcetera. ZFS may force you to restore your data from backups sooner. Oh, by the way, you do make backups, right?&lt;/p&gt;
&lt;p&gt;I do have a personal concern&lt;sup id="fnref:concern"&gt;&lt;a class="footnote-ref" href="#fn:concern"&gt;5&lt;/a&gt;&lt;/sup&gt;. I have nothing to substantiate this, but my thinking is that since ZFS is a way more advanced and complex file system, it may be more susceptible to the adverse effects of bad memory, compared to legacy file systems. &lt;/p&gt;
&lt;h3&gt;ZFS, ECC memory and data integrity&lt;/h3&gt;
&lt;p&gt;The main reason for using ZFS over legacy file systems is the ability to assure data integrity. But ZFS is only one piece of the data integrity puzzle. The other part of the puzzle is ECC memory. &lt;/p&gt;
&lt;p&gt;ZFS covers the risk of your storage subsystem serving corrupt data. ECC memory covers the risk of corrupt memory. If you leave any of these parts out, you are compromising data integrity.&lt;/p&gt;
&lt;p&gt;If you care about data integrity, you need to use ZFS in combination with ECC memory. If you don't care that much about data integrity, it doesn't really matter if you use either ZFS or ECC memory.&lt;/p&gt;
&lt;p&gt;Please remember that ZFS was developed to assure data integrity in a corporate IT environment, where data integrity is top priority and ECC memory in servers is the norm, a foundation on which ZFS has been built. ZFS is not some magic pixie dust that protects your data under all circumstances. If its requirements are not met, data integrity is not assured.&lt;/p&gt;
&lt;p&gt;ZFS may be free, but data integrity and availability isn't. We spend money on extra hard drives so we can run RAID(Z) and lose one or more hard drives without losing our data. And we have to spend money on ECC-memory, to assure bad memory doesn't have a similar impact. &lt;/p&gt;
&lt;p&gt;This is a bit of an appeal to authority and not to data or reason but I think it's still relevant. &lt;a href="http://www.freenas.org/whats-new/2015/02/a-complete-guide-to-freenas-hardware-design-part-i-purpose-and-best-practices.html"&gt;FreeNAS&lt;/a&gt; is a vendor of a NAS solution that uses ZFS as its foundation. &lt;/p&gt;
&lt;p&gt;They have this to say about ECC memory:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;However if a non-ECC memory module goes haywire, it can cause irreparable damage to your ZFS pool that can cause complete loss of the storage.
...
If it’s imperative that your ZFS based system must always be available, ECC RAM is a requirement. If it’s only some level of annoying (slightly, moderately…) that you need to restore 
your ZFS system from backups, non-ECC RAM will fit the bill.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Hopefully your backups won't contain corrupt data. If you make backups of all data in the first place.&lt;/p&gt;
&lt;p&gt;Many home NAS builders won't be able to afford to backup all data on their NAS, only the most critical data. For example, if you store a large collection of video files, you may accept the risk that you may have to redownload everything. If you can't accept that risk ECC memory is a must. If you are OK with such a scenario, non-ECC memory is OK and you can save a few bucks. It all depends on your needs.&lt;/p&gt;
&lt;p&gt;The risks faced in a business environment don't magically disappear when you apply the same technology at home. The main difference between a business setting and your home is the scale of operation, nothing else. The risks are still relevant and real. &lt;/p&gt;
&lt;p&gt;Things break, it's that simple. And although you may not face the same chances of getting affected by it based on the smaller scale at which you operate at home, your NAS is probably not placed in a temperature and humidity controlled server room. As the temperature rises, so does the risk of memory errors&lt;sup id="fnref:bitsquatting"&gt;&lt;a class="footnote-ref" href="#fn:bitsquatting"&gt;6&lt;/a&gt;&lt;/sup&gt;. And remember, memory may develop spontaneous and temporary defects (random bitflips). If your system is powered on 24/7, there is a higher chance that such a thing will happen.&lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;Personally, I think that even for a home NAS, it's best to use ECC memory regardless of whether you use ZFS. It makes for a more stable hardware platform. If money is a real constraint, it's better to take a look at AMD's offerings than to skip on ECC memory. It's important that if you select AMD hardware, you make sure that both CPU and motherboard support ECC and that it is reported to be working.&lt;/p&gt;
&lt;p&gt;Still, if you decide to use non-ECC memory with ZFS: as long as you are aware of the risks outlined in this blog post and you're OK with that, fine. It's your data and you must decide for yourself what kind of protection and associated cost is reasonable for you.&lt;/p&gt;
&lt;p&gt;When people seek advice on their NAS builds, ECC memory should always be recommended. I think that nobody should create the impression that it's 'safe' for home use not to use ECC RAM, purely seen from a technical and data integrity standpoint. People must understand that they are taking a risk. There is a significant chance that they will never experience problems, but there is no guarantee. Do they accept the consequences if it does go wrong?&lt;/p&gt;
&lt;p&gt;If data integrity is not that important - because the data itself is not critical - I find it &lt;em&gt;perfectly reasonable&lt;/em&gt; that people may decide not to use ECC memory and save a few hundred dollars. In that case, it would &lt;em&gt;also&lt;/em&gt; be perfectly reasonable not to use ZFS either, which also may allow them other file system and RAID options that may better suit their particular needs. &lt;/p&gt;
&lt;h3&gt;Questions and answers&lt;/h3&gt;
&lt;p&gt;Q: When I bought my non-ECC memory, I ran memtest86+ and no errors were found, even after a burn-in test. So I think I'm safe.&lt;/p&gt;
&lt;p&gt;A: No. A memory test with memtest86+ is just a snapshot in time. At that time, when you ran the test, you had the assurance that memory was fine. It could have gone bad right now while you are reading these words. And could be corrupting your data as we speak. So running memtest86+ frequently doesn't really buy you much.&lt;/p&gt;
&lt;p&gt;Q: Did you see that &lt;a href="http://blog.brianmoses.net/2014/03/why-i-chose-non-ecc-ram-for-my-freenas.html"&gt;article by Brian Moses&lt;/a&gt;?&lt;/p&gt;
&lt;p&gt;A: Yes, and I disagree with his views, but I really appreciate the fact that he emphasises that you should really be aware of the risks involved and decide for &lt;em&gt;yourself&lt;/em&gt; what suits your situation. A few points that are not OK in my opinion:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Every bad stick of RAM I’ve experienced came to me that way from the factory and could be found via some burn-in testing.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I've seen some consumer equipment in my lifetime that suddenly developed memory errors after years of perfect operation. Such an argument from personal anecdote should not be used as a basis for decision making. Remember: memory errors are the norm, not the exception. Even at home. Things break, it's that simple. And having equipment running 24/7 doesn't help.&lt;/p&gt;
&lt;p&gt;Furthermore, Brian seems to think that you can mitigate the risk of non-ECC memory by spending money on other stuff, such as off-site backups. Brian himself links to an article that &lt;a href="http://nex7.blogspot.nl/2014/03/ecc-vs-non-ecc-ram-great-debate.html"&gt;rebuts his position on this&lt;/a&gt;. Just for completeness: How valuable is a backup of corrupted data? How do you know which data was corrupted? ZFS won't save you here.&lt;/p&gt;
&lt;p&gt;Q: Should I use ZFS on my laptop or desktop?&lt;/p&gt;
&lt;p&gt;A: Running ZFS on your desktop or laptop is an entirely different use case as compared to a NAS. I see no problems with this, I don't think this discussion applies to desktop/laptop usage. Especially because you are probably creating regular backups of your data to your NAS or a cloud service, right? If there are any memory errors, you will notice soon enough.&lt;/p&gt;
&lt;h3&gt;Updates&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Updated on August 11, 2015 to reflect that ZFS was not designed with ECC in mind. In this regard, it doesn't differ from other file systems.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Updated on April 3rd, 2015 - rewrote large parts of the whole article, to make it a better read.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Updated on January 18th, 2015 - rephrased some sentences. Changed the paragraph 'Inform people and give them a choice' to argue when it would be reasonable not to use ECC memory. Furthermore, I state more explicitly that ZFS itself has no mechanisms to cope with bad RAM.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Updated on February 21st, 2015 - I substantially rewrote this article to give a better perspective on the ZFS + ECC 'debate'.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:word"&gt;
&lt;p&gt;On x64 processors, the size of a &lt;a href="http://en.wikipedia.org/wiki/Word_(computer_architecture)"&gt;word is 64 bits&lt;/a&gt;.&amp;#160;&lt;a class="footnote-backref" href="#fnref:word" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:halt"&gt;
&lt;p&gt;Windows will generate a "blue screen of death" and Linux will generate a "kernel panic".&amp;#160;&lt;a class="footnote-backref" href="#fnref:halt" title="Jump back to footnote 2 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:desktopecc"&gt;
&lt;p&gt;It is very likely that the computer you're using (laptop/desktop) encountered a memory issue this year, but there is no way you can tell. Consumer hardware doesn't have any mechanisms to detect and report memory errors.&amp;#160;&lt;a class="footnote-backref" href="#fnref:desktopecc" title="Jump back to footnote 3 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:mstudy"&gt;
&lt;p&gt;Microsoft has performed a &lt;a href="http://research.microsoft.com/pubs/144888/eurosys84-nightingale.pdf"&gt;study&lt;/a&gt; on one million crash reports they received over a period of 8 months on roughly a million systems in 2008. The result is a 1 in 1700 failure rate for single-bit memory errors in kernel code pages (a tiny subset of total memory).&lt;/p&gt;
&lt;p&gt;
&lt;em&gt;A consequence of confining our analysis to kernel code pages is that we will miss DRAM failures in the vast majority of memory. On a typical machine kernel code pages occupy roughly 30 MB of memory, which is 1.5% of the memory on the average system in our study. [...] since we are capturing DRAM errors in only 1.5% of the address space, it is possible that DRAM error rates across all of DRAM may be far higher than what we have observed.&lt;/em&gt;&amp;#160;&lt;a class="footnote-backref" href="#fnref:mstudy" title="Jump back to footnote 4 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:concern"&gt;
&lt;p&gt;I did not come up with this argument myself.&amp;#160;&lt;a class="footnote-backref" href="#fnref:concern" title="Jump back to footnote 5 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:bitsquatting"&gt;
&lt;p&gt;The absolutely fascinating concept of bitsquatting proved that hotter datacenters &lt;a href="https://www.youtube.com/watch?v=aT7mnSstKGs"&gt;showed more bitflips&lt;/a&gt;.&amp;#160;&lt;a class="footnote-backref" href="#fnref:bitsquatting" title="Jump back to footnote 6 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="ZFS"></category><category term="ZFS"></category><category term="ECC"></category></entry><entry><title>71 TiB DIY NAS based on ZFS on Linux</title><link href="https://louwrentius.com/71-tib-diy-nas-based-on-zfs-on-linux.html" rel="alternate"></link><published>2014-08-02T21:08:00+02:00</published><updated>2014-08-02T21:08:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2014-08-02:/71-tib-diy-nas-based-on-zfs-on-linux.html</id><summary type="html">&lt;p&gt;This is my new 71 TiB DIY NAS. This server is the successor to my six year old, twenty drive &lt;a href="https://louwrentius.com/20-disk-18-tb-raid-6-storage-based-on-debian-linux.html"&gt;18 TB NAS (17 TiB)&lt;/a&gt;. With a storage capacity four times higher than the original and an incredible read (2.5 GB/s)/write (1.9 GB/s) performance, it's …&lt;/p&gt;</summary><content type="html">&lt;p&gt;This is my new 71 TiB DIY NAS. This server is the successor to my six year old, twenty drive &lt;a href="https://louwrentius.com/20-disk-18-tb-raid-6-storage-based-on-debian-linux.html"&gt;18 TB NAS (17 TiB)&lt;/a&gt;. With a storage capacity four times higher than the original and an incredible read (2.5 GB/s)/write (1.9 GB/s) performance, it's a worthy successor. &lt;/p&gt;
&lt;p&gt;&lt;img alt="zfs nas" src="https://louwrentius.com/static/images/zfsnas01.jpg" /&gt;&lt;/p&gt;
&lt;h3&gt;Purpose&lt;/h3&gt;
&lt;p&gt;The purpose of this machine is to store backups and media, primarily video.&lt;/p&gt;
&lt;h3&gt;The specs&lt;/h3&gt;
&lt;table border="0" cellpadding="5" cellspacing="1" &gt;
&lt;tr&gt;&lt;th&gt;Part&lt;/th&gt;&lt;th&gt;Description &lt;/th&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Case&lt;/td&gt;&lt;td &gt;Ri-vier RV-4324-01A&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Processor&lt;/td&gt;&lt;td &gt;Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;RAM&lt;/td&gt;&lt;td &gt;16 GB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Motherboard&lt;/td&gt;&lt;td &gt;Supermicro X9SCM-F&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;LAN&lt;/td&gt;&lt;td &gt;Intel Gigabit &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Storage Connectivity&lt;/td&gt;&lt;td &gt;&lt;strike&gt;InfiniBand MHGA28-XTC&lt;/strike&gt; 2023: Mellanox ConnectX-3 Pro 10Gbit Ethernet (X312B-XCCT)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;PSU&lt;/td&gt;&lt;td &gt;Seasonic Platinum 860&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Controller&lt;/td&gt;&lt;td &gt; 3 x IBM M1015&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Disk&lt;/td&gt;&lt;td &gt;24 x HGST HDS724040ALE640 4 TB (7200RPM) &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;SSD&lt;/td&gt;&lt;td &gt;2 x Crucial M500 120GB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Arrays&lt;/td&gt;&lt;td &gt;Boot: 2 x 60 GB RAID 1 and storage: 18 disk RAIDZ2+ 6 disk RAIDZ2 &lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Gross storage&lt;/td&gt;&lt;td &gt; 86 TiB (96 TB)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Net storage&lt;/td&gt;&lt;td &gt;71 TiB (78 TB)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;OS&lt;/td&gt;&lt;td &gt;&lt;strike&gt;Linux Debian Wheezy&lt;/strike&gt;2023: Ubuntu 22.04&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Filesystem&lt;/td&gt;&lt;td &gt;ZFS&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Rebuild time&lt;/td&gt;&lt;td &gt;Depends on amount of data (rate is 4 TB/Hour)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;UPS&lt;/td&gt;&lt;td &gt;Back-UPS RS 1200 LCD using Apcupsd&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Power usage&lt;/td&gt;&lt;td &gt;about &amp;nbsp;200 Watt idle &lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;

&lt;p&gt;&lt;img alt="front" src="https://louwrentius.com/static/images/nano/topview.jpg" /&gt;
&lt;img alt="front" src="https://louwrentius.com/static/images/nano/4cards.jpg" /&gt;
&lt;img alt="front" src="https://louwrentius.com/static/images/nano/backside.jpg" /&gt;&lt;/p&gt;
&lt;iframe width="560" height="315" src="//www.youtube.com/embed/LS3cfl-7n-4" frameborder="0" allowfullscreen&gt;&lt;/iframe&gt;

&lt;h3&gt;CPU&lt;/h3&gt;
&lt;p&gt;The Intel Xeon E3-1230 V2 is not the latest generation but one of the cheapest Xeons you can buy and it supports ECC memory. It's a quad-core processor with hyper-threading. &lt;/p&gt;
&lt;p&gt;&lt;a href="http://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E3-1230+V2+%40+3.30GHz"&gt;Here&lt;/a&gt; you can see how it performs compared to other processors.&lt;/p&gt;
&lt;h3&gt;Memory&lt;/h3&gt;
&lt;p&gt;The system has 16 GB ECC RAM. Memory is relatively cheap these days but I don't have any reason to upgrade to 32 GB. I think that 8 GB would have been fine with this system.&lt;/p&gt;
&lt;h3&gt;Motherboard&lt;/h3&gt;
&lt;p&gt;The server is built around the &lt;a href="https://louwrentius.com/an-affordable-server-platform-based-on-server-grade-hardware.html"&gt;SuperMicro X9SCM-F motherboard&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This is a server-grade motherboard and comes with typical features you might expect, like ECC memory support and out-of-band management (IPMI). &lt;/p&gt;
&lt;p&gt;&lt;img alt="smboard top view" src="https://www.supermicro.nl/a_images/products/Xeon/C204/X9SCM-F_spec.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;This motherboard has four PCIe slots (two 8x and two 4x), all with 8x-sized physical connectors. My build requires four PCIe 4x+ slots and there aren't (m)any other server boards at this price point that offer four PCIe slots with 8x-sized connectors.&lt;/p&gt;
&lt;h3&gt;The chassis&lt;/h3&gt;
&lt;p&gt;The chassis has six rows of four drive bays that are kept cool by three 120mm fans in a fan wall behind the drive bays. At the rear of the case, there are two 'powerful' 80mm fans that remove the heat from the case, together with the PSU.&lt;/p&gt;
&lt;p&gt;The chassis has six SAS backplanes that connect four drives each. The backplanes have dual molex power connectors, so you can put redundant power supplies into the chassis. Redundant power supplies are more expensive and due to their size, often have smaller, thus noisier fans. As this is a home build, I opted for just a single regular PSU.&lt;/p&gt;
&lt;p&gt;When facing the front, there is a place at the left side of the chassis to mount a single 3.5 inch or two 2.5 inch drives next to each other as boot drives. I've
mounted two SSDs (RAID1).&lt;/p&gt;
&lt;p&gt;This particular chassis version has support for SGPIO, which should help identify which drive has failed. The IBM M1015 cards I use do support SGPIO.
Through the LSI megaraid CLI I have verified that SGPIO works, as you can use this tool as a drive locator. I'm not entirely sure how well SGPIO works with ZFS. &lt;/p&gt;
&lt;h3&gt;Power supply&lt;/h3&gt;
&lt;p&gt;I was using a Corsair 860i before, but it was unstable and died on me.&lt;/p&gt;
&lt;p&gt;The Seasonic Platinum 860 may seem like overkill for this system. However, I'm not using staggered spinup for the 24 drives, so the drives all spin up at once, which results in a peak power usage of 600+ watts. &lt;/p&gt;
&lt;p&gt;The PSU has a silent mode that causes the fan only to spin if the load reaches a certain threshold. Since the PSU fan also helps removing warm air from the chassis, I've disabled this feature, so the fan is spinning at all times.&lt;/p&gt;
&lt;h3&gt;Drive management&lt;/h3&gt;
&lt;p&gt;I've written a tool called &lt;a href="https://github.com/louwrentius/lsidrivemap"&gt;lsidrivemap&lt;/a&gt; that displays each drive
in an ASCII table that reflects the physical layout of the chassis.&lt;/p&gt;
&lt;p&gt;The data is based on the output of the LSI 'megacli' tool for my IBM 1015 controllers.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@nano:~# lsidrivemap disk

| sdr | sds | sdt | sdq |
| sdu | sdv | sdx | sdw |
| sdi | sdl | sdp | sdm |
| sdj | sdk | sdn | sdo |
| sdb | sdc | sde | sdf |
| sda | sdd | sdh | sdg |
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This layout is 'hardcoded' for my chassis but the Python script can be easily
tailored for your own server, if you're interested.&lt;/p&gt;
&lt;p&gt;It can also show the temperature of the disk drives in the same table:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@nano:~# lsidrivemap temp

| 36 | 39 | 40 | 38 |
| 36 | 36 | 37 | 36 |
| 35 | 38 | 36 | 36 |
| 35 | 37 | 36 | 35 |
| 35 | 36 | 36 | 35 |
| 34 | 35 | 36 | 35 |
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;These temperatures show that the top drives run a bit hotter than the other drives. An unverified explanation could be that the three 120mm fans are not in the center of the fan wall. They are skewed to the bottom of the wall, so they may favor the lower drive bays.&lt;/p&gt;
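&lt;p&gt;As a quick sanity check on the claim that the top drives run hotter, we can compute the per-row averages of the table above (a small illustrative snippet, not part of lsidrivemap):&lt;/p&gt;

```python
# Per-row average temperatures (degrees Celsius) from the lsidrivemap output above.
# Row 0 is the top row of drive bays, row 5 the bottom row.
rows = [
    [36, 39, 40, 38],
    [36, 36, 37, 36],
    [35, 38, 36, 36],
    [35, 37, 36, 35],
    [35, 36, 36, 35],
    [34, 35, 36, 35],
]

averages = [sum(r) / len(r) for r in rows]
for i, avg in enumerate(averages):
    print(f"row {i}: {avg:.2f}")
# The top row averages 38.25 C, the bottom row 35.00 C.
```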
&lt;h3&gt;Filesystem (ZFS)&lt;/h3&gt;
&lt;p&gt;I'm using ZFS as the file system for the storage array. At this moment, there is no other file system that has the same features and stability as ZFS. BTRFS is not even finished.&lt;/p&gt;
&lt;p&gt;The number one design goal of ZFS was assuring data integrity. ZFS checksums all data and if you use RAIDZ or a mirror, it can even repair data. Even if it can't repair a file, it can at least tell you which files are corrupt.&lt;/p&gt;
&lt;p&gt;ZFS is not primarily focused on performance, but to get the best performance possible, it makes heavy use of RAM to cache both reads and writes. This is why ECC memory is so important. &lt;/p&gt;
&lt;p&gt;ZFS also implements RAID. So there is no need to use MDADM. My previous file server was running a single RAID 6 of 20 x 1TB drives. With this new system I've created a single pool with two RAIDZ2 VDEVs. &lt;/p&gt;
&lt;h3&gt;Capacity&lt;/h3&gt;
&lt;p&gt;Vendors still advertise the capacity of their hard drives in TB whereas the operating system works with TiB. So the 4 TB drives I use are in fact 3.64 TiB.&lt;/p&gt;
&lt;p&gt;The total raw storage capacity of the system is about 86 TiB.&lt;/p&gt;
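&lt;p&gt;The TB-to-TiB conversion is easy to do yourself (a small illustrative calculation):&lt;/p&gt;

```python
# Vendors use decimal terabytes (10^12 bytes); the OS reports tebibytes (2^40 bytes).
TB = 10**12
TiB = 2**40

drive_tib = 4 * TB / TiB
print(f"One 4 TB drive: {drive_tib:.2f} TiB")  # ~3.64 TiB
```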
&lt;p&gt;My zpool uses the 'appropriate' number of disks (2^n + parity) in its VDEVs. So I have one 18 disk RAIDZ2 VDEV (2^4 + 2) and one 6 disk RAIDZ2 VDEV (2^2 + 2) for a total of 24 drives.&lt;/p&gt;
&lt;p&gt;Different VDEV sizes in a single pool are often not recommended, but ZFS is very smart and cool: it load-balances the data across the VDEVs based on the size of the VDEV. I could verify this with zpool iostat -v 5 and witness this in real-time. The small VDEV got just a fraction of the data compared to the large VDEV.&lt;/p&gt;
&lt;p&gt;This choice leaves me with less capacity (71 TiB vs. 74 TiB for RAIDZ3) and also has a bit more risk to it, with the eighteen-disk RAIDZ2 VDEV. Regarding this latter risk, I've been running a twenty-disk MDADM RAID6 for the last 6 years and haven't seen any issues. That does not tell everything, but I'm comfortable with this risk.&lt;/p&gt;
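&lt;p&gt;The 2^n + parity rule is simple to check in a few lines (an illustrative snippet; RAIDZ2 has two parity disks, RAIDZ3 three):&lt;/p&gt;

```python
def follows_power_of_two_rule(vdev_size: int, parity: int) -> bool:
    """True if the number of data disks (vdev size minus parity) is a power of two."""
    data_disks = vdev_size - parity
    return data_disks > 0 and (data_disks & (data_disks - 1)) == 0

# The two RAIDZ2 VDEVs in this pool:
print(follows_power_of_two_rule(18, 2))  # True: 16 data disks = 2^4
print(follows_power_of_two_rule(6, 2))   # True: 4 data disks = 2^2
# A single 24-disk RAIDZ3 would not fit the rule:
print(follows_power_of_two_rule(24, 3))  # False: 21 data disks
```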
&lt;p&gt;Originally I was planning on using RAIDZ3, and by using ashift=9 (512 byte sectors) I would recover most of the space lost to the non-optimal number of drives in the VDEV. So why did I change my mind? Because the performance of my ashift=9 pool on my 4K drives deteriorated so much that a resilver of a failed drive would take ages.&lt;/p&gt;
&lt;hr /&gt;
&lt;h3&gt;Storage controllers&lt;/h3&gt;
&lt;p&gt;The IBM M1015 HBAs are reasonably priced, and buying three of them is often cheaper than buying just one HBA with a SAS expander. However, it may be cheaper to search for an HP SAS expander and use it with just one M1015 and save a PCIe slot.&lt;/p&gt;
&lt;p&gt;I have not flashed the controllers to 'IT mode', as most people do. They worked out-of-the-box as HBAs and although it may take a little bit longer to
boot the system, I decided not to go through the hassle.&lt;/p&gt;
&lt;p&gt;The main risk here is how the controller handles a drive if a sector is not properly read. It may disable the drive entirely, which is not necessary for ZFS and often not preferred.&lt;/p&gt;
&lt;h3&gt;Storage performance&lt;/h3&gt;
&lt;p&gt;With twenty-four drives in a chassis, it's interesting to see what kind of performance you can get from the system. &lt;/p&gt;
&lt;p&gt;Let's start with a twenty-four drive RAID 0. The drives I use have a sustained read/write speed of 160 MB/s so it should be possible to reach 3840 MB/s or 3.8 GB/s. That would be amazing. &lt;/p&gt;
&lt;p&gt;This is the performance of a RAID 0 (MDADM) of all twenty-four drives. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@nano:/storage# dd if=/dev/zero of=test.bin bs=1M count=1000000
1048576000000 bytes (1.0 TB) copied, 397.325 s, 2.6 GB/s

root@nano:/storage# dd if=test.bin of=/dev/null bs=1M
1048576000000 bytes (1.0 TB) copied, 276.869 s, 3.8 GB/s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Dead on, you would say, but if you divide 1 TB by 276 seconds, it's more like 3.6 GB/s. I would say that's still quite close.&lt;/p&gt;
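&lt;p&gt;The small discrepancy comes from the byte count: dd copied 1,048,576,000,000 bytes (one million mebibytes), slightly more than exactly 1 TB. A quick check of the arithmetic:&lt;/p&gt;

```python
# dd reports bytes copied divided by elapsed time, in decimal GB/s.
bytes_copied = 1_048_576_000_000  # 1000000 blocks of 1 MiB
seconds = 276.869

print(f"dd's figure:        {bytes_copied / seconds / 1e9:.1f} GB/s")  # 3.8 GB/s
print(f"Using exactly 1 TB: {1e12 / seconds / 1e9:.1f} GB/s")          # 3.6 GB/s
```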
&lt;p&gt;This machine will be used as a file server and a bit of redundancy would be nice. So what happens if we run the same benchmark on a RAID6 of all drives?&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@nano:/storage# dd if=/dev/zero of=test.bin bs=1M count=100000
104857600000 bytes (105 GB) copied, 66.3935 s, 1.6 GB/s

root@nano:/storage# dd if=test.bin of=/dev/null bs=1M
104857600000 bytes (105 GB) copied, 38.256 s, 2.7 GB/s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I'm quite pleased with these results, especially for a RAID6. However, RAID6 with twenty-four drives feels a bit risky. And since MDADM/Linux has no support for triple-parity RAID, I use ZFS.&lt;/p&gt;
&lt;p&gt;Sacrificing performance, I decided - as I mentioned earlier - to use ashift=9 on those 4K sector drives, because I gained about 5 TiB of storage in exchange. &lt;/p&gt;
&lt;p&gt;This is the performance of twenty-four drives in a RAIDZ3 VDEV with ashift=9.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@nano:/storage# dd if=/dev/zero of=ashift9.bin bs=1M count=100000 
104857600000 bytes (105 GB) copied, 97.4231 s, 1.1 GB/s

root@nano:/storage# dd if=ashift9.bin of=/dev/null bs=1M
104857600000 bytes (105 GB) copied, 42.3805 s, 2.5 GB/s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Compared to the other results, write performance is way down, although not too bad.&lt;/p&gt;
&lt;p&gt;This is the write performance of the 18 disk RAIDZ2 + 6 disk RAIDZ2 zpool (ashift=12):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@nano:/storage# dd if=/dev/zero of=test.bin bs=1M count=1000000 
1048576000000 bytes (1.0 TB) copied, 543.072 s, 1.9 GB/s

root@nano:/storage# dd if=test.bin of=/dev/null bs=1M 
1048576000000 bytes (1.0 TB) copied, 400.539 s, 2.6 GB/s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;As you may notice, the write performance is better than that of either the ashift=9 or the ashift=12 RAIDZ3 VDEV. &lt;/p&gt;
&lt;p&gt;In the end I chose the 18 disk RAIDZ2 + 6 disk RAIDZ2 setup because of the better performance and to adhere to ZFS best practices.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;I have not benchmarked random I/O performance as it is not relevant for this system. And with ZFS, the random I/O performance of a VDEV is that of a single drive.&lt;/p&gt;
&lt;h3&gt;Boot drives&lt;/h3&gt;
&lt;p&gt;I'm using two Crucial M500 120GB SSD drives. They are configured in a RAID1 (MDADM) and I've installed Debian Wheezy on top of them. &lt;/p&gt;
&lt;p&gt;At first, I was planning on using a part of the capacity for caching purposes in combination with ZFS. However, there's no real need to do so. In hindsight I could also have used two very cheap 2.5" hard drives (similar to my older NAS), which would have cost less than a single M500.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update 2014-09-01:&lt;/strong&gt; I actually reinstalled Debian and kept about 50% free space on both M500s and put this space in a partition. These partitions have been provided to the ZFS pool as L2ARC cache. I did this because I could, but on the other hand, I wonder if I'm just wearing out my SSDs faster.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update 2015-10-04:&lt;/strong&gt; I saw no reason why I would wear out my SSDs as a L2ARC so I removed them from my pool. There is absolutely no benefit in my case.&lt;/p&gt;
&lt;h3&gt;Networking (updated 2017-03-25)&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Current:&lt;/strong&gt;
I have installed a Mellanox MHGA28-XTC InfiniBand card. I'm using InfiniBand over IP so the InfiniBand card is effectively a faster network card. I have a point-to-point connection with another server, I do not have an InfiniBand switch. &lt;/p&gt;
&lt;p&gt;I get about 6.5 Gbit from this card, which is not even near the theoretical performance limit. However, this translates into a constant 750 MB/s file transfer speed over NFS, which is amazing.&lt;/p&gt;
&lt;p&gt;Using Linux bonding and the quad-port Ethernet adapter, I only got 400 MB/s and transfer speeds were fluctuating a lot. &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Original:&lt;/strong&gt;
Maybe I will invest in 10Gbit ethernet or InfiniBand hardware in the future, but for now I settled on a quad-port gigabit adapter. With Linux bonding, I can still get &lt;a href="https://louwrentius.com/achieving-450-mbs-network-file-transfers-using-linux-bonding.html"&gt;450+ MB/s&lt;/a&gt; data transfers, which is sufficient for my needs.&lt;/p&gt;
&lt;p&gt;The quad-port card is in addition to the two on-board gigabit network cards. I use one of the on-board ports for client access. The four ports on the quad-port card are all in different VLANs and not accessible for client devices.&lt;/p&gt;
&lt;p&gt;The storage will be accessible over NFS and SMB. Clients will access storage over one of the on-board Gigabit LAN interfaces.&lt;/p&gt;
&lt;h3&gt;Keeping things cool and quiet&lt;/h3&gt;
&lt;p&gt;It's important to keep the drive temperature at acceptable levels, and with 24 drives packed together, there is an increased risk of overheating. &lt;/p&gt;
&lt;p&gt;The chassis is well-equipped to keep the drives cool with three 120mm fans and two strong 80mm fans, all supporting PWM (pulse-width modulation).&lt;/p&gt;
&lt;p&gt;The problem is that by default, the BIOS runs the fans at too low a speed to keep the drives at a reasonable temperature. I'd like to keep the hottest drive at about forty degrees Celsius. But I also want to keep the noise at reasonable levels. &lt;/p&gt;
&lt;p&gt;I wrote a python script called &lt;a href="https://github.com/louwrentius/storagefancontrol"&gt;storagefancontrol&lt;/a&gt; that automatically adjusts the fan speed based on the temperature of the hottest drive. &lt;/p&gt;
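&lt;p&gt;The core idea of such a script can be sketched in a few lines. This is a simplified illustration of the control loop, not the actual storagefancontrol code; the proportional mapping and the constants used here are my own placeholders, and the real tool reads temperatures and drives PWM through system interfaces that are stubbed out:&lt;/p&gt;

```python
# Simplified fan-control idea: map the hottest drive temperature to a PWM duty cycle.
# The real storagefancontrol tool obtains drive temperatures itself and writes to
# the motherboard's PWM controls; this sketch only shows the mapping logic.

def pwm_for_temperature(hottest_c: float, target_c: float = 40.0,
                        min_pwm: int = 80, max_pwm: int = 255) -> int:
    """Proportional mapping: ramp fan speed up as the hottest drive exceeds the target."""
    if hottest_c <= target_c:
        return min_pwm
    # Run at full speed once we are 10 degrees over target.
    fraction = min((hottest_c - target_c) / 10.0, 1.0)
    return int(min_pwm + fraction * (max_pwm - min_pwm))

print(pwm_for_temperature(38))  # at or below target: minimum speed (80)
print(pwm_for_temperature(45))  # halfway over target: 167
print(pwm_for_temperature(55))  # far over target: full speed (255)
```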
&lt;h3&gt;UPS&lt;/h3&gt;
&lt;p&gt;I'm running a &lt;a href="https://louwrentius.com/hp-proliant-microserver-n40l-is-a-great-nas-or-router.html"&gt;HP N40L micro server&lt;/a&gt; as my firewall/router. My APC Back-UPS RS 1200 LCD (720 Watt) is connected with USB to this machine. I'm using apcupsd to monitor the UPS and shutdown servers if the battery runs low. &lt;/p&gt;
&lt;p&gt;All servers, including my new build, run apcupsd in network mode and talk to the N40L to learn if power is still OK.&lt;/p&gt;
&lt;h3&gt;Keeping power consumption reasonable&lt;/h3&gt;
&lt;p&gt;So these are the power usage numbers.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt; 96 Watt with disks in spin down.
176 Watt with disks spinning but idle.
253 Watt with disks writing.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Edit 2015-10-04:&lt;/strong&gt;
I have an unresolved issue where the drives keep spinning up even with all services on the box killed, including cron. So for now, the drives are configured to spin at all times. &lt;strong&gt;/end edit&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;But the most important stat is that it's using 0 Watt if powered off. The system will be turned on only when necessary through wake-on-lan. It will be powered off most of the time, like when I'm at work or sleeping.&lt;/p&gt;
&lt;h3&gt;Cost&lt;/h3&gt;
&lt;p&gt;The system has cost me about €6000. All costs below are in Euro and include taxes (21%).&lt;/p&gt;
&lt;table border=0 cellpadding=0 cellspacing=0 width=447 style='border-collapse:
 collapse;table-layout:fixed;width:447pt'&gt;
 &lt;col width=89 style='mso-width-source:userset;mso-width-alt:3797;width:89pt'&gt;
 &lt;col width=234 style='mso-width-source:userset;mso-width-alt:9984;width:234pt'&gt;
 &lt;col width=33 style='mso-width-source:userset;mso-width-alt:1408;width:33pt'&gt;
 &lt;col width=50 style='mso-width-source:userset;mso-width-alt:2133;width:50pt'&gt;
 &lt;col width=41 style='mso-width-source:userset;mso-width-alt:1749;width:41pt'&gt;
 &lt;tr class=xl65 height=35 style='mso-height-source:userset;height:35.0pt'&gt;
  &lt;td height=35 class=xl67 width=89 style='height:35.0pt;width:89pt'&gt;Description&lt;/td&gt;
  &lt;td class=xl67 width=234 style='width:234pt'&gt;Product&lt;/td&gt;
  &lt;td class=xl68 width=33 style='width:33pt'&gt;Price&lt;/td&gt;
  &lt;td class=xl68 width=50 style='width:50pt'&gt;Amount&lt;/td&gt;
  &lt;td class=xl68 width=41 style='width:41pt'&gt;Total&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr height=15 style='height:15.0pt'&gt;
  &lt;td height=15 style='height:15.0pt'&gt;Chassis&lt;/td&gt;
  &lt;td&gt;Ri-vier 4U 24bay storage chassis RV-4324-01A&lt;/td&gt;
  &lt;td align=right&gt;554&lt;/td&gt;
  &lt;td align=right&gt;1&lt;/td&gt;
  &lt;td align=right&gt;554&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr height=15 style='height:15.0pt'&gt;
  &lt;td height=15 style='height:15.0pt'&gt;CPU&lt;/td&gt;
  &lt;td&gt;Intel Xeon E3-1230V2&lt;/td&gt;
  &lt;td align=right&gt;197&lt;/td&gt;
  &lt;td align=right&gt;1&lt;/td&gt;
  &lt;td align=right&gt;197&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr height=15 style='height:15.0pt'&gt;
  &lt;td height=15 style='height:15.0pt'&gt;Mobo&lt;/td&gt;
  &lt;td&gt;SuperMicro X9SCM-F&lt;/td&gt;
  &lt;td align=right&gt;157&lt;/td&gt;
  &lt;td align=right&gt;1&lt;/td&gt;
  &lt;td align=right&gt;157&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr height=15 style='height:15.0pt'&gt;
  &lt;td height=15 style='height:15.0pt'&gt;RAM&lt;/td&gt;
  &lt;td&gt;Kingston DDR3 ECC KVR1333D3E9SK2/16G&lt;span
  style="mso-spacerun:yes"&gt;&amp;nbsp;&lt;/span&gt;&lt;/td&gt;
  &lt;td align=right&gt;152&lt;/td&gt;
  &lt;td align=right&gt;1&lt;/td&gt;
  &lt;td align=right&gt;152&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr height=15 style='height:15.0pt'&gt;
  &lt;td height=15 style='height:15.0pt'&gt;PSU&lt;/td&gt;
  &lt;td&gt;AX860i 80Plus Platinum&lt;/td&gt;
  &lt;td align=right&gt;175&lt;/td&gt;
  &lt;td align=right&gt;1&lt;/td&gt;
  &lt;td align=right&gt;175&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr height=15 style='height:15.0pt'&gt;
  &lt;td height=15 style='height:15.0pt'&gt;Network Card&lt;/td&gt;
  &lt;td&gt;NC364T PCI Express Quad Port Gigabit&lt;/td&gt;
  &lt;td align=right&gt;145&lt;/td&gt;
  &lt;td align=right&gt;1&lt;/td&gt;
  &lt;td align=right&gt;145&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr height=15 style='height:15.0pt'&gt;
  &lt;td height=15 style='height:15.0pt'&gt;HBA Controller&lt;/td&gt;
  &lt;td&gt;IBM SERVERAID M1015&lt;span style="mso-spacerun:yes"&gt;&amp;nbsp;&lt;/span&gt;&lt;/td&gt;
  &lt;td align=right&gt;118&lt;/td&gt;
  &lt;td align=right&gt;3&lt;/td&gt;
  &lt;td align=right&gt;354&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr height=15 style='height:15.0pt'&gt;
  &lt;td height=15 style='height:15.0pt'&gt;SSDs&lt;/td&gt;
  &lt;td&gt;Crucial M500 120GB&lt;/td&gt;
  &lt;td align=right&gt;62&lt;/td&gt;
  &lt;td align=right&gt;2&lt;/td&gt;
  &lt;td align=right&gt;124&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr height=15 style='height:15.0pt'&gt;
  &lt;td height=15 style='height:15.0pt'&gt;Fan&lt;span
  style="mso-spacerun:yes"&gt;&amp;nbsp;&lt;/span&gt;&lt;/td&gt;
  &lt;td&gt;Zalman FB123 Casefan Bracket + 92mm Fan&lt;/td&gt;
  &lt;td align=right&gt;7&lt;/td&gt;
  &lt;td align=right&gt;1&lt;/td&gt;
  &lt;td align=right&gt;7&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr height=15 style='height:15.0pt'&gt;
  &lt;td height=15 style='height:15.0pt'&gt;Hard Drive&lt;/td&gt;
  &lt;td&gt;Hitachi 3.5 4TB 7200RPM (0S03356)&lt;/td&gt;
  &lt;td align=right&gt;166&lt;/td&gt;
  &lt;td align=right&gt;24&lt;/td&gt;
  &lt;td align=right&gt;3984&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr height=15 style='height:15.0pt'&gt;
  &lt;td height=15 style='height:15.0pt'&gt;SAS Cables&lt;/td&gt;
  &lt;td&gt;&lt;/td&gt;
  &lt;td align=right&gt;25&lt;/td&gt;
  &lt;td align=right&gt;6&lt;/td&gt;
  &lt;td align=right&gt;150&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr height=15 style='height:15.0pt'&gt;
  &lt;td height=15 style='height:15.0pt'&gt;Fan cables&lt;/td&gt;
  &lt;td&gt;&lt;/td&gt;
  &lt;td align=right&gt;6&lt;/td&gt;
  &lt;td align=right&gt;1&lt;/td&gt;
  &lt;td align=right&gt;6&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr height=15 style='height:15.0pt'&gt;
  &lt;td height=15 style='height:15.0pt'&gt;Sata-to-Molex&lt;/td&gt;
  &lt;td&gt;&lt;/td&gt;
  &lt;td align=right&gt;3,5&lt;/td&gt;
  &lt;td align=right&gt;1&lt;/td&gt;
  &lt;td align=right&gt;3,5&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr height=15 style='height:15.0pt'&gt;
  &lt;td height=15 style='height:15.0pt'&gt;Molex splitter&lt;/td&gt;
  &lt;td&gt;&lt;/td&gt;
  &lt;td align=right&gt;3&lt;/td&gt;
  &lt;td align=right&gt;1&lt;/td&gt;
  &lt;td align=right&gt;3&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr height=15 style='height:15.0pt'&gt;
  &lt;td height=15 colspan=4 style='height:15.0pt;mso-ignore:colspan'&gt;&lt;/td&gt;
  &lt;td class=xl66 align=right&gt;6012&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/table&gt;

&lt;h3&gt;Closing words&lt;/h3&gt;
&lt;p&gt;If you have any questions or remarks about what could have been done differently feel free to leave a comment, I appreciate it.&lt;/p&gt;</content><category term="Storage"></category></entry><entry><title>ZFS: Performance and capacity impact of ashift=9 on 4K sector drives</title><link href="https://louwrentius.com/zfs-performance-and-capacity-impact-of-ashift9-on-4k-sector-drives.html" rel="alternate"></link><published>2014-07-31T00:00:00+02:00</published><updated>2014-07-31T00:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2014-07-31:/zfs-performance-and-capacity-impact-of-ashift9-on-4k-sector-drives.html</id><summary type="html">&lt;hr /&gt;
&lt;p&gt;&lt;strong&gt;Update 2014-8-23&lt;/strong&gt;: I was testing with ashift for my new NAS. The ashift=9 write performance deteriorated from 1.1 GB/s to 830 MB/s with just 16 TB of data on the pool. Also I noticed that resilvering was very slow. This is why I decided to abandon …&lt;/p&gt;</summary><content type="html">&lt;hr /&gt;
&lt;p&gt;&lt;strong&gt;Update 2014-8-23&lt;/strong&gt;: I was testing with ashift for my new NAS. The ashift=9 write performance deteriorated from 1.1 GB/s to 830 MB/s with just 16 TB of data on the pool. Also I noticed that resilvering was very slow. This is why I decided to abandon my 24 drive RAIDZ3 configuration.&lt;/p&gt;
&lt;p&gt;I'm aware that drives are faster at the outside of the platter and slower on the inside, but the performance deteriorated so dramatically that I did not want to continue further.&lt;/p&gt;
&lt;p&gt;My final setup will be a RAIDZ2 18 drive VDEV + RAIDZ2 6 drive VDEV which will give me 'only' 71 TiB of storage, but read performance is 2.6 GB/s and write performance is excellent at 1.9 GB/s. I've written about 40+ TiB to the array and after those 40 TiB, write performance was about 1.7 GB/s, so still very good and what I would expect as drives fill up.&lt;/p&gt;
&lt;p&gt;So actually, based on these results, I have learned not to deviate from the ZFS best practices too much. Use ashift=12 and put drives in VDEVs that adhere to the 2^n+parity rule. &lt;/p&gt;
&lt;p&gt;The uneven VDEVs (18 disk vs. 6 disks) are not according to best practice but ZFS is smart: it distributes data across the VDEVs based on their size. So they fill up equally. &lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Choosing between ashift=9 and ashift=12 for 4K sector drives is not always a clear cut case. You have to choose between raw performance or storage capacity.&lt;/p&gt;
&lt;p&gt;My test platform is Debian Wheezy with ZFS on Linux. I'm using a system with 24 x 4 TB drives in a RAIDZ3. The drives have a native sector size of 4K, and the array is formatted with ashift=12.&lt;/p&gt;
&lt;p&gt;First we create the array like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zpool create storage -o ashift=12 raidz3 /dev/sd[abcdefghijklmnopqrstuvwx]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Note: NEVER use /dev/sd? device names for a real array; this is just for testing. Always use /dev/disk/by-id/ names. &lt;/p&gt;
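&lt;p&gt;For a permanent pool you would create it with stable device names instead. An illustrative sketch; the by-id names below are made up and will differ on your system:&lt;/p&gt;

```shell
# /dev/disk/by-id/ names are stable across reboots; /dev/sd? names are not.
ls -l /dev/disk/by-id/ | grep -v part   # list whole-disk ids

zpool create storage -o ashift=12 raidz3 \
    /dev/disk/by-id/ata-HGST_HDS724040ALE640_EXAMPLE1 \
    /dev/disk/by-id/ata-HGST_HDS724040ALE640_EXAMPLE2
    # ... one entry per drive in the VDEV
```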
&lt;p&gt;Then we run a simple sequential transfer benchmark with dd:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@nano:/storage# dd if=/dev/zero of=ashift12.bin bs=1M count=100000 
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 66.4922 s, 1.6 GB/s
root@nano:/storage# dd if=ashift12.bin of=/dev/null bs=1M
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 42.0371 s, 2.5 GB/s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This is quite impressive. With these speeds, you can saturate 10GbE Ethernet.
But how much storage space do we get?&lt;/p&gt;
&lt;p&gt;df -h:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Filesystem                            Size  Used Avail Use% Mounted on
storage                                69T  512K   69T   1% /storage
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;zfs list:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;NAME      USED  AVAIL  REFER  MOUNTPOINT
storage  1.66M  68.4T   435K  /storage
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Only 68.4 TiB of storage? That's not good. There should be 24 drives minus 3 for parity, which leaves 21 x 3.6 TiB = 75 TiB of storage. &lt;/p&gt;
&lt;p&gt;So the performance is great, but somehow, we lost about 6 TiB of storage, more than a whole drive.&lt;/p&gt;
&lt;p&gt;So what happens if you create the same array with ashift=9?&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zpool create storage -o ashift=9 raidz3 /dev/sd[abcdefghijklmnopqrstuvwx]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;These are the benchmarks:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@nano:/storage# dd if=/dev/zero of=ashift9.bin bs=1M count=100000 
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 97.4231 s, 1.1 GB/s
root@nano:/storage# dd if=ashift9.bin of=/dev/null bs=1M
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 42.3805 s, 2.5 GB/s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;So we lose about a third of our write performance, while the read performance is not affected, probably thanks to read-ahead caching, but I'm not sure. &lt;/p&gt;
&lt;p&gt;With ashift=9, we do lose some write performance, but we can still saturate 10GbE.&lt;/p&gt;
&lt;p&gt;Now look what happens to the available storage capacity:&lt;/p&gt;
&lt;p&gt;df -h:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Filesystem                         Size  Used Avail Use% Mounted on
storage                             74T   98G   74T   1% /storage
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;zfs list:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;NAME      USED  AVAIL  REFER  MOUNTPOINT
storage   271K  73.9T  89.8K  /storage
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Now we have a capacity of 74 TiB, so we just gained 5 TiB with ashift=9 over ashift=12, at the cost of some write performance. &lt;/p&gt;
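&lt;p&gt;Using the capacities reported by zfs list, the difference works out as follows (illustrative arithmetic):&lt;/p&gt;

```python
# Usable capacity as reported by 'zfs list' for the same 24-drive RAIDZ3 pool.
ashift12_tib = 68.4
ashift9_tib = 73.9

gained = ashift9_tib - ashift12_tib
print(f"Gained with ashift=9: {gained:.1f} TiB "
      f"({gained / ashift9_tib:.1%} of the ashift=9 capacity)")
```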
&lt;p&gt;So if you really care about sequential write performance, ashift=12 is the better option. If storage capacity is more important, ashift=9 seems to be the best solution for 4K drives.&lt;/p&gt;
&lt;p&gt;The performance of ashift=9 on 4K drives is always described as 'horrible' but I think it's best to run your own benchmarks and decide for yourself.&lt;/p&gt;
&lt;p&gt;Caveat: I'm quite sure about the benchmark performance. I'm not 100% sure how reliable the reported free space is according to df -h or zfs list.&lt;/p&gt;
&lt;p&gt;Edit: I have added a bit of my own opinion on the results.&lt;/p&gt;</content><category term="Storage"></category><category term="ZFS"></category><category term="Linux"></category></entry><entry><title>Achieving 2.3 GB/s with 16 x 4 TB drives</title><link href="https://louwrentius.com/achieving-23-gbs-with-16-x-4-tb-drives.html" rel="alternate"></link><published>2014-07-12T12:00:00+02:00</published><updated>2014-07-12T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2014-07-12:/achieving-23-gbs-with-16-x-4-tb-drives.html</id><summary type="html">&lt;p&gt;I'm in the process of building a new storage server to replace my &lt;a href="https://louwrentius.com/20-disk-18-tb-raid-6-storage-based-on-debian-linux.html"&gt;18 TB NAS&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The server is almost finished, it's now down to adding disk drives. I'm using the &lt;a href="http://www.hgst.com/tech/techlib.nsf/techdocs/A9AF74F697524DFD882577F9000CF8BD/$file/Desktop_IDK_ds.pdf"&gt;HGST 4 TB 7200 RPM&lt;/a&gt; drive for this build (SKU 0S03356) &lt;a href="http://www.storagereview.com/hitachi_deskstar_7k4000_review"&gt;(review)&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;I have not bought all drives at …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I'm in the process of building a new storage server to replace my &lt;a href="https://louwrentius.com/20-disk-18-tb-raid-6-storage-based-on-debian-linux.html"&gt;18 TB NAS&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The server is almost finished, it's now down to adding disk drives. I'm using the &lt;a href="http://www.hgst.com/tech/techlib.nsf/techdocs/A9AF74F697524DFD882577F9000CF8BD/$file/Desktop_IDK_ds.pdf"&gt;HGST 4 TB 7200 RPM&lt;/a&gt; drive for this build (SKU 0S03356) &lt;a href="http://www.storagereview.com/hitachi_deskstar_7k4000_review"&gt;(review)&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;I have not bought all drives at once, but am slowly adding them in smaller quantities. I just don't want to feel too much pain in my wallet at once, I guess. &lt;/p&gt;
&lt;p&gt;According to my own tests, this drive has a read/write throughput of 160 MB/s, which is in line with its specification.&lt;/p&gt;
&lt;p&gt;So the theoretical throughput of a RAID 0 across 16 drives is 16 x 160 MB/s = 2560 MB/s. That's over 2.5 gigabytes per second. &lt;/p&gt;
&lt;p&gt;This is the actual real-life performance I was able to achieve. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@nano:/storage# dd if=pureawesomeness.dd of=/dev/null bs=1M
1000000+0 records in
1000000+0 records out
1048576000000 bytes (1.0 TB) copied, 453.155 s, 2.3 GB/s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;2.3 GB/s is not too shabby, in my opinion. Please note that I used a test file of one terabyte, so the 16 GB of RAM in my server doesn't skew the result.&lt;/p&gt;
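&lt;p&gt;For reference, a test file like this can be generated with dd itself. This is a sketch, not the exact command I used; conv=fdatasync forces the data to disk before dd exits, so a write benchmark isn't flattered by caching either. Note that a file of zeroes only gives meaningful numbers on storage that doesn't compress data.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;dd if=/dev/zero of=pureawesomeness.dd bs=1M count=1000000 conv=fdatasync
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
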
&lt;p&gt;This result is very nice, but in practice almost useless. This system could saturate dual 10 Gbit NICs, but I don't have that kind of equipment or any other device that could handle such performance.&lt;/p&gt;
&lt;p&gt;But I think it's amazing anyway.&lt;/p&gt;
&lt;p&gt;I'm quite curious how the final 24 drive array will perform in a RAID 0.&lt;/p&gt;</content><category term="Storage"></category><category term="Storage"></category></entry><entry><title>Affordable server with server-grade hardware part II</title><link href="https://louwrentius.com/affordable-server-with-server-grade-hardware-part-ii.html" rel="alternate"></link><published>2014-06-20T12:00:00+02:00</published><updated>2014-06-20T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2014-06-20:/affordable-server-with-server-grade-hardware-part-ii.html</id><summary type="html">&lt;p&gt;If you want to build a home server, it may be advised to actually use server-grade components. I documented the reasons for choosing server-grade hardware already in an &lt;a href="https://louwrentius.com/an-affordable-server-platform-based-on-server-grade-hardware.html"&gt;earlier post&lt;/a&gt; on this topic.  &lt;/p&gt;
&lt;p&gt;It is recommended to read the old post first. In this new post, I only show new …&lt;/p&gt;</summary><content type="html">&lt;p&gt;If you want to build a home server, it may be advised to actually use server-grade components. I documented the reasons for choosing server-grade hardware already in an &lt;a href="https://louwrentius.com/an-affordable-server-platform-based-on-server-grade-hardware.html"&gt;earlier post&lt;/a&gt; on this topic.  &lt;/p&gt;
&lt;p&gt;It is recommended to read the old post first. In this new post, I only show new hardware that could also be chosen as a more modern hardware option. &lt;/p&gt;
&lt;p&gt;My original post dates back to December 2013 and centers around the popular X9SCM-F which is based on the LGA 1155 socket. Please note that the X9SCM-F / LGA 1155 based solution may be cheaper if you want the Xeon processor. &lt;/p&gt;
&lt;p&gt;So I'd like to introduce two Supermicro motherboards that may be of interest.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.supermicro.com/products/motherboard/Xeon/C220/X10SLL-F.cfm"&gt;Supermicro X10SLL-F&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="Supermicro X10SLL-F" src="https://www.supermicro.com/a_images/products/X10/C220/X10SLL-F_spec.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;Some key features are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;2 x Gigabit NIC on-board&lt;/li&gt;
&lt;li&gt;6 onboard SATA ports&lt;/li&gt;
&lt;li&gt;3 x PCIe (2 x 8x + 1 x 4x)&lt;/li&gt;
&lt;li&gt;Costs $169 or €160&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This board is one of the cheapest Supermicro boards you can get, and it has 3 x PCIe slots, which may be of interest if you need to install extra HBAs or RAID cards, SAS expanders and/or network controllers. &lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.supermicro.com/products/motherboard/Xeon/C220/X10SL7-F.cfm"&gt;Supermicro X10SL7-F&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="Supermicro X10SL7-F" src="https://www.supermicro.com/a_images/products/X10/C220/X10SL7-F_spec.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;This board is about $80 or €90 more expensive than the X10SLL-F, but in return you get eight extra SAS/SATA ports, for a total of 14 ports. With 4 TB drives, this would give you 56 TB of raw storage capacity. This motherboard is a cheaper solution than an add-on HBA card, which would also occupy a PCIe slot. However, there's a caveat: this board has 'only' two PCIe slots. But there's still room for an additional quad-port or 10 GbE NIC and an extra HBA if required.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;2 x Gigabit NIC on-board&lt;/li&gt;
&lt;li&gt;6 onboard SATA ports&lt;/li&gt;
&lt;li&gt;8 onboard SAS/SATA ports via LSI 2308 chip&lt;/li&gt;
&lt;li&gt;2 x PCIe (8x and 4x)&lt;/li&gt;
&lt;li&gt;Costs $242 or €250&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;em&gt;Overview of CPU's&lt;/em&gt;&lt;/p&gt;
&lt;table&gt;
&lt;tr&gt;&lt;td&gt;CPU&lt;/td&gt;&lt;td&gt;Passmark score&lt;/td&gt;&lt;td&gt;Price in Euro&lt;/td&gt;&lt;td&gt;Price in Dollars&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Intel Pentium G3420 @ 3.20GHz&lt;/td&gt;&lt;td&gt;3459&lt;/td&gt;&lt;td&gt;55 Euro&lt;/td&gt;&lt;td&gt;74 Dollar&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Intel Core i3-4130 @ 3.40GHz&lt;/td&gt;&lt;td&gt;4827&lt;/td&gt;&lt;td&gt;94 Euro&lt;/td&gt;&lt;td&gt;124 Dollar&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Intel Xeon E3-1230 V3 @ 3.30GHz&lt;/td&gt;&lt;td&gt;9459&lt;/td&gt;&lt;td&gt;216 Euro&lt;/td&gt;&lt;td&gt;279 Dollar&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;

&lt;ul&gt;
&lt;li&gt;Dollars are from Newegg, Euros are from Tweakers.net.&lt;/li&gt;
&lt;li&gt;Euro prices include taxes.&lt;/li&gt;
&lt;/ul&gt;</content><category term="Hardware"></category><category term="Supermicro"></category><category term="Intel"></category><category term="ECC"></category></entry><entry><title>How to resolve extreme memory usage on Windows 2008 R2-based file servers</title><link href="https://louwrentius.com/how-to-resolve-extreme-memory-usage-on-windows-2008-r2-based-file-servers.html" rel="alternate"></link><published>2014-06-15T12:00:00+02:00</published><updated>2014-06-15T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2014-06-15:/how-to-resolve-extreme-memory-usage-on-windows-2008-r2-based-file-servers.html</id><summary type="html">&lt;p&gt;I'm responsible for a file server with about 5 terabytes of data. The file server is based on Windows 2008 R2. I've noticed extreme memory usage on the server. After a reboot, it slowly builds up until almost all RAM memory is consumed.&lt;/p&gt;
&lt;p&gt;So I googled around and found &lt;a href="http://wasthatsohard.wordpress.com/2011/03/01/high-memory-usage-windows-server-2008-r2-file-server/"&gt;this …&lt;/a&gt;&lt;/p&gt;</summary><content type="html">&lt;p&gt;I'm responsible for a file server with about 5 terabytes of data. The file server is based on Windows 2008 R2. I've noticed extreme memory usage on the server. After a reboot, it slowly builds up until almost all RAM memory is consumed.&lt;/p&gt;
&lt;p&gt;So I googled around and found &lt;a href="http://wasthatsohard.wordpress.com/2011/03/01/high-memory-usage-windows-server-2008-r2-file-server/"&gt;this post&lt;/a&gt; and it turned out I had the same exact issue.&lt;/p&gt;
&lt;p&gt;I've confirmed with the tool 'RAMmap' that NTFS metadata is the issue. Microsoft also created a &lt;a href="http://blogs.technet.com/b/mspfe/archive/2012/12/06/lots-of-ram-but-no-available-memory.aspx"&gt;blog post&lt;/a&gt; about this.&lt;/p&gt;
&lt;p&gt;The author of the first article resolved the issue by adding more RAM memory. But with 16 GB already assigned, I was not too happy to add more memory to the virtual file server, eating away at the RAM resources of our virtualisation platform.&lt;/p&gt;
&lt;p&gt;I could never find the root cause of the issue. If you can't either, you need to obtain the 'Microsoft Windows Dynamic Cache Service'. This application allows you to configure how large the metadata cache may grow. &lt;/p&gt;
&lt;p&gt;Please note that this service is not a next-next-finish installation. Follow the included Word document with instructions carefully and configure a sane memory setting for your server. I limited the cache to half the RAM available to the server and this works out well. &lt;/p&gt;</content><category term="Storage"></category><category term="Windows"></category><category term="file server"></category></entry><entry><title>My experiences with DFS replication on Windows 2008 R2</title><link href="https://louwrentius.com/my-experiences-with-dfs-replication-on-windows-2008-r2.html" rel="alternate"></link><published>2014-06-15T12:00:00+02:00</published><updated>2014-06-15T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2014-06-15:/my-experiences-with-dfs-replication-on-windows-2008-r2.html</id><summary type="html">&lt;p&gt;If you are considering implementing DFS replication, &lt;a href="http://blogs.technet.com/b/storageserver/archive/2013/10/08/windows-storage-server-2012-r2-improved-dfs-replication.aspx"&gt;consider using Windows 2012 R2&lt;/a&gt; because DFS replication has been massively improved. It supports larger data sets and performance has dramatically been improved over Windows 2008 R2. &lt;/p&gt;
&lt;p&gt;I've implemented DFS replication to keep two file servers synchronised. Click &lt;a href="http://blogs.technet.com/b/josebda/archive/2009/03/10/the-basics-of-the-windows-server-2008-distributed-file-system-dfs.aspx"&gt;here&lt;/a&gt; if or &lt;a href="http://technet.microsoft.com/en-us/library/cc732863(v=ws.10).aspx"&gt;there&lt;/a&gt; you …&lt;/p&gt;</summary><content type="html">&lt;p&gt;If you are considering implementing DFS replication, &lt;a href="http://blogs.technet.com/b/storageserver/archive/2013/10/08/windows-storage-server-2012-r2-improved-dfs-replication.aspx"&gt;consider using Windows 2012 R2&lt;/a&gt; because DFS replication has been massively improved. It supports larger data sets and performance has dramatically been improved over Windows 2008 R2. &lt;/p&gt;
&lt;p&gt;I've implemented DFS replication to keep two file servers synchronised. Click &lt;a href="http://blogs.technet.com/b/josebda/archive/2009/03/10/the-basics-of-the-windows-server-2008-distributed-file-system-dfs.aspx"&gt;here&lt;/a&gt; or &lt;a href="http://technet.microsoft.com/en-us/library/cc732863(v=ws.10).aspx"&gt;there&lt;/a&gt; if you want to learn more about DFS itself. &lt;/p&gt;
&lt;p&gt;With DFS, I wanted to create a highly available file server service based on two file servers, each with their own physical storage. DFS replication makes sure that both file servers are kept in sync. &lt;/p&gt;
&lt;p&gt;If you setup DFS, you need to copy all the data from the original server to the secondary server. This is called &lt;a href="http://technet.microsoft.com/en-us/library/dn495044.aspx"&gt;seeding&lt;/a&gt; and I've used robocopy as recommended by Microsoft in the linked article. &lt;/p&gt;
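&lt;p&gt;As a sketch of such a pre-seeding run (the paths below are examples; verify the exact flags against the linked Microsoft article): /B copies files in backup mode, /COPYALL copies NTFS security information along with the data (required for the file hashes to match), and /XD DfsrPrivate skips DFSR's private working folder.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;robocopy D:\Data \\FS2\D$\Data /E /B /COPYALL /R:6 /W:5 /MT:64 /XD DfsrPrivate /LOG:seed.log /TEE
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
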
&lt;p&gt;Seeding is not mandatory. You can just start with an empty folder on the secondary server and have DFS replicate all files. However, I've experienced first-hand that DFS replication can be extremely slow on Windows 2008 R2. &lt;/p&gt;
&lt;p&gt;Once all files are seeded and DFS is configured, the initial replication can still take days. Replication times depend on:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;the number of files&lt;/li&gt;
&lt;li&gt;the size of the data&lt;/li&gt;
&lt;li&gt;the performance of the disk subsystems of both source and destination&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Note: Windows 2012 R2 improves DFS replication dramatically, which is all the more reason to upgrade your file servers to 2012 R2 or higher.&lt;/p&gt;
&lt;p&gt;If you seed the files, DFS will not transfer files that are identical, which saves bandwidth and time. DFS determines whether files differ based on their hash. So even if you seed all data, the initial replication can take a while. &lt;/p&gt;
&lt;p&gt;On our virtualised platform, the initial replication of 2.5 GB of data consisting of about five million files took about a full week. To me, that is not a very desirable outcome, but once the initial replication is done, there is no performance issue and all changes are nearly instantly replicated to the secondary server. &lt;/p&gt;
&lt;p&gt;For the particular configuration I've set up, the performance of the storage subsystem could have contributed to the slow initial replication. &lt;/p&gt;
&lt;p&gt;To speed up the replication process, it's important that you install the &lt;a href="http://support.microsoft.com/kb/2680906"&gt;latest version of robocopy&lt;/a&gt; for Windows 2008 R2 on both systems. There is a bug in older versions of robocopy that causes file permissions not to be set properly. This results in file hash mismatches, causing DFS to replicate all files anyway, nullifying the benefit of seeding. &lt;/p&gt;
&lt;p&gt;Hotfixes for &lt;a href="http://support.microsoft.com/kb/968429/en-us"&gt;Windows 2008 R2&lt;/a&gt;: 
Hotfixes for &lt;a href="http://support.microsoft.com/kb/2951262"&gt;Windows 2012 R2&lt;/a&gt;: &lt;/p&gt;
&lt;p&gt;To verify that a file on both servers has identical hashes, follow &lt;a href="http://technet.microsoft.com/en-us/library/dn495042.aspx"&gt;these instructions&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you've checked a few files and assured that the hashes are identical, it's ok to configure DFS replication. If you see a lot of Event ID 4412 messages in the DFS Replication event log, there probably is an issue with the file hashes.&lt;/p&gt;</content><category term="Storage"></category><category term="Windows"></category><category term="DFS"></category><category term="Replication"></category><category term="DFS-replication"></category></entry><entry><title>How traffic shaping can dramatically improve internet responsiveness</title><link href="https://louwrentius.com/how-traffic-shaping-can-dramatically-improve-internet-responsiveness.html" rel="alternate"></link><published>2014-03-08T12:00:00+01:00</published><updated>2014-03-08T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2014-03-08:/how-traffic-shaping-can-dramatically-improve-internet-responsiveness.html</id><summary type="html">&lt;p&gt;At work, access to the internet is provided by a 10 Mbit down / 1 Mbit up ADSL-connection. As we are a mid-size company, bandwidth is clearly a severe constraint. But it was not our biggest problem. Even simple web-browsing was very slow.&lt;/p&gt;
&lt;p&gt;As I was setting up a monitoring environment …&lt;/p&gt;</summary><content type="html">&lt;p&gt;At work, access to the internet is provided by a 10 Mbit down / 1 Mbit up ADSL-connection. As we are a mid-size company, bandwidth is clearly a severe constraint. But it was not our biggest problem. Even simple web-browsing was very slow.&lt;/p&gt;
&lt;p&gt;As I was setting up a monitoring environment based on Nagios and pnp4nagios, I started to graph the &lt;em&gt;latency&lt;/em&gt; of our internet connection, just to prove that we have a problem.&lt;/p&gt;
&lt;p&gt;Boy did we get that proof:&lt;/p&gt;
&lt;p&gt;&lt;img alt="bad latency" src="https://louwrentius.com/static/images/internetbadlatency.png" /&gt;&lt;/p&gt;
&lt;p&gt;Just look at the y-axis, whose scale is in milliseconds. For most of the day, the average latency is 175 ms, with some high spikes. Just browsing the web was a pain during periods of high latency, which was clearly almost all of the time.&lt;/p&gt;
&lt;p&gt;I became so fed up with our slow internet access that I decided to take matters into my own hands and resolve the high-latency issue. The solution? Traffic shaping. &lt;/p&gt;
&lt;p&gt;I learned that when an ADSL connection is saturated, especially its upload capacity, you will experience high latency and packet loss. So the trick is to &lt;em&gt;never&lt;/em&gt; saturate the connection. &lt;/p&gt;
&lt;p&gt;I grabbed a Linux box with two network interfaces and placed it between our internet router and our firewall in bridge mode. &lt;/p&gt;
&lt;p&gt;For actual traffic shaping I used &lt;a href="http://lartc.org/wondershaper/"&gt;wondershaper&lt;/a&gt; which is part of Debian or Ubuntu.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;apt-get install wondershaper
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The wondershaper script is extremely simple; it's specifically built to resolve the problem we faced with our ADSL connection. It not only prioritises traffic, it allows you to limit bandwidth usage and thus prevents you from saturating the connection.&lt;/p&gt;
&lt;p&gt;This simple example limits bandwidth to a bit below full capacity, which dramatically improved latency. &lt;/p&gt;
&lt;p&gt;Syntax:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;wondershaper &amp;lt;interface&amp;gt; &amp;lt;rx&amp;gt; &amp;lt;tx&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Example:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;wondershaper eth1 9500 700
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
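
&lt;p&gt;The two numbers are the download and upload rate in kilobits per second, here set slightly below the 10000/1000 kbit capacity of our ADSL line. To check that the shaper is actually active, you can list the queueing disciplines wondershaper installs on the interface (the exact qdiscs you see depend on the wondershaper version):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;tc -s qdisc show dev eth1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;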

&lt;p&gt;As you can see, latency improved dramatically:&lt;/p&gt;
&lt;p&gt;&lt;img alt="good latency" src="https://louwrentius.com/static/images/internetgoodlatency.png" /&gt;&lt;/p&gt;
&lt;p&gt;Again, look at the y-axis. We went from an average latency of 175 ms to an average of 35 ms. That's quite an improvement. &lt;/p&gt;
&lt;p&gt;Can you spot on which day I implemented traffic shaping?&lt;/p&gt;
&lt;p&gt;&lt;img alt="week latency" src="https://louwrentius.com/static/images/internetweeklatency.png" /&gt;&lt;/p&gt;
&lt;p&gt;At the time of writing this blog post, the company is working on fiber internet access, resolving our internet woes, but it will take quite some time before that will be installed, so this is a nice intermediate solution.&lt;/p&gt;</content><category term="Networking"></category><category term="traffic shaping"></category></entry><entry><title>Monitoring HP MSA P2000 G3 I/O latency with Nagios</title><link href="https://louwrentius.com/monitoring-hp-msa-p2000-g3-io-latency-with-nagios.html" rel="alternate"></link><published>2014-02-04T12:00:00+01:00</published><updated>2014-02-04T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2014-02-04:/monitoring-hp-msa-p2000-g3-io-latency-with-nagios.html</id><summary type="html">&lt;p&gt;At work we have a couple of HP MSA P2000 G3 SANs. These are entry-level SANs that still seem to have almost all features you might want from a SAN, except for official SSD-support. &lt;/p&gt;
&lt;p&gt;It seems that the new &lt;a href="http://h18006.www1.hp.com/storage/pdfs/4AA4-6608ENW.pdf"&gt;MSA 2040&lt;/a&gt; adds support for SSDs and also provides 4 GB …&lt;/p&gt;</summary><content type="html">&lt;p&gt;At work we have a couple of HP MSA P2000 G3 SANs. These are entry-level SANs that still seem to have almost all features you might want from a SAN, except for official SSD-support. &lt;/p&gt;
&lt;p&gt;It seems that the new &lt;a href="http://h18006.www1.hp.com/storage/pdfs/4AA4-6608ENW.pdf"&gt;MSA 2040&lt;/a&gt; adds support for SSDs and also provides 4 GB cache per controller instead of the somewhat meager 2GB of the P2000. &lt;/p&gt;
&lt;p&gt;Anyway, a very nice feature of the MSA P2000 G3 is the fact that the management interface also provides a well-documented API that allows you to collect detailed stats on subjects like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Overall enclosure status&lt;/li&gt;
&lt;li&gt;Reports on failed drives &lt;/li&gt;
&lt;li&gt;Controller CPU usage&lt;/li&gt;
&lt;li&gt;IOPs per controller&lt;/li&gt;
&lt;li&gt;IOPs per vdisk&lt;/li&gt;
&lt;li&gt;IOPs per disk&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href="https://www.toms-blog.com/post/nagios-hp-msa-p2000-status-and-performance-monitor/"&gt;Thomas Weaver has written a Nagios plugin&lt;/a&gt; that does that: it collects this information and in turn you can graph it with &lt;a href="http://docs.pnp4nagios.org/start"&gt;pnp4nagios&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;In more recent firmware updates HP has added support to monitor read and write I/O latency per vdisk. Latency is an important indicator for application-level performance so I was quite happy with that. &lt;/p&gt;
&lt;p&gt;As the plugin by Thomas did not support reading these parameters yet, I spent some time implementing this check and submitted the new version of check_p2000_api.php back to Thomas. &lt;/p&gt;
&lt;p&gt;Read Latency of a RAID 6 array&lt;/p&gt;
&lt;p&gt;&lt;img alt="readlatency" src="https://louwrentius.com/static/images/msareadlatency.png" /&gt;&lt;/p&gt;
&lt;p&gt;Write Latency of a RAID 6 array&lt;/p&gt;
&lt;p&gt;&lt;img alt="readlatency" src="https://louwrentius.com/static/images/msawritelatency.png" /&gt;&lt;/p&gt;
&lt;p&gt;You will notice that the write latency of this disk array is very high at times, which seems to indicate that this vdisk is taxed with too many I/O requests. &lt;/p&gt;
&lt;p&gt;I'd like to thank Thomas Weaver for writing this plugin, I think it's very useful.&lt;/p&gt;</content><category term="Storage"></category><category term="Storage"></category><category term="Nagios"></category></entry><entry><title>Creating a basic ZFS file system on Linux</title><link href="https://louwrentius.com/creating-a-basic-zfs-file-system-on-linux.html" rel="alternate"></link><published>2014-02-01T12:00:00+01:00</published><updated>2014-02-01T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2014-02-01:/creating-a-basic-zfs-file-system-on-linux.html</id><summary type="html">&lt;p&gt;Here are some notes on creating a basic ZFS file system on Linux, using &lt;a href="http://zfsonlinux.org"&gt;ZFS on Linux&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I'm documenting the scenario where I just want to create a file system that can tolerate at least a single drive failure and can be shared over NFS.&lt;/p&gt;
&lt;h3&gt;Identify the drives you want …&lt;/h3&gt;</summary><content type="html">&lt;p&gt;Here are some notes on creating a basic ZFS file system on Linux, using &lt;a href="http://zfsonlinux.org"&gt;ZFS on Linux&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I'm documenting the scenario where I just want to create a file system that can tolerate at least a single drive failure and can be shared over NFS.&lt;/p&gt;
&lt;h3&gt;Identify the drives you want to use for the ZFS pool&lt;/h3&gt;
&lt;p&gt;The ZFS on Linux project &lt;a href="http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool"&gt;advises&lt;/a&gt; not to use plain /dev/sdx (/dev/sda, etc.) devices but to use /dev/disk/by-id/ or /dev/disk/by-path device names. &lt;/p&gt;
&lt;p&gt;Device names for storage devices are not fixed, so /dev/sdx devices may not always point to the same disk device. I've been bitten by this when first experimenting with ZFS, because I did not follow this advice and then could not access my zpool after a reboot because I removed a drive from the system. &lt;/p&gt;
&lt;p&gt;So you should pick the appropriate device from the /dev/disk/by-[id|path] folder. However, it's often difficult to determine which device in those folders corresponds to an actual disk drive.&lt;/p&gt;
&lt;p&gt;So I wrote a simple tool called &lt;a href="https://github.com/louwrentius/showtools"&gt;showdisks&lt;/a&gt; which helps you identify which identifiers you need to use to create your ZFS pool.&lt;/p&gt;
&lt;p&gt;&lt;img alt="diskbypath" src="https://louwrentius.com/static/images/diskbypath.png" /&gt;&lt;/p&gt;
&lt;p&gt;You can install showdisks yourself by cloning the project:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;git clone https://github.com/louwrentius/showtools.git
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;And then just use showdisks like&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;./showdisks -sp  (-s (size) and -p (by-path) )
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;For this example, I'd like to use all the 500 GB disk drives for a six-drive RAIDZ1 vdev.
Based on the information from showdisks, this is the command to create the pool with this vdev:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zfs create tank raidz1 pci-0000:03:00.0-scsi-0:0:21:0 pci-0000:03:00.0-scsi-0:0:19:0 pci-0000:02:00.0-scsi-0:0:9:0 pci-0000:02:00.0-scsi-0:0:11:0 pci-0000:03:00.0-scsi-0:0:22:0 pci-0000:03:00.0-scsi-0:0:18:0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The 'tank' name can be anything you want, it's just a name for the pool.&lt;/p&gt;
&lt;p&gt;Please note that with newer bigger disk drives, you should test if the ashift=12 option gives you better performance. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zfs create -o ashift=12 tank raidz1 &amp;lt;devices&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I used this option on 2 TB disk drives and read performance improved twofold. &lt;/p&gt;
&lt;h3&gt;How to setup a RAID10 style pool&lt;/h3&gt;
&lt;p&gt;This is how to create the ZFS equivalent of a RAID10 setup: &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zfs create tank mirror &amp;lt;device 1&amp;gt; &amp;lt;device 2&amp;gt; mirror &amp;lt;device 3&amp;gt; &amp;lt;device 4&amp;gt; mirror &amp;lt;device 5&amp;gt; &amp;lt;device 6&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;How many drives should I use in a vdev&lt;/h3&gt;
&lt;p&gt;I've learned to use a 'power of two' (2, 4, 8, 16) number of data drives in a vdev, plus the appropriate number of drives for the parity: RAIDZ1 = 1 disk, RAIDZ2 = 2 disks, etc.&lt;/p&gt;
&lt;p&gt;So the optimal number of drives for RAIDZ1 would be 3,5,9,17. RAIDZ2 would be 4,6,10,18 and so on. Clearly in the example above with six drives in a RAIDZ1 configuration, I'm violating this rule of thumb. &lt;/p&gt;
&lt;h3&gt;How to disable the ZIL or disable sync writes&lt;/h3&gt;
&lt;p&gt;You can expect bad throughput performance if you want to use the ZIL / honour synchronous writes. For safety reasons, ZFS does honour sync writes by default; it's an important feature of ZFS to guarantee data integrity. For storage of virtual machines or databases, you should not turn off the ZIL, but use an SSD for the SLOG to get performance to acceptable levels. &lt;/p&gt;
&lt;p&gt;For a simple (home) NAS box, the ZIL is not so important and can quite safely be disabled, as long as you have your server on a UPS and have it shut down cleanly when the UPS battery runs out.&lt;/p&gt;
&lt;p&gt;This is how you turn off the ZIL / support for synchronous writes:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zfs set sync=disabled &amp;lt;pool name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Disabling sync writes is especially important if you use NFS, which issues sync writes by default. &lt;/p&gt;
&lt;p&gt;Example:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zfs set sync=disabled tank
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
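
&lt;p&gt;You can verify the setting afterwards:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zfs get sync tank
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;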

&lt;h3&gt;How to add an L2ARC cache device&lt;/h3&gt;
&lt;p&gt;Use &lt;a href="https://github.com/louwrentius/showtools"&gt;showdisks&lt;/a&gt; to lookup the actual /dev/disk/by-path identifier and add it like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zpool add tank cache &amp;lt;device&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Example:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zpool add tank cache pci-0000:00:1f.2-scsi-2:0:0:0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This is the result (on another zpool called 'server'):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@server:~# zpool status
  pool: server
 state: ONLINE
  scan: none requested
config:

    NAME                               STATE     READ WRITE CKSUM
    server                             ONLINE       0     0     0
      raidz1-0                         ONLINE       0     0     0
        pci-0000:03:04.0-scsi-0:0:0:0  ONLINE       0     0     0
        pci-0000:03:04.0-scsi-0:0:1:0  ONLINE       0     0     0
        pci-0000:03:04.0-scsi-0:0:2:0  ONLINE       0     0     0
        pci-0000:03:04.0-scsi-0:0:3:0  ONLINE       0     0     0
        pci-0000:03:04.0-scsi-0:0:4:0  ONLINE       0     0     0
        pci-0000:03:04.0-scsi-0:0:5:0  ONLINE       0     0     0
    cache
      pci-0000:00:1f.2-scsi-2:0:0:0    ONLINE       0     0     0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;How to monitor performance / I/O statistics&lt;/h3&gt;
&lt;p&gt;One time sample:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zpool iostat
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;A sample every 2 seconds:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zpool iostat 2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;More detailed information every 5 seconds:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zpool iostat -v 5
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Example output:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;                                      capacity     operations    bandwidth
pool                               alloc   free   read  write   read  write
---------------------------------  -----  -----  -----  -----  -----  -----
server                             3.54T  7.33T      4    577   470K  68.1M
  raidz1                           3.54T  7.33T      4    577   470K  68.1M
    pci-0000:03:04.0-scsi-0:0:0:0      -      -      1    143  92.7K  14.2M
    pci-0000:03:04.0-scsi-0:0:1:0      -      -      1    142  91.1K  14.2M
    pci-0000:03:04.0-scsi-0:0:2:0      -      -      1    143  92.8K  14.2M
    pci-0000:03:04.0-scsi-0:0:3:0      -      -      1    142  91.0K  14.2M
    pci-0000:03:04.0-scsi-0:0:4:0      -      -      1    143  92.5K  14.2M
    pci-0000:03:04.0-scsi-0:0:5:0      -      -      1    142  90.8K  14.2M
cache                                  -      -      -      -      -      -
  pci-0000:00:1f.2-scsi-2:0:0:0    55.9G     8M      0     70    349  8.69M
---------------------------------  -----  -----  -----  -----  -----  -----
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;How to start / stop a scrub&lt;/h3&gt;
&lt;p&gt;Start:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zfs scrub &amp;lt;pool&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Stop:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zfs scrub -s &amp;lt;pool&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Mount ZFS file systems on boot&lt;/h3&gt;
&lt;p&gt;Edit /etc/default/zfs and set this parameter:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ZFS_MOUNT=&amp;#39;yes&amp;#39;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;How to enable sharing a file system over NFS:&lt;/h3&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zfs set sharenfs=on &amp;lt;poolname&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
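
&lt;p&gt;With sharenfs enabled, ZFS exports the file system at its mountpoint, so a client can mount it like any other NFS export (the hostname and mount point below are just examples):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mount -t nfs server:/tank /mnt/tank
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;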

&lt;h3&gt;How to create a zvol for usage with iSCSI&lt;/h3&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zfs create -V 500G &amp;lt;poolname&amp;gt;/volume-name
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
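
&lt;p&gt;ZFS on Linux exposes the new zvol as a block device under /dev/zvol/, and that device is what you hand to your iSCSI target software (using 'tank' as an example pool name):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ls -l /dev/zvol/tank/volume-name
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;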

&lt;h3&gt;How to force ZFS to import the pool using disk/by-path&lt;/h3&gt;
&lt;p&gt;Edit /etc/default/zfs and add&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ZPOOL_IMPORT_PATH=/dev/disk/by-path/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Links to important ZFS information sources:&lt;/h3&gt;
&lt;p&gt;Tons of information on using ZFS on Linux by Aaron Toponce:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/"&gt;https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Understanding the ZIL (ZFS Intent Log)&lt;/p&gt;
&lt;p&gt;&lt;a href="http://nex7.blogspot.nl/2013/04/zfs-intent-log.html"&gt;http://nex7.blogspot.nl/2013/04/zfs-intent-log.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Information about 4K sector alignment problems&lt;/p&gt;
&lt;p&gt;&lt;a href="http://www.opendevs.org/ritk/zfs-4k-aligned-space-overhead.html"&gt;http://www.opendevs.org/ritk/zfs-4k-aligned-space-overhead.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Important read about using the proper number of drives in a vdev&lt;/p&gt;
&lt;p&gt;&lt;a href="http://www.opendevs.org/ritk/zfs-4k-aligned-space-overhead.html"&gt;http://forums.freenas.org/threads/getting-the-most-out-of-zfs-pools.16/&lt;/a&gt;&lt;/p&gt;</content><category term="ZFS"></category><category term="ZFS"></category></entry><entry><title>Why you should not use IPsec for VPN connectivity</title><link href="https://louwrentius.com/why-you-should-not-use-ipsec-for-vpn-connectivity.html" rel="alternate"></link><published>2014-01-28T12:00:00+01:00</published><updated>2014-01-28T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2014-01-28:/why-you-should-not-use-ipsec-for-vpn-connectivity.html</id><summary type="html">&lt;p&gt;&lt;a href="http://en.wikipedia.org/wiki/IPsec"&gt;IPsec&lt;/a&gt; is a well-known and widely-used VPN solution. It seems that it's not widely known that Niels Ferguson and Bruce Schneier performed a &lt;a href="https://www.schneier.com/paper-ipsec.html"&gt;detailed security analysis of IPsec&lt;/a&gt; and that the results were not very positive.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;We strongly discourage the use of IPsec in its current form for protection of …&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</summary><content type="html">&lt;p&gt;&lt;a href="http://en.wikipedia.org/wiki/IPsec"&gt;IPsec&lt;/a&gt; is a well-known and widely-used VPN solution. It seems that it's not widely known that Niels Ferguson and Bruce Schneier performed a &lt;a href="https://www.schneier.com/paper-ipsec.html"&gt;detailed security analysis of IPsec&lt;/a&gt; and that the results were not very positive.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;We strongly discourage the use of IPsec in its current form for protection of any kind of valuable information, and hope that future iterations of the design will be improved.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I conveniently left out the second part:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;However, we even more strongly discourage any current alternatives, and recommend IPsec when the alternative is an insecure network. Such are the realities of the world.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;To put this in context: keep in mind that this paper was released in 2003 and the actual research may even be older (1999!). &lt;a href="http://www.openvpn.net"&gt;OpenVPN&lt;/a&gt;, an open-source SSL-based VPN solution was born in 2001 and was still maturing in 2003. So there actually was no real alternative back then.&lt;/p&gt;
&lt;p&gt;It worries me that this research done by Ferguson and Schneier is more than a decade old. I've been looking for more recent articles on the current security status of IPsec, but I couldn't find much. New RFCs about IPsec have been published since, but I'm not familiar enough with the material to understand their implications. Ferguson and Schneier make a lot of recommendations in the paper to improve IPsec security, but have they actually been implemented?&lt;/p&gt;
&lt;p&gt;I did find a &lt;a href="https://www.cs.auckland.ac.nz/~pgut001/pubs/crypto_wont_help.pdf"&gt;presentation&lt;/a&gt; from 2013 by &lt;a href="http://en.wikipedia.org/wiki/Peter_Gutmann_(computer_scientist)"&gt;Peter Gutmann&lt;/a&gt; (University of Auckland). Based on his Wikipedia page, he seems to 'have some knowledge' about cryptography. The presentation addresses the Snowden leaks about the NSA and also touches on IPsec. He basically relies on the paper written by Ferguson and Schneier.&lt;/p&gt;
&lt;p&gt;But let's think about this: Ferguson and Schneier criticise the design of IPsec. It is flawed by design. That's one of the worst criticisms anything related to cryptography can get. That design has probably not changed much, from what I understand. So if their critique of IPsec is still mostly valid, that's all the more reason &lt;em&gt;not to use IPsec&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;So this is part of the conclusion and it doesn't beat around the bush: &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;We have found serious security weaknesses in all major components of IPsec.
As always in security, there is no prize for getting 90% right; you have to get
everything right. IPsec falls well short of that target, and will require some major
changes before it can possibly provide a good level of security.
What worries us more than the weaknesses we have identified is the complexity
of the system. In our opinion, current evaluation methods cannot handle
systems of such a high complexity, and current implementation methods are not
capable of creating a secure implementation of a system as complex as this.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;So if not IPsec, what should you use? I would opt to use an SSL/TLS-based VPN solution like &lt;a href="http://www.openvpn.net"&gt;OpenVPN&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;I can't vouch for the security of OpenVPN, but the well-known Dutch security firm Fox-IT has released a stripped-down version of the OpenVPN software (with some features removed) that they consider fit for (Dutch) governmental use. Not to say that you should use that particular OpenVPN version: the point is that OpenVPN is deemed secure enough for governmental usage. For whatever that's worth.&lt;/p&gt;
&lt;p&gt;At least, SSL-based VPN solutions have the benefit that they use SSL/TLS, which may have its own problems, but is not as complex as IPsec.&lt;/p&gt;</content><category term="Security"></category><category term="IPsec"></category><category term="security"></category></entry><entry><title>Achieving 450 MB/s network file transfers using Linux Bonding</title><link href="https://louwrentius.com/achieving-450-mbs-network-file-transfers-using-linux-bonding.html" rel="alternate"></link><published>2014-01-07T01:00:00+01:00</published><updated>2014-01-07T01:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2014-01-07:/achieving-450-mbs-network-file-transfers-using-linux-bonding.html</id><summary type="html">&lt;h3&gt;Linux Bonding&lt;/h3&gt;
&lt;p&gt;In this article I'd like to show the results of using regular 1 Gigabit network connections to achieve 450 MB/s file transfers over NFS. &lt;/p&gt;
&lt;p&gt;I'm &lt;a href="https://louwrentius.com/linux-network-interface-bonding-trunking-or-how-to-get-beyond-1-gbs.html"&gt;again using Linux interface bonding&lt;/a&gt; for this purpose. &lt;/p&gt;
&lt;p&gt;Linux interface bonding can be used to create a virtual network interface with the …&lt;/p&gt;</summary><content type="html">&lt;h3&gt;Linux Bonding&lt;/h3&gt;
&lt;p&gt;In this article I'd like to show the results of using regular 1 Gigabit network connections to achieve 450 MB/s file transfers over NFS. &lt;/p&gt;
&lt;p&gt;I'm &lt;a href="https://louwrentius.com/linux-network-interface-bonding-trunking-or-how-to-get-beyond-1-gbs.html"&gt;again using Linux interface bonding&lt;/a&gt; for this purpose. &lt;/p&gt;
&lt;p&gt;Linux interface bonding can be used to create a virtual network interface with the aggregate bandwidth of all the interfaces added to the bond. Two gigabit network interfaces will give you - guess what - two gigabit or ~220 MB/s of bandwidth. &lt;/p&gt;
&lt;p&gt;This bandwidth can be used by a single TCP connection.&lt;/p&gt;
&lt;p&gt;So how is this achieved? The Linux bonding kernel module has special bonding mode: mode 0 or round-robin bonding. In this mode, the kernel will stripe packets across the interfaces in the 'bond' like RAID 0 with hard drives. As with RAID 0, you get additional performance with each device you add. &lt;/p&gt;
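&lt;p&gt;As a quick back-of-the-envelope sketch (the 0.9 efficiency factor is my own assumption for protocol overhead and imperfect striping, not a measured value), the expected throughput is simply the per-link rate times the number of links:&lt;/p&gt;

```python
# Rough throughput estimate for a round-robin bond (illustrative sketch).
GBIT_IN_MB = 1000 / 8  # 1 Gbit/s is 125 MB/s on the wire

def bond_throughput_mb(links, efficiency=0.9):
    """Aggregate MB/s for a round-robin bond of 1 Gbit links.

    efficiency is an assumed fudge factor for protocol overhead
    and imperfect striping; it is not a measured value.
    """
    return links * GBIT_IN_MB * efficiency

print(bond_throughput_mb(4))  # 450.0 - in line with the results below
```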
&lt;p&gt;So I've added HP NC364T quad-port network cards to my servers. Thus each server has a theoretical bandwidth of 4 Gigabit. These HP network cards cost just 145 Euro and I even found the card for 105 Dollar on &lt;a href="http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Delectronics&amp;amp;field-keywords=NC364T&amp;amp;rh=n%3A172282%2Ck%3ANC364T"&gt;Amazon&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img alt="hp quad port nic" src="https://louwrentius.com/static/images/hpnic.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;With two servers, you just need four UTP cables to connect the two network cards and you're done. This would cost you ~300 Euro or ~200 Dollar in total. &lt;/p&gt;
&lt;p&gt;If you want to connect additional servers, you need a managed gigabit switch with VLAN-support and sufficient ports. Each additional server will use 4 ports on the switch, excluding interfaces for remote access and other purposes. &lt;/p&gt;
&lt;p&gt;Managed gigabit switches are quite inexpensive these days. I bought a 24 port switch: &lt;a href="http://www8.hp.com/ca/en/products/networking-switches/product-detail.html?oid=5304944"&gt;HP 1810-24G v2 (J9803A)&lt;/a&gt; for about 180 euros (209 Dollars on Newegg) and it can even be rack-mounted. &lt;/p&gt;
&lt;p&gt;&lt;img alt="switch" src="https://louwrentius.com/static/images/hpswitch.png" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Using VLANS&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;So you can't just use a gigabit switch and connect all these quad-port network cards to a single VLAN. I tested this scenario first and only got a maximum transfer speed of 270 MB/s while copying a file between servers over NFS.&lt;/p&gt;
&lt;p&gt;The trick is to create a &lt;em&gt;separate VLAN for every network port&lt;/em&gt;. So if you use a quad-port network card, you need four VLANs. Also, you must make sure that every port on every network card is in the same VLAN. For example, port 1 on every card needs to be in VLAN 21, port 2 in VLAN 22, and so on. You also must add the appropriate switch port to the correct VLAN. Last, you must add the network interfaces to the bond in the right order. &lt;/p&gt;
&lt;p&gt;&lt;img alt="bondingschema" src="https://louwrentius.com/static/images/bondingschema.png" /&gt;&lt;/p&gt;
&lt;p&gt;So why do you need to use VLANs? The reason is quite simple. Bonding works by spoofing the same hardware or MAC-address on all interfaces. So the switch sees the same hardware address on four ports, and thus gets confused. To which port should the packet be sent?&lt;/p&gt;
&lt;p&gt;If you put each port in its own VLAN, the 'spoofed' MAC-address is seen only once in each VLAN, so the switch won't be confused. What you are in fact doing by creating VLANs is creating four separate switches. So if you have - for example - four cheap 8-port unmanaged gigabit switches, that would work too.&lt;/p&gt;
&lt;p&gt;So assuming that you have four ethernet interfaces, this is an example of how you can create the bond:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ifenslave bond0 eth1 eth2 eth3 eth4
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Next, you just assign an IP-address to the bond0 interface, like you would with a regular eth(x) interface. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ifconfig bond0 192.168.2.10 netmask 255.255.255.0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
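&lt;p&gt;On modern systems you can build the same round-robin bond with iproute2 instead of ifenslave/ifconfig. This is just a sketch: the interface names (eth1-eth4) and the IP address are the examples used in this article, so adjust them to your own setup.&lt;/p&gt;

```shell
# Sketch: round-robin (balance-rr) bond via iproute2.
# Interface names and the address are the examples from this article.
ip link add bond0 type bond mode balance-rr
for nic in eth1 eth2 eth3 eth4; do
    ip link set "$nic" down          # a NIC must be down before enslaving
    ip link set "$nic" master bond0  # add it to the bond
done
ip addr add 192.168.2.10/24 dev bond0
ip link set bond0 up
```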

&lt;p&gt;Up to this point, I only achieved about 350 MB/s. I needed to enable &lt;em&gt;jumbo frames&lt;/em&gt; on all interfaces and the switch to achieve 450 MB/s. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ifconfig bond0 mtu 9000 up
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Next, you can just mount any NFS share over the interface and start copying files. That's all.&lt;/p&gt;
&lt;p&gt;Once I added each interface to the appropriate VLAN, I got about 450 MB/s for a single file transfer over NFS.&lt;/p&gt;
&lt;p&gt;&lt;img alt="450 MB/s" src="https://louwrentius.com/static/images/450MB.png" /&gt;&lt;/p&gt;
&lt;p&gt;Strictly speaking, I did not perform a 'cp' but a 'dd' to the NFS mount, because I don't have a disk array fast enough (yet) that can write at 450 MB/s. &lt;/p&gt;
&lt;p&gt;So for three servers, this solution will cost me 600 Euro or 520 Dollar.     &lt;/p&gt;
&lt;h3&gt;What about LAGs and LACP?&lt;/h3&gt;
&lt;p&gt;Don't configure your clients or switches for LACP, it doesn't give you the speed benefit and it's not required.     &lt;/p&gt;
&lt;h3&gt;10 Gigabit ethernet?&lt;/h3&gt;
&lt;p&gt;Frankly, it's quite expensive if you need to connect more than two servers. An entry-level 10Gbe NIC like the &lt;a href="http://ark.intel.com/products/58953/Intel-Ethernet-Converged-Network-Adapter-X540-T1"&gt;Intel X540-T1&lt;/a&gt; costs about 300 Euros or 450 Dollars. This card allows you to use Cat 6a UTP cabling. (Pricing from Dutch webshops in Euros and Newegg in Dollars.)&lt;/p&gt;
&lt;p&gt;With two of those, you can have 10Gbit ethernet between two servers for 600 Euro or 900 Dollar. If you need to connect more servers, you need a switch, and the problem is that 10Gbit switches are not cheap: an 8-port unmanaged switch from Netgear (ProSAFE Plus XS708E) costs about 720 Euros or 900 Dollars. &lt;/p&gt;
&lt;p&gt;If you want to connect three servers, you need three network cards and a switch. Three network cards will cost you 900 Euro (1350 Dollar) and the switch 720 Euro (900 Dollar), totalling 1620 Euro or 2250 Dollar. &lt;/p&gt;
&lt;p&gt;You will get higher transfer speeds, but at a significantly higher price.&lt;/p&gt;
&lt;p&gt;For many business purposes this higher price is easily justified and I would select 10 Gb over 1 Gb bonding in a heartbeat: fewer cables, higher performance, lower latency. &lt;/p&gt;
&lt;p&gt;However, bonding gigabit interfaces allows you to use off-the-shelf equipment and may be a nice compromise between cost, usability and performance. &lt;/p&gt;
&lt;h3&gt;Operating System Support&lt;/h3&gt;
&lt;p&gt;As far as I'm aware, round-robin bonding is only supported on Linux. Other operating systems do not support it.&lt;/p&gt;</content><category term="Networking"></category></entry><entry><title>Things you should consider when building a ZFS NAS</title><link href="https://louwrentius.com/things-you-should-consider-when-building-a-zfs-nas.html" rel="alternate"></link><published>2013-12-29T12:00:00+01:00</published><updated>2013-12-29T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2013-12-29:/things-you-should-consider-when-building-a-zfs-nas.html</id><summary type="html">&lt;p&gt;ZFS is a modern file system designed by Sun Microsystems, targeted at enterprise environments. Many features of ZFS also appeal to home NAS builders and with good reason. But not all features are relevant or necessary for home use. &lt;/p&gt;
&lt;p&gt;I believe that most &lt;em&gt;home users&lt;/em&gt; building their own NAS, are …&lt;/p&gt;</summary><content type="html">&lt;p&gt;ZFS is a modern file system designed by Sun Microsystems, targeted at enterprise environments. Many features of ZFS also appeal to home NAS builders and with good reason. But not all features are relevant or necessary for home use. &lt;/p&gt;
&lt;p&gt;I believe that most &lt;em&gt;home users&lt;/em&gt; building their own NAS, are just looking for a way to create a large centralised storage pool. As long as the solution can saturate gigabit ethernet, performance is not much of an issue. Workloads are typically single-client and sequential in nature. &lt;/p&gt;
&lt;p&gt;If this description rings true for your environment, there are some options regarding ZFS that are often very popular but not very relevant to you. 
Furthermore, there are also some facts that you should take into account when preparing for your own NAS build.&lt;/p&gt;
&lt;h3&gt;Expanding your pool may not be as simple as you think&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Added August 2015&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;If you are familiar with regular hardware or software RAID, you might expect to use on-line capacity expansion, where you just grow a RAID array with extra drives as you see fit. Many hardware RAID cards support this feature and Linux software RAID (MDADM) supports this too. This is very economical: just add an extra drive and you gain some space. Very flexible and simple. &lt;/p&gt;
&lt;p&gt;But ZFS does not support this approach. ZFS requires you to create a new 'RAID array' and add it to the pool. So you will lose extra drives to redundancy. &lt;/p&gt;
&lt;p&gt;To be more precise: you &lt;em&gt;cannot expand VDEVs&lt;/em&gt;. You can only &lt;em&gt;add&lt;/em&gt; VDEVs to a pool. And each VDEV requires its own redundancy. &lt;/p&gt;
&lt;p&gt;So if you start  with a single 6-disk RAIDZ2 you may end up with two 6-disk RAIDZ2 VDEVS. This means you use 4 out of 12 drives for redundancy. If you would have started out with a 10-disk RAIDZ2, you would only lose 2 drives to redundancy. Example:&lt;/p&gt;
&lt;p&gt;A: 2 x 6-disk RAIDZ2 consisting of 4 TB drives = 12 disks - 4 redundancy = 8 x 4 TB = 32 TB net capacity. &lt;/p&gt;
&lt;p&gt;B: 1 x 10-disk RAIDZ2 consisting of 4 TB drives = 10 disks - 2 redundancy = 8 x 4 TB = 32 TB net capacity. &lt;/p&gt;
&lt;p&gt;Option A costs you two extra drives at $150 each = $300, and also requires 2 extra SATA ports and chassis slots. &lt;/p&gt;
&lt;p&gt;Option B will cost you 4 extra drives upfront, space you may not need immediately. &lt;/p&gt;
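&lt;p&gt;The capacity arithmetic for options A and B can be sketched as follows:&lt;/p&gt;

```python
# Net capacity of a pool built from identical RAIDZ VDEVs (sketch).
def net_capacity_tb(vdevs, disks_per_vdev, parity, drive_tb):
    # Each VDEV loses its parity disks; the rest store data.
    data_disks = disks_per_vdev - parity
    return vdevs * data_disks * drive_tb

option_a = net_capacity_tb(vdevs=2, disks_per_vdev=6, parity=2, drive_tb=4)
option_b = net_capacity_tb(vdevs=1, disks_per_vdev=10, parity=2, drive_tb=4)
print(option_a, option_b)  # 32 32 - same net capacity, A uses two more drives
```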
&lt;p&gt;Also take into account that there is such a thing as a 'recommended number of disks in a VDEV' depending on the redundancy used. This is discussed further down.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; there is a 'trick' how you can expand a VDEV. If you replace every drive with a larger one, you can then resize the VDEV and the pool. So replacing 2 TB drives with 4 TB drives would double your capacity without adding an extra VDEV.&lt;/p&gt;
&lt;p&gt;This approach requires a full VDEV rebuild after each drive replacement. So you may understand that this takes quite some time, during which you are running with no (RAIDZ) or reduced (RAIDZ2) redundancy. But it does work. &lt;/p&gt;
&lt;p&gt;If you have a spare SAS/SATA port and power connector, you can keep the redundancy and do an 'on-line' replace of the drive. This way, you don't lose or reduce redundancy during the rebuild. This is ideal if you also have room for an additional drive in the chassis.  &lt;/p&gt;
&lt;p&gt;This can be quite some work if you do have an available SATA port, but no additional drive slots. You will have to open the chassis, find a temporary spot for the new drive and then after the rebuild, move the new drive into the slot of the old one.&lt;/p&gt;
&lt;p&gt;There is a general recommendation not to mix different VDEV sizes in a pool, but for home usage, this is not an issue. So you could - for example - expand a pool based on a 6-drive VDEV with an additional 4-drive VDEV RAIDZ2. &lt;/p&gt;
&lt;p&gt;Remember: lose one VDEV and you lose your entire pool. So I would not recommend mixing RAIDZ and RAIDZ2 VDEVs in a pool.  &lt;/p&gt;
&lt;h3&gt;You don't need a SLOG for the ZIL&lt;/h3&gt;
&lt;p&gt;Quick recap: the &lt;a href="https://blogs.oracle.com/realneel/entry/the_zfs_intent_log"&gt;ZIL&lt;/a&gt; or ZFS Intent Log is - as I understand it - only relevant for synchronous writes. If data integrity is important to an application, like a database server or a virtual machine, writes are performed synchronously. The application wants to make sure that the data is actually stored on the physical storage medium, and it waits for a confirmation from ZFS that it has done so. Only then will it continue.&lt;/p&gt;
&lt;p&gt;Asynchronous writes, on the contrary, never hit the ZIL. They are just cached in RAM and written to the VDEV in one sequential swoop when the next transaction group commit is performed (currently by default every 5 seconds). In the meantime, the application gets a confirmation from ZFS that the data is stored (a white lie) and just continues where it left off. ZFS simply caches the write in memory and actually writes the data to the storage VDEV when it sees fit.&lt;/p&gt;
&lt;p&gt;As you may understand, asynchronous writes are way faster because they can be cached and ZFS can reorder the I/O to make it more sequential and prevent random I/O from hitting the VDEV. This is what I understood from &lt;a href="http://nex7.blogspot.nl/2013/04/zfs-intent-log.html"&gt;this source&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;So if you encounter synchronous writes, they must be committed to the ZIL (thus VDEV) and this causes random I/O patterns on the VDEV, degrading performance significantly. &lt;/p&gt;
&lt;p&gt;The cool thing about ZFS is that it provides the option to store the ZIL on a dedicated device called the SLOG. This doesn't do anything for performance by itself; the secret ingredient is using a solid state drive as the SLOG, ideally in a mirror to ensure data integrity and to maintain performance in case of a SLOG device failure. &lt;/p&gt;
&lt;p&gt;For business critical environments, a separate SLOG device based on SSDs is a no-brainer. But for home use? If you don't have a SLOG, you still have a ZIL, it's only not as fast. That's not a real problem for single-client sequential throughput. &lt;/p&gt;
&lt;p&gt;For home usage, you may even consider how much you care about data integrity. That sounds strange, but the ZIL is used to recover from the event of a sudden power-loss. If your NAS is attached to a UPS, this is not much of a risk, you can perform a controlled shutdown before the batteries run out of power. The remaining risk is human error or some other catastrophic event within your NAS. &lt;/p&gt;
&lt;p&gt;So all data in rest already stored on your NAS is never at risk. It's only data that is in the process of being committed to storage that may get scrambled. But again: this is a home situation. Maybe restart your file transfer and you are done. You still have a copy of the data on the source device. This is entirely different from a setup with databases or virtual machines. &lt;/p&gt;
&lt;p&gt;Data integrity of data at rest is vitally important. The ZIL only protects data in transit. It has nothing to do with the data already committed to the VDEV.&lt;/p&gt;
&lt;p&gt;I see so many NAS builders being talked into buying specific SSDs for the ZIL when they probably won't benefit from them at all; it's just too bad. &lt;/p&gt;
&lt;h3&gt;You don't need L2ARC cache&lt;/h3&gt;
&lt;p&gt;ZFS relies heavily on caching of data to deliver decent performance, especially read performance. RAM provides the fastest cache and that is where the first level of caching lives: the ARC (Adaptive Replacement Cache). ZFS is smart and learns which data is often requested and keeps it in the ARC. &lt;/p&gt;
&lt;p&gt;But the size of the ARC is limited by the amount of RAM available. This is why you can add a second cache tier, based on SSDs. SSDs are not as fast as RAM, but still way faster than spinning disks. And they are cheaper compared to RAM memory if you look at their capacity.&lt;/p&gt;
&lt;p&gt;For additional more detailed information, go to &lt;a href="http://www.zfsbuild.com/2010/04/15/explanation-of-arc-and-l2arc/"&gt;this site&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;L2ARC is important when you have multiple users or VMs accessing the same data sets. In this case, L2ARC based on SSDs will improve performance significantly. But if we just take a look at the average home NAS build, I'm not sure how the L2ARC adds any benefit. ZFS has no problem with single-client sequential file transfers so there is no benefit in implementing a L2ARC. &lt;/p&gt;
&lt;p&gt;&lt;em&gt;Update 2015-02-08&lt;/em&gt;: There is even a downside to having an L2ARC cache. All the metadata about data stored in the L2ARC is kept in memory, eating away at your ARC and making it less effective &lt;a href="http://www.accs.com/p_and_p/ZFS/ZFS.PDF"&gt;(source)&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;You don't need deduplication and compression&lt;/h3&gt;
&lt;p&gt;For the home NAS, most data you store on it is already highly compressed and additional compression only wastes performance (Music, Videos, etc). It is a cool feature, but not so much for home use. If you are planning to store other types of data, compression actually may be of interest (documents, backups of VMs, etc). It is &lt;a href="http://wiki.illumos.org/display/illumos/LZ4+Compression"&gt;suggested by many&lt;/a&gt; (and in the comments) that with LZ4 compression, you don't lose performance (except for some CPU cycles) and with compressible data, you even gain performance, so you could just enable it and forget about it. &lt;/p&gt;
&lt;p&gt;Whereas compression does not do much harm, deduplication is often more relevant in business environments where users are sloppy and store multiple copies of the same data in different locations. I'm quite sure you don't want to sacrifice RAM and performance for ZFS to keep track of duplicates you probably don't have.&lt;/p&gt;
&lt;h3&gt;You don't need an ocean of RAM&lt;/h3&gt;
&lt;p&gt;The absolute minimum RAM for a viable ZFS setup is 4 GB but there is not a lot of headroom for ZFS here. ZFS is quite memory hungry because it uses RAM as a buffer so it can perform operations like checksums and reorder all I/O to be sequential. &lt;/p&gt;
&lt;p&gt;If you don't have sufficient buffer memory, performance will suffer. 8 GB is probably sufficient for most arrays. If your array is faster, more memory may be required to actually benefit from this performance. For maximum performance, you should have enough memory to hold 5 seconds worth of maximum write throughput ( 5 x 400MB/s = 2GB ) and leave sufficient headroom for other ZFS RAM requirements. In the example, 4 GB RAM could be sufficient. &lt;/p&gt;
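&lt;p&gt;The arithmetic from the paragraph above can be sketched as follows; the 5-second transaction group window and the 400 MB/s array speed are the example figures used in the text:&lt;/p&gt;

```python
# Sketch: RAM needed to buffer one ZFS transaction group (txg) window.
def write_buffer_gb(max_write_mb_per_s, txg_seconds=5):
    # Hold txg_seconds worth of writes at the array's maximum throughput.
    return max_write_mb_per_s * txg_seconds / 1000

print(write_buffer_gb(400))  # 2.0 GB for a 400 MB/s array
```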
&lt;p&gt;For most home users, saturating gigabit is already sufficient so you might be safe with 8 GB of RAM in most cases. More RAM may not provide much more benefit, but it will increase power consumption.&lt;/p&gt;
&lt;p&gt;There is an often cited rule that you need 1 GB of RAM for every TB of storage, but this is not true for home NAS solutions. This is only relevant for high-performance multi-user or multi-VM environments. &lt;/p&gt;
&lt;p&gt;Additional information about RAM requirements can be found &lt;a href="http://doc.freenas.org/index.php/Hardware_Recommendations"&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;You do need ECC RAM if you care about data integrity&lt;/h3&gt;
&lt;p&gt;The money saved on a SLOG or L2ARC cache is better spent on ECC RAM. &lt;/p&gt;
&lt;p&gt;ZFS does not rely on the quality of individual disks. It uses checksums to verify that disks don't lie about the data stored on them (data corruption). &lt;/p&gt;
&lt;p&gt;But ZFS can't verify the contents of RAM memory, so here ZFS relies on the reliability of the hardware. And there is a reason why we use RAID or redundant power supplies in our server equipment: hardware fails. RAM fails too. This is the reason why every server product by well-known vendors like HP, Dell, IBM and Supermicro only supports ECC memory. 
&lt;a href="http://www.zdnet.com/blog/storage/dram-error-rates-nightmare-on-dimm-street/638"&gt;RAM errors occur more frequently than you may think&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;ECC (Error-Correcting Code) RAM detects and corrects RAM errors. This is the only way you can be fairly sure that ZFS is not fed corrupted data. Keep in mind: with bad RAM, it is likely that corrupted data will be written to disk without ZFS ever being aware of it (garbage in, garbage out).&lt;/p&gt;
&lt;p&gt;Please note that the quality of your RAM memory will not directly affect any data that is at rest and already stored on your disks. Existing data will only be corrupted with bad RAM if it is modified or moved around. ZFS will probably detect checksum errors, but it will be too late by then...&lt;/p&gt;
&lt;p&gt;To me, it's simple. If you care enough about your data to use ZFS, you should also be willing to pay for ECC memory. Without ECC memory, you are giving yourself a false sense of security. ZFS was never designed for consumer hardware; it was meant to run on server hardware with ECC memory, because it was designed with data integrity as the topmost priority.&lt;/p&gt;
&lt;p&gt;There are entry-level servers that do support ECC memory and can be had fairly cheap with 4 hard drive bays, like the &lt;a href="http://www8.hp.com/us/en/products/proliant-servers/product-detail.html?oid=5379860"&gt;HP ProLiant MicroServer Gen8&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I wrote an &lt;a href="https://louwrentius.com/affordable-server-with-server-grade-hardware-part-ii.html"&gt;article&lt;/a&gt; about a reasonably priced CPU+RAM+MB combo that does support ECC memory starting at $360.&lt;/p&gt;
&lt;p&gt;If you feel lucky, go for good-quality non-ECC memory. But do understand that you are taking a risk here. &lt;/p&gt;
&lt;h3&gt;Understanding random I/O performance&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Added August 2015&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;With ZFS, the rule of thumb is this: regardless of the number of drives in a RAIDZ(2/3) VDEV, you always get roughly the random I/O performance of a &lt;em&gt;single&lt;/em&gt; drive in the VDEV&lt;sup id="fnref:singledrive"&gt;&lt;a class="footnote-ref" href="#fn:singledrive"&gt;1&lt;/a&gt;&lt;/sup&gt;. &lt;/p&gt;
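&lt;p&gt;The rule of thumb can be sketched numerically. The per-drive IOPS figure below is an assumed typical value for a 7200 RPM disk, not a measurement:&lt;/p&gt;

```python
# Sketch: rule-of-thumb random IOPS for a ZFS pool.
DRIVE_IOPS = 75  # assumed typical 7200 RPM disk, not a measured value

def pool_random_iops(vdevs, drive_iops=DRIVE_IOPS):
    # Each RAIDZ(2/3) or mirror VDEV delivers roughly the random IOPS
    # of a single drive, so the pool scales with the number of VDEVs.
    return vdevs * drive_iops

print(pool_random_iops(10))  # ten mirror pairs: about 750 IOPS
print(pool_random_iops(3))   # three RAIDZ VDEVs: about 225 IOPS
```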
&lt;p&gt;Now I want to make the case here that if you are building your own home NAS, you shouldn't care about random I/O performance too much. &lt;/p&gt;
&lt;p&gt;If you want better random I/O performance of your pool, the way to get it is to:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;add more VDEVS to your pool&lt;/li&gt;
&lt;li&gt;add more RAM/L2ARC for caching&lt;/li&gt;
&lt;li&gt;use disks with higher RPM or SSDs combined with option 1. &lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Regarding point 1:&lt;/p&gt;
&lt;p&gt;So if you want the best random I/O performance, you should just use many two-disk mirror VDEVs in the pool, essentially creating a large RAID 10. This is not very space-efficient, so it is probably not so relevant in the context of a home NAS.&lt;/p&gt;
&lt;p&gt;Example similar to RAID 10:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@bunny:~# zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
testpool  59.5K  8.92T    19K  /testpool

root@bunny:~# zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    testpool    ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        sdc     ONLINE       0     0     0
        sdd     ONLINE       0     0     0
      mirror-1  ONLINE       0     0     0
        sde     ONLINE       0     0     0
        sdf     ONLINE       0     0     0
      mirror-2  ONLINE       0     0     0
        sdg     ONLINE       0     0     0
        sdh     ONLINE       0     0     0
      mirror-3  ONLINE       0     0     0
        sdi     ONLINE       0     0     0
        sdj     ONLINE       0     0     0
      mirror-4  ONLINE       0     0     0
        sdk     ONLINE       0     0     0
        sdl     ONLINE       0     0     0
      mirror-5  ONLINE       0     0     0
        sdm     ONLINE       0     0     0
        sdn     ONLINE       0     0     0
      mirror-6  ONLINE       0     0     0
        sdo     ONLINE       0     0     0
        sdp     ONLINE       0     0     0
      mirror-7  ONLINE       0     0     0
        sdq     ONLINE       0     0     0
        sdr     ONLINE       0     0     0
      mirror-8  ONLINE       0     0     0
        sds     ONLINE       0     0     0
        sdt     ONLINE       0     0     0
      mirror-9  ONLINE       0     0     0
        sdu     ONLINE       0     0     0
        sdv     ONLINE       0     0     0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
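&lt;p&gt;For reference, a pool like this can be created with a single command. This is just a sketch: the device names are illustrative and 'zpool create' is destructive, so double-check them against your own system first.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zpool create testpool mirror sdc sdd mirror sde sdf mirror sdg sdh \
    mirror sdi sdj mirror sdk sdl mirror sdm sdn mirror sdo sdp \
    mirror sdq sdr mirror sds sdt mirror sdu sdv
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;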

&lt;p&gt;Another option, if you need better storage efficiency, is to use multiple RAIDZ or RAIDZ2 VDEVs in the pool. In a way, you're then creating the equivalent of a RAID 50 or RAID 60. &lt;/p&gt;
&lt;p&gt;Example similar to RAID 50:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@bunny:~# zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
testpool  77.5K  14.3T  27.2K  /testpool

root@bunny:~# zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
testpool    ONLINE       0     0     0
  raidz1-0  ONLINE       0     0     0
    sdc     ONLINE       0     0     0
    sdd     ONLINE       0     0     0
    sde     ONLINE       0     0     0
    sdf     ONLINE       0     0     0
    sdg     ONLINE       0     0     0
  raidz1-1  ONLINE       0     0     0
    sdh     ONLINE       0     0     0
    sdi     ONLINE       0     0     0
    sdj     ONLINE       0     0     0
    sdk     ONLINE       0     0     0
    sdl     ONLINE       0     0     0
  raidz1-2  ONLINE       0     0     0
    sdm     ONLINE       0     0     0
    sdn     ONLINE       0     0     0
    sdo     ONLINE       0     0     0
    sdp     ONLINE       0     0     0
    sdq     ONLINE       0     0     0
  raidz1-3  ONLINE       0     0     0
    sdr     ONLINE       0     0     0
    sds     ONLINE       0     0     0
    sdt     ONLINE       0     0     0
    sdu     ONLINE       0     0     0
    sdv     ONLINE       0     0     0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
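&lt;p&gt;The create command for this layout works the same way: every time the raidz keyword is repeated, a new VDEV is started. Again a sketch with illustrative device names:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zpool create testpool raidz sdc sdd sde sdf sdg raidz sdh sdi sdj sdk sdl \
    raidz sdm sdn sdo sdp sdq raidz sdr sds sdt sdu sdv
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;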

&lt;p&gt;You only need to deploy these kinds of pool/VDEV configurations if you have a valid reason to need the random I/O performance they provide. Creating fewer but larger VDEVs is often more space-efficient and will still saturate gigabit Ethernet when transferring large files. &lt;/p&gt;
&lt;h3&gt;It's ok to use multiple VDEVs of different drive sizes&lt;/h3&gt;
&lt;p&gt;This is only true in the context of a home NAS.  &lt;/p&gt;
&lt;p&gt;Let's take an example. You have an existing pool consisting of a single RAIDZ VDEV with 4 x 2 TB drives and your pool is filling up. &lt;/p&gt;
&lt;p&gt;It's then perfectly fine in the context of a home NAS to add a second VDEV consisting of a  5 x 4 TB RAIDZ. &lt;/p&gt;
&lt;p&gt;ZFS will take care of how data is distributed across the VDEVs. &lt;/p&gt;
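&lt;p&gt;Expanding a pool like this is a one-liner. Note that adding a VDEV is permanent: you cannot remove it from the pool again. A sketch, with an illustrative pool name and device names:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zpool add tank raidz sdf sdg sdh sdi sdj
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;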
&lt;p&gt;It is &lt;em&gt;NOT&lt;/em&gt; recommended to mix different RAIDZ schemas, so VDEV 1 = RAIDZ and VDEV 2 = RAIDZ2. Remember that losing a single VDEV = losing the whole pool. It doesn't make sense to mix redundancy levels. &lt;/p&gt;
&lt;h3&gt;VDEVs should consist of the optimal number of drives&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Added August 2015:&lt;/em&gt; If you use the large_blocks feature and use 1MB records, you don't need to adhere to the rule of always putting a certain number of drives in a VDEV to prevent significant loss of storage capacity. &lt;/p&gt;
&lt;p&gt;This enables you to create an 8-drive RAIDZ2 where normally you would have to create either a RAIDZ2 VDEV that consists of 6 drives or 10 drives. &lt;/p&gt;
&lt;p&gt;For home use, expanding storage by adding VDEVs is often suboptimal because you may spend more disks on redundancy than required, as explained earlier. Support for large_blocks allows you to buy, upfront, the number of disks that suits current and future needs.&lt;/p&gt;
&lt;p&gt;In my own personal case, with my 19" chassis filled with 24 drives, I would enable the large_blocks feature and create a single 24-drive RAID-Z3 VDEV to give me optimal space and still very good redundancy.&lt;/p&gt;
&lt;p&gt;The large_blocks feature is supported on ZFS on Linux since version 0.6.5 (September 2015).&lt;/p&gt;
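&lt;p&gt;Enabling it takes a pool feature flag plus a per-filesystem record size. A sketch with an illustrative pool name (setting recordsize=1M requires the large_blocks feature to be enabled first):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zpool set feature@large_blocks=enabled tank
zfs set recordsize=1M tank
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;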
&lt;p&gt;Thanks to user "SirMaster" on Reddit for introducing this feature to me.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Original advice: &lt;/p&gt;
&lt;p&gt;Depending on the type of 'RAID' you may choose for the VDEV(s) in your ZFS pool, you might want to make sure you only put in the right number of disks in the VDEV. &lt;/p&gt;
&lt;p&gt;This is important: if you don't use the right number, performance will suffer, but more importantly, you will lose storage space, which can add up to over 10% of the available capacity. That's quite a waste. &lt;/p&gt;
&lt;p&gt;This is a straight copy&amp;amp;paste from &lt;a href="http://forums.anandtech.com/showthread.php?p=35760300"&gt;sub.mesa's post&lt;/a&gt; &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;The following ZFS pool configurations are optimal for modern 4K sector harddrives:
RAID-Z: 3, 5, 9, 17, 33 drives
RAID-Z2: 4, 6, 10, 18, 34 drives
RAID-Z3: 5, 7, 11, 19, 35 drives
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Sub.mesa also explains in detail why this is true. And &lt;a href="http://www.opendevs.org/ritk/zfs-4k-aligned-space-overhead.html"&gt;here&lt;/a&gt; is another example.&lt;/p&gt;
&lt;p&gt;The gist is that you must use a power of two for your data disks and then add the number of parity disks required for your RAIDZ level on top of that. So 4 data disks + 1 parity disk (RAIDZ) is a total of 5 disks. Or 16 data disks + 2 parity disks (RAIDZ2) is 18 disks in the VDEV.&lt;/p&gt;
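&lt;p&gt;A quick back-of-the-envelope illustration with ZFS's default 128K record size shows where the loss comes from:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;128K record / 4 data disks = 32K per disk = 8 whole 4K sectors (no padding)
128K record / 5 data disks = 25.6K per disk = not a whole number of 4K sectors,
                             so stripes get padded and capacity is lost
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;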
&lt;p&gt;Take this into account when deciding on your pool configuration. Also, RAIDZ2 is absolutely recommended with more than 6-8 disks. The risk of losing a second drive during a 'rebuild' (resilver) is just too high with current high-density drives.&lt;/p&gt;
&lt;h3&gt;You don't need to limit the number of data disks in a VDEV&lt;/h3&gt;
&lt;p&gt;For home use, creating larger VDEVs is not an issue; even an 18-disk VDEV is probably fine, but don't expect any significant random I/O performance. It is always recommended to use multiple smaller VDEVs to increase random I/O performance (at the cost of capacity lost to parity), as ZFS stripes I/O requests across VDEVs. If you are building a home NAS, random I/O is probably not very relevant. &lt;/p&gt;
&lt;h3&gt;You don't need to run ZFS at home&lt;/h3&gt;
&lt;p&gt;ZFS is cool technology and it's perfectly fine to run ZFS at home. However, &lt;a href="https://louwrentius.com/should-i-use-zfs-for-my-home-nas.html"&gt;the world doesn't end if you don't&lt;/a&gt;. &lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:singledrive"&gt;
&lt;p&gt;https://blogs.oracle.com/roch/entry/when_to_and_not_to&amp;#160;&lt;a class="footnote-backref" href="#fnref:singledrive" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="ZFS"></category><category term="ZFS"></category><category term="Storage"></category></entry><entry><title>ZFS on Linux: monitor cache hit ratio</title><link href="https://louwrentius.com/zfs-on-linux-monitor-cache-hit-ratio.html" rel="alternate"></link><published>2013-12-23T12:00:00+01:00</published><updated>2013-12-23T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2013-12-23:/zfs-on-linux-monitor-cache-hit-ratio.html</id><summary type="html">&lt;p&gt;I'm performing some FIO 4K random read I/O benchmarks on a ZFS file system. Since I didn't trust the numbers I got, I wanted to know how many of the IOPs were due to cache hits rather than disk hits. &lt;/p&gt;
&lt;p&gt;This is why I wrote a …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I'm performing some FIO 4K random read I/O benchmarks on a ZFS file system. Since I didn't trust the numbers I got, I wanted to know how many of the IOPs were due to cache hits rather than disk hits. &lt;/p&gt;
&lt;p&gt;This is why I wrote a &lt;a href="https://louwrentius.com/files/architratio.sh"&gt;small shell script called archhitratio&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Sample output: &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;IOPs: 133 | ARC cache hit ratio: 48.00 % | Hitrate: 64 / Missrate: 69
IOPs: 131 | ARC cache hit ratio: 48.00 % | Hitrate: 63 / Missrate: 68
IOPs: 136 | ARC cache hit ratio: 49.00 % | Hitrate: 67 / Missrate: 69
IOPs: 128 | ARC cache hit ratio: 46.00 % | Hitrate: 59 / Missrate: 69
IOPs: 127 | ARC cache hit ratio: 46.00 % | Hitrate: 59 / Missrate: 68
IOPs: 135 | ARC cache hit ratio: 48.00 % | Hitrate: 65 / Missrate: 70
IOPs: 127 | ARC cache hit ratio: 45.00 % | Hitrate: 58 / Missrate: 69
IOPs: 125 | ARC cache hit ratio: 44.00 % | Hitrate: 56 / Missrate: 69
IOPs: 128 | ARC cache hit ratio: 46.00 % | Hitrate: 60 / Missrate: 68
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;In this example, I'm performing a random read test on a 16 GB data set. This host has 16 GB of RAM, and 6 GB of this dataset was already in memory from previous FIO runs. This is why we see a ~45% hit ratio. &lt;/p&gt;
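&lt;p&gt;If you want to roll your own version of such a script, the raw counters live in /proc/spl/kstat/zfs/arcstats on ZFS on Linux. A minimal sketch (field names assumed from that file) that samples the hit and miss counters one second apart:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;#!/bin/bash
# Print IOPs and ARC hit ratio over a one-second interval.
get() { awk -v k="$1" '$1 == k { print $3 }' /proc/spl/kstat/zfs/arcstats; }
h1=$(get hits); m1=$(get misses)
sleep 1
h2=$(get hits); m2=$(get misses)
hits=$((h2 - h1)); misses=$((m2 - m1)); total=$((hits + misses))
if [ "$total" -gt 0 ]; then
    echo "IOPs: $total | ARC cache hit ratio: $((100 * hits / total)) %"
fi
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;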
&lt;p&gt;This is a more interesting result:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;IOPs: 1404 | ARC cache hit ratio: 90.0 % | Hitrate: 1331 / Missrate: 73
IOPs: 1425 | ARC cache hit ratio: 90.0 % | Hitrate: 1350 / Missrate: 75
IOPs: 1395 | ARC cache hit ratio: 90.0 % | Hitrate: 1323 / Missrate: 72
IOPs: 1740 | ARC cache hit ratio: 90.0 % | Hitrate: 1664 / Missrate: 76
IOPs: 1351 | ARC cache hit ratio: 90.0 % | Hitrate: 1277 / Missrate: 74
IOPs: 1613 | ARC cache hit ratio: 90.0 % | Hitrate: 1536 / Missrate: 77
IOPs: 1920 | ARC cache hit ratio: 90.0 % | Hitrate: 1845 / Missrate: 75
IOPs: 1431 | ARC cache hit ratio: 90.0 % | Hitrate: 1354 / Missrate: 77
IOPs: 1675 | ARC cache hit ratio: 90.0 % | Hitrate: 1598 / Missrate: 77
IOPs: 1560 | ARC cache hit ratio: 90.0 % | Hitrate: 1484 / Missrate: 76
IOPs: 1574 | ARC cache hit ratio: 90.0 % | Hitrate: 1500 / Missrate: 74
IOPs: 2017 | ARC cache hit ratio: 90.0 % | Hitrate: 1946 / Missrate: 71
IOPs: 1696 | ARC cache hit ratio: 90.0 % | Hitrate: 1623 / Missrate: 73
IOPs: 1776 | ARC cache hit ratio: 90.0 % | Hitrate: 1702 / Missrate: 74
IOPs: 1671 | ARC cache hit ratio: 90.0 % | Hitrate: 1597 / Missrate: 74
IOPs: 1729 | ARC cache hit ratio: 90.0 % | Hitrate: 1656 / Missrate: 73
IOPs: 1902 | ARC cache hit ratio: 90.0 % | Hitrate: 1828 / Missrate: 74
IOPs: 2029 | ARC cache hit ratio: 90.0 % | Hitrate: 1956 / Missrate: 73
IOPs: 2228 | ARC cache hit ratio: 90.0 % | Hitrate: 2161 / Missrate: 67
IOPs: 2289 | ARC cache hit ratio: 90.0 % | Hitrate: 2216 / Missrate: 73
IOPs: 2385 | ARC cache hit ratio: 90.0 % | Hitrate: 2277 / Missrate: 108
IOPs: 2595 | ARC cache hit ratio: 90.0 % | Hitrate: 2524 / Missrate: 71
IOPs: 2940 | ARC cache hit ratio: 90.0 % | Hitrate: 2872 / Missrate: 68
IOPs: 2984 | ARC cache hit ratio: 90.0 % | Hitrate: 2872 / Missrate: 112
IOPs: 2622 | ARC cache hit ratio: 90.0 % | Hitrate: 2385 / Missrate: 237
IOPs: 1518 | ARC cache hit ratio: 90.0 % | Hitrate: 1461 / Missrate: 57
IOPs: 3221 | ARC cache hit ratio: 90.0 % | Hitrate: 3150 / Missrate: 71
IOPs: 3745 | ARC cache hit ratio: 90.0 % | Hitrate: 3674 / Missrate: 71
IOPs: 3363 | ARC cache hit ratio: 90.0 % | Hitrate: 3292 / Missrate: 71
IOPs: 3931 | ARC cache hit ratio: 90.0 % | Hitrate: 3856 / Missrate: 75
IOPs: 3765 | ARC cache hit ratio: 90.0 % | Hitrate: 3689 / Missrate: 76
IOPs: 4845 | ARC cache hit ratio: 90.0 % | Hitrate: 4772 / Missrate: 73
IOPs: 4422 | ARC cache hit ratio: 90.0 % | Hitrate: 4350 / Missrate: 72
IOPs: 5602 | ARC cache hit ratio: 90.0 % | Hitrate: 5531 / Missrate: 71
IOPs: 5351 | ARC cache hit ratio: 90.0 % | Hitrate: 5279 / Missrate: 72
IOPs: 6075 | ARC cache hit ratio: 90.0 % | Hitrate: 6004 / Missrate: 71
IOPs: 6586 | ARC cache hit ratio: 90.0 % | Hitrate: 6515 / Missrate: 71
IOPs: 7974 | ARC cache hit ratio: 90.0 % | Hitrate: 7907 / Missrate: 67
IOPs: 4434 | ARC cache hit ratio: 90.0 % | Hitrate: 4180 / Missrate: 254
IOPs: 9793 | ARC cache hit ratio: 90.0 % | Hitrate: 9721 / Missrate: 72
IOPs: 9395 | ARC cache hit ratio: 90.0 % | Hitrate: 9300 / Missrate: 95
IOPs: 6171 | ARC cache hit ratio: 90.0 % | Hitrate: 6089 / Missrate: 82
IOPs: 9209 | ARC cache hit ratio: 90.0 % | Hitrate: 9142 / Missrate: 67
IOPs: 14883 | ARC cache hit ratio: 90.0 % | Hitrate: 14817 / Missrate: 66
IOPs: 11304 | ARC cache hit ratio: 90.0 % | Hitrate: 11152 / Missrate: 152
IOPs: 228 | ARC cache hit ratio: 30.0 % | Hitrate: 71 / Missrate: 157
IOPs: 8321 | ARC cache hit ratio: 90.0 % | Hitrate: 8072 / Missrate: 249
IOPs: 15550 | ARC cache hit ratio: 90.0 % | Hitrate: 15450 / Missrate: 100
IOPs: 11819 | ARC cache hit ratio: 90.0 % | Hitrate: 11683 / Missrate: 136
IOPs: 28630 | ARC cache hit ratio: 90.0 % | Hitrate: 28367 / Missrate: 263
IOPs: 40484 | ARC cache hit ratio: 90.0 % | Hitrate: 40409 / Missrate: 75
IOPs: 104501 | ARC cache hit ratio: 90.0 % | Hitrate: 103982 / Missrate: 519
IOPs: 164483 | ARC cache hit ratio: 90.0 % | Hitrate: 163997 / Missrate: 486
IOPs: 229729 | ARC cache hit ratio: 90.0 % | Hitrate: 228956 / Missrate: 773
IOPs: 236479 | ARC cache hit ratio: 90.0 % | Hitrate: 235886 / Missrate: 593
IOPs: 249232 | ARC cache hit ratio: 90.0 % | Hitrate: 248836 / Missrate: 396
IOPs: 259156 | ARC cache hit ratio: 90.0 % | Hitrate: 258968 / Missrate: 188
IOPs: 276099 | ARC cache hit ratio: 90.0 % | Hitrate: 275857 / Missrate: 242
IOPs: 249382 | ARC cache hit ratio: 90.0 % | Hitrate: 249287 / Missrate: 95
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;What does this result mean? The RAM size is 16 GB and the test data size is only 6 GB. If you just continue performing random I/O, eventually all data will be in RAM. I believe that here you witness the moment when all data is in RAM and the already high IOPs go through the roof (250K IOPS). However, I cannot explain the increase in the Missrate. &lt;/p&gt;</content><category term="ZFS"></category><category term="ZFS"></category></entry><entry><title>An affordable server platform based on server-grade hardware</title><link href="https://louwrentius.com/an-affordable-server-platform-based-on-server-grade-hardware.html" rel="alternate"></link><published>2013-12-13T12:00:00+01:00</published><updated>2013-12-13T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2013-12-13:/an-affordable-server-platform-based-on-server-grade-hardware.html</id><summary type="html">&lt;p&gt;&lt;strong&gt;Updated post (June 2014) &lt;a href="https://louwrentius.com/affordable-server-with-server-grade-hardware-part-ii.html"&gt;found HERE&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;There are some reasons why you should consider buying true server-grade hardware when building a server, whether it's for home or business use. &lt;/p&gt;
&lt;p&gt;This is why I want to introduce you to the &lt;a href="http://www.supermicro.nl/products/motherboard/xeon/c202_c204/x9scm-f.cfm"&gt;Supermicro X9SCM-F motherboard&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img alt="motherboard" src="http://www.supermicro.nl/a_images/products/Xeon/C204/X9SCM-F_spec.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;This motherboard costs about $160 or €160, which …&lt;/p&gt;</summary><content type="html">&lt;p&gt;&lt;strong&gt;Updated post (June 2014) &lt;a href="https://louwrentius.com/affordable-server-with-server-grade-hardware-part-ii.html"&gt;found HERE&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;There are some reasons why you should consider buying true server-grade hardware when building a server, whether it's for home or business use. &lt;/p&gt;
&lt;p&gt;This is why I want to introduce you to the &lt;a href="http://www.supermicro.nl/products/motherboard/xeon/c202_c204/x9scm-f.cfm"&gt;Supermicro X9SCM-F motherboard&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img alt="motherboard" src="http://www.supermicro.nl/a_images/products/Xeon/C204/X9SCM-F_spec.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;This motherboard costs about $160 or €160, which is more than a desktop grade motherboard, but you get quite a lot in return.&lt;/p&gt;
&lt;h3&gt;Remote KVM over IP&lt;/h3&gt;
&lt;p&gt;First of all, this motherboard has a separate network interface dedicated to an on-board Keyboard-Video-Mouse interface &lt;a href="http://www.supermicro.nl/products/nfo/IPMI.cfm"&gt;(IPMI)&lt;/a&gt;. This interface allows you to power the server on or off and enter the BIOS, all through a web interface over the network. You never need to be in the vicinity of your server unless you need to perform some hardware maintenance. &lt;/p&gt;
&lt;h3&gt;Support for ECC (Error-correcting Code)  RAM&lt;/h3&gt;
&lt;p&gt;Server-grade hardware manufacturers like Supermicro, Dell and HP all ship their servers with ECC RAM. This type of memory is more expensive than regular RAM, but based on the name, you may guess the benefit: it detects and corrects memory errors. If you truly care about data integrity and availability, this is a recommended feature. &lt;/p&gt;
&lt;h3&gt;Support for both cheap or faster and more expensive processors&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Cheap processor with ECC RAM support&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The cheapest solution seems to be the Intel Pentium G2030 @ 3.00 GHz. For only 72 dollars or 52 euros it can be yours, and it supports ECC memory. Performance is not stellar but sufficient for most NAS builds. More proof that a build based on ECC RAM does not have to be expensive.&lt;/p&gt;
&lt;p&gt;One step up the ladder is the &lt;a href="http://ark.intel.com/products/65693/"&gt;Intel Core i3-3220&lt;/a&gt;. It retails for about $124 or €125 and it supports ECC memory, something I didn't expect from the i3 series. At 3.3 GHz it's quite fast, and it is a dual-core processor with hyper-threading support. The processor also supports virtualisation (VT-x), but it does not support I/O virtualisation (VT-d). &lt;/p&gt;
&lt;p&gt;&lt;em&gt;Faster processor with full virtualisation support&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;At $229 or €200 the &lt;a href="http://ark.intel.com/products/65732/"&gt;Intel Xeon E3-1230 V2&lt;/a&gt; costs twice the price of the Core i3, but you get twice the processor cores (quad-core) with hyper-threading, Turbo Boost up to 3.7 GHz and full virtualisation support (both VT-x and VT-d). For the money, you get a beast of a processor with ECC RAM support. &lt;/p&gt;
&lt;p&gt;&lt;em&gt;Overview of CPU's&lt;/em&gt;&lt;/p&gt;
&lt;table&gt;
&lt;tr&gt;&lt;td&gt;CPU&lt;/td&gt;&lt;td&gt;Passmark score&lt;/td&gt;&lt;td&gt;Price in Euro&lt;/td&gt;&lt;td&gt;Price in Dollars&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Intel Pentium G2030 @ 3.00GHz&lt;/td&gt;&lt;td&gt;3008&lt;/td&gt;&lt;td&gt;52 Euro&lt;/td&gt;&lt;td&gt;72 Dollar*&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Intel Core i3-3220 @ 3.30GHz&lt;/td&gt;&lt;td&gt;4226&lt;/td&gt;&lt;td&gt;97 Euro&lt;/td&gt;&lt;td&gt;125 Dollar&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Intel Xeon E3-1230 V2 @ 3.30GHz&lt;/td&gt;&lt;td&gt;8890&lt;/td&gt;&lt;td&gt;196 Euro&lt;/td&gt;&lt;td&gt;230 Dollar&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;

&lt;ul&gt;
&lt;li&gt;Dollars are from Newegg (* is estimate).&lt;/li&gt;
&lt;li&gt;Euros are including taxes.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Support for 32 GB RAM&lt;/h3&gt;
&lt;p&gt;32 GB is quite some RAM and the maximum amount supported by the free version of VMware ESXi (though I've been told ESXi 5.5 no longer has this limit). Basically, it covers entry-level virtualisation or web-application needs. The 32 GB RAM limit is one of the biggest constraints of this platform, but it should be sufficient for most applications except large virtualisation loads or big databases. &lt;/p&gt;
&lt;p&gt;32 GB is more than enough for everyone who likes to build a beefy storage server. &lt;/p&gt;
&lt;h3&gt;Expandability: 4 x PCI-e 8x (physical size) slots&lt;/h3&gt;
&lt;p&gt;This is probably why the &lt;a href="http://www.supermicro.nl/products/motherboard/xeon/c202_c204/x9scm-f.cfm"&gt;Supermicro X9SCM-F motherboard&lt;/a&gt; is so interesting for NAS or storage builders. It has four PCI-e slots in an 8x physical form factor. Two of them are PCI-e 2.0 4x in a physical 8x slot and the other two are true PCI-e 3.0 8x slots. But even a PCI-e 2.0 4x slot provides you with 4 x 500 MB/s = 2000 MB/s. The 8x slots have 8 lanes at 985 MB/s each, totalling almost 8 GB/s! Let's be honest: even the PCI-e 2.0 4x slots provide more than enough bandwidth to power any disk controller or network interface.&lt;/p&gt;
&lt;p&gt;Because there are four slots, there are several possibilities. For example, you could populate them with four &lt;a href="http://www.redbooks.ibm.com/abstracts/tips0740.html"&gt;IBM M1015 HBA's&lt;/a&gt; providing you with 32 SAS/SATA ports for a total of 32 disks. With the six on-board SATA ports, you could theoretically connect 38 disks. &lt;/p&gt;
&lt;p&gt;If you put only 3 x &lt;a href="http://www.redbooks.ibm.com/abstracts/tips0740.html"&gt;IBM M1015 HBA's&lt;/a&gt; in the motherboard you will leave room for fast networking, like a quad-port gigabit network card. You can then put the four gigabit ports into &lt;a href="https://louwrentius.com/linux-network-interface-bonding-trunking-or-how-to-get-beyond-1-gbs.html"&gt;Linux interface bonding&lt;/a&gt; to achieve 400 MB/s network transfer speeds. &lt;/p&gt;
&lt;p&gt;&lt;img alt="motherboard" src="https://louwrentius.com/static/images/3xm1015-in-X9SCM-F.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;You could also consider implementing 10 Gbit network cards (more expensive) or look into Fibre Channel or InfiniBand network connectivity. &lt;/p&gt;
&lt;p&gt;With those four PCI-e slots, you can come a very long way. &lt;/p&gt;
&lt;h3&gt;Two on-board Gigabit network interfaces&lt;/h3&gt;
&lt;p&gt;The motherboard supports two on-board network interfaces and you can use them to &lt;a href="https://louwrentius.com/linux-network-interface-bonding-trunking-or-how-to-get-beyond-1-gbs.html"&gt;bond&lt;/a&gt; them together for extra performance and/or fail-over. &lt;/p&gt;
&lt;p&gt;You could also use one interface for regular LAN traffic and the other one for iSCSI or other protocols. &lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;I believe the &lt;a href="http://www.supermicro.nl/products/motherboard/xeon/c202_c204/x9scm-f.cfm"&gt;Supermicro X9SCM-F motherboard&lt;/a&gt; is a very interesting all-round platform for any type of server and particularly interesting as a platform for storage servers. &lt;/p&gt;
&lt;p&gt;If you take the motherboard ($160), 8 GB RAM ($75) and entry level processor ($125) you get a server-grade platform for $360. The same platform with 16 GB RAM will cost you about $435. The prices in Euros will be about the same.&lt;/p&gt;</content><category term="Storage"></category><category term="storage"></category></entry><entry><title>Eztables: simple yet powerful firewall configuration for Linux</title><link href="https://louwrentius.com/eztables-simple-yet-powerful-firewall-configuration-for-linux.html" rel="alternate"></link><published>2013-11-16T12:00:00+01:00</published><updated>2013-11-16T12:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2013-11-16:/eztables-simple-yet-powerful-firewall-configuration-for-linux.html</id><summary type="html">&lt;p&gt;I've created and released &lt;a href="http://eztables.net"&gt;Eztables&lt;/a&gt; on Github. Anyone who ever has a need to setup a firewall on Linux may be interested in this project. &lt;/p&gt;
&lt;p&gt;It doesn't matter if you need to protect a laptop, server or want to setup a network firewall. Eztables supports it all.&lt;/p&gt;
&lt;p&gt;If you're not …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I've created and released &lt;a href="http://eztables.net"&gt;Eztables&lt;/a&gt; on Github. Anyone who ever has a need to setup a firewall on Linux may be interested in this project. &lt;/p&gt;
&lt;p&gt;It doesn't matter if you need to protect a laptop, server or want to setup a network firewall. Eztables supports it all.&lt;/p&gt;
&lt;p&gt;If you're not afraid to touch the command line and edit a text file, you may be quite pleased with Eztables. &lt;/p&gt;
&lt;p&gt;&lt;a href="http://eztables.net"&gt;Go check it out!&lt;/a&gt;&lt;/p&gt;</content><category term="Networking"></category><category term="Firewall"></category><category term="Networking"></category><category term="iptables"></category><category term="Security"></category></entry><entry><title>Script that shows smart values of all disks'</title><link href="https://louwrentius.com/script-that-shows-smart-values-of-all-disks.html" rel="alternate"></link><published>2013-10-05T01:00:00+02:00</published><updated>2013-10-05T01:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2013-10-05:/script-that-shows-smart-values-of-all-disks.html</id><summary type="html">&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Please use &lt;a href="https://github.com/louwrentius/showtools"&gt;this tool on github&lt;/a&gt; instead of this ancient script.&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;

&lt;p&gt;So you have a Linux system with a lot of hard drives. If you want to quickly check on some key SMART values to determine the health of individual disks, you might be interested in this script.&lt;/p&gt;
&lt;p&gt;I wrote …&lt;/p&gt;</summary><content type="html">&lt;hr&gt;
&lt;p&gt;&lt;strong&gt;Please use &lt;a href="https://github.com/louwrentius/showtools"&gt;this tool on github&lt;/a&gt; instead of this ancient script.&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;

&lt;p&gt;So you have a Linux system with a lot of hard drives. If you want to quickly check on some key SMART values to determine the health of individual disks, you might be interested in this script.&lt;/p&gt;
&lt;p&gt;I wrote a small Python script called &lt;a href="https://louwrentius.com/files/showsmart"&gt;'showsmart'&lt;/a&gt; that displays key SMART values for all disks that support SMART. This has been tested on Ubuntu and Debian.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/showsmart.png"&gt;&lt;img alt="showsmart" src="https://louwrentius.com/static/images/showsmart.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;REALLOC. = Reallocated Sector Count &lt;/li&gt;
&lt;li&gt;PENDING = Current Pending Sector&lt;/li&gt;
&lt;li&gt;CRC ERR. = UDMA CRC ERROR&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This script only requires python and smartctl (smartmontools).&lt;/p&gt;
&lt;p&gt;I hope someone will find it useful.&lt;/p&gt;</content><category term="Storage"></category><category term="Storage"></category><category term="Python"></category><category term="Script"></category><category term="smartctl"></category></entry><entry><title>Script that shows relevant disk information</title><link href="https://louwrentius.com/script-that-shows-relevant-disk-information.html" rel="alternate"></link><published>2013-10-02T01:00:00+02:00</published><updated>2013-10-02T01:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2013-10-02:/script-that-shows-relevant-disk-information.html</id><content type="html">&lt;p&gt;I wrote a small Python script called &lt;a href="https://louwrentius.com/files/showdisks"&gt;'showdisks'&lt;/a&gt; that displays relevant information about any physical storage devices supported by hdparm. &lt;/p&gt;
&lt;p&gt;Information such as model and capacity are shown, but also controller and device path.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/showdisks01.png"&gt;&lt;img alt="showdisks" src="https://louwrentius.com/static/images/showdisks01.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This script only requires python and hdparm.&lt;/p&gt;
&lt;p&gt;I hope someone will find it useful.&lt;/p&gt;</content><category term="Storage"></category><category term="Storage"></category><category term="Python"></category><category term="Script"></category></entry><entry><title>Finding a good blu-ray player for Mac OS X</title><link href="https://louwrentius.com/finding-a-good-blu-ray-player-for-mac-os-x.html" rel="alternate"></link><published>2013-09-22T18:00:00+02:00</published><updated>2013-09-22T18:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2013-09-22:/finding-a-good-blu-ray-player-for-mac-os-x.html</id><summary type="html">&lt;p&gt;I find playing a Blu-ray movie on my Mac cumbersome. I've been using &lt;a href="http://www.plexapp.com"&gt;Plex&lt;/a&gt;, &lt;a href="http://xbmc.org"&gt;XBMC&lt;/a&gt; and &lt;a href="http://www.videolan.org/vlc/index.html"&gt;VLC&lt;/a&gt; but these free open-source products are all a usability nightmare. &lt;/p&gt;
&lt;p&gt;To play a Blu-ray movie, you have to perform these steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;right-click on the BDMV file&lt;/li&gt;
&lt;li&gt;choose 'show packet contents'&lt;/li&gt;
&lt;li&gt;go to the …&lt;/li&gt;&lt;/ol&gt;</summary><content type="html">&lt;p&gt;I find playing a Blu-ray movie on my Mac cumbersome. I've been using &lt;a href="http://www.plexapp.com"&gt;Plex&lt;/a&gt;, &lt;a href="http://xbmc.org"&gt;XBMC&lt;/a&gt; and &lt;a href="http://www.videolan.org/vlc/index.html"&gt;VLC&lt;/a&gt; but these free open-source products are all a usability nightmare. &lt;/p&gt;
&lt;p&gt;To play a Blu-ray movie, you have to perform these steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;right-click on the BDMV file&lt;/li&gt;
&lt;li&gt;choose 'show packet contents'&lt;/li&gt;
&lt;li&gt;go to the STREAM folder&lt;/li&gt;
&lt;li&gt;sort the files by size (from large to small)&lt;/li&gt;
&lt;li&gt;select the biggest m2ts file and open it in the appropriate player&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I got so fed-up with this process that I started searching for any product that just allows me to point it to a folder with Blu-ray content and friggin' play it. Fortunately, there is such a product for Mac OS X, it's called &lt;a href="http://www.macblurayplayer.com"&gt;Mac Blu-ray Player&lt;/a&gt; and it's a paid application. I paid 36 euro (48 dollar) including taxes. &lt;/p&gt;
&lt;p&gt;Those 36 euros are well spent. No, it is not free, and no, it is not open-source, but I don't care. It is a good product. If you value your time I highly recommend buying this software. &lt;/p&gt;
&lt;p&gt;I don't have anything against free or open-source software, but I do have a grudge against software that is not user-friendly. If software is not easy to use, something you would expect from a media player, it's broken.&lt;/p&gt;
&lt;p&gt;Fortunately, you don't have to trust me on my word, you can download a free trial that seems fully functional, it only shows a trial message when playing a movie. &lt;/p&gt;</content><category term="Uncategorized"></category><category term="blu-ray"></category><category term="Mac OS X"></category></entry><entry><title>Linux: script that creates table of network interface properties</title><link href="https://louwrentius.com/linux-script-that-creates-table-of-network-interface-properties.html" rel="alternate"></link><published>2013-08-15T00:00:00+02:00</published><updated>2013-08-15T00:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2013-08-15:/linux-script-that-creates-table-of-network-interface-properties.html</id><summary type="html">&lt;p&gt;My server has 5 network interfaces and I wanted a quick overview of some properties. There may be an existing linux command for this but I couldn't find it so I quickly wrote my 
own script &lt;a href="/static/files/showinterfaces"&gt;(download)&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;This is the output:&lt;/p&gt;
&lt;p&gt;&lt;img alt="showinterfacesimage" src="/static/images/showinterfaces01.png" /&gt;&lt;/p&gt;
&lt;p&gt;The only requirement for this script is that you …&lt;/p&gt;</summary><content type="html">&lt;p&gt;My server has 5 network interfaces and I wanted a quick overview of some properties. There may be an existing linux command for this but I couldn't find it so I quickly wrote my 
own script &lt;a href="/static/files/showinterfaces"&gt;(download)&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;This is the output:&lt;/p&gt;
&lt;p&gt;&lt;img alt="showinterfacesimage" src="/static/images/showinterfaces01.png" /&gt;&lt;/p&gt;
&lt;p&gt;The only requirement for this script is that you have 'ethtool' installed. &lt;/p&gt;
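&lt;p&gt;If you just want a rough equivalent without my script, a sketch like this (an assumption on my part: Linux with the /sys/class/net pseudo-filesystem; ethtool not required) prints a few properties per interface:&lt;/p&gt;

```shell
# Sketch: print name, state, MTU and MAC for each network
# interface, using the /sys/class/net pseudo-filesystem (Linux only).
for dev in /sys/class/net/*; do
    name=$(basename "$dev")
    state=$(cat "$dev/operstate" 2>/dev/null)
    mtu=$(cat "$dev/mtu" 2>/dev/null)
    mac=$(cat "$dev/address" 2>/dev/null)
    printf '%-10s %-8s %-6s %s\n' "$name" "$state" "$mtu" "$mac"
done
```

It doesn't show link speed like ethtool does, but it needs no extra packages.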
&lt;p&gt;&lt;em&gt;Update 2013-08-17&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I rewrote the script in Python &lt;a href="/static/files/showifs"&gt;(download)&lt;/a&gt; so I can format the table dynamically instead of the
ugly hacks I used in the bash script.&lt;/p&gt;</content><category term="Networking"></category><category term="Linux"></category><category term="Networking"></category></entry><entry><title>How to compile HAProxy from source and setup a basic configuration</title><link href="https://louwrentius.com/how-to-compile-haproxy-from-source-and-setup-a-basic-configuration.html" rel="alternate"></link><published>2013-08-14T01:00:00+02:00</published><updated>2013-08-14T01:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2013-08-14:/how-to-compile-haproxy-from-source-and-setup-a-basic-configuration.html</id><summary type="html">&lt;p&gt;To learn more about HAProxy I decided to compile it from source and use it to load-balance traffic to louwrentius.com across two different web servers.&lt;/p&gt;
&lt;p&gt;I run HAProxy on a VPS based on Ubuntu 12.04 LTS. Let's dive right in.&lt;/p&gt;
&lt;p&gt;First, we need to download the source. Don't …&lt;/p&gt;</summary><content type="html">&lt;p&gt;To learn more about HAProxy I decided to compile it from source and use it to load-balance traffic to louwrentius.com across two different web servers.&lt;/p&gt;
&lt;p&gt;I run HAProxy on a VPS based on Ubuntu 12.04 LTS. Let's dive right in.&lt;/p&gt;
&lt;p&gt;First, we need to download the source. Don't copy/paste the exact commands below; you should download the latest version of HAProxy. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;cd /usr/src
wget &amp;quot;http://haproxy.1wt.eu/download/1.4/src/haproxy-1.4.24.tar.gz&amp;quot;
tar xzf haproxy-1.4.24.tar.gz
cd haproxy-1.4.24
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Before you can compile software, you must make sure you have a working build environment. On Ubuntu or Debian, you should run:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;apt-get install build-essential
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If you open the README file in the root directory, you will find detailed instructions on how to compile HAProxy; it is really straightforward. &lt;/p&gt;
&lt;h3&gt;Compiling HAProxy&lt;/h3&gt;
&lt;h2&gt;Best CPU performance&lt;/h2&gt;
&lt;p&gt;The manual states that by default, it will compile HAProxy with no CPU-specific optimisations. To enable CPU-specific optimisations, you need to use the 'native' option.&lt;/p&gt;
&lt;p&gt;The extra argument we are supplying to 'make' will be:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;CPU=native
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Libpcre support&lt;/h2&gt;
&lt;p&gt;The README recommends compiling HAProxy with libpcre, as it performs significantly better than the regex implementation in libc. You need to install the libpcre development package like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;apt-get install libpcre3-dev
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The extra argument we are supplying to 'make' will be:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;USE_PCRE=1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Splicing support&lt;/h2&gt;
&lt;p&gt;A Linux-specific feature is support for the splice() system call. This system call allows data to be moved between file descriptors within kernel space, not touching user space. It entirely depends on your setup if this feature will be of any use to you. As splicing can be disabled within the configuration file of HAProxy, I would recommend compiling HAProxy with support for splicing. &lt;/p&gt;
&lt;p&gt;The extra argument we are supplying to 'make' will be:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;USE_LINUX_SPLICE=1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Transparent mode support&lt;/h2&gt;
&lt;p&gt;I &lt;a href="http://blog.loadbalancer.org/configure-haproxy-with-tproxy-kernel-for-full-transparent-proxy/"&gt;learned&lt;/a&gt; that HAProxy also supports a transparent mode where it seems to 'spoof' the client IP-address to the backend servers. This way, the backend servers see the actual client IP-address, not the IP-address of the HAProxy load-balancer(s).&lt;/p&gt;
&lt;p&gt;For this setup to work, you need additional firewall rules and you must meet some routing requirements. I'm not sure how important this is, and the linked article also mentions a work-around where an additional HTTP header is used: X-Forwarded-For.&lt;/p&gt;
&lt;p&gt;I found &lt;a href="http://www.cyberciti.biz/faq/nginx-extract-the-clients-real-ip-from-x-forwarded-for-header/"&gt;this&lt;/a&gt; article about how to configure lighttpd to log the x-forwarded-for header. &lt;a href="http://wiki.nginx.org/HttpRealipModule"&gt;Here&lt;/a&gt; are some instructions for Nginx.&lt;/p&gt;
&lt;p&gt;The extra argument we are supplying to 'make' will be:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;USE_LINUX_TPROXY=1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
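&lt;p&gt;As a sketch of the X-Forwarded-For work-around mentioned above (this uses the standard 'option forwardfor' keyword, but verify it against the documentation of your HAProxy version):&lt;/p&gt;

```
frontend http-in
    bind *:80
    # Append the client IP-address to each request in an
    # X-Forwarded-For header, so backends can log the real source.
    option forwardfor
    default_backend servers
```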

&lt;h2&gt;Encrypted password support&lt;/h2&gt;
&lt;p&gt;It's possible to limit access to HAProxy features (like statistics) to specific users and their passwords. These passwords can be stored in plain-text or as a (more secure) hash of the password, &lt;a href="http://code.google.com/p/haproxy-docs/wiki/Userlists"&gt;using crypt&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;The extra argument we are supplying to 'make' will be:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;USE_LIBCRYPT=1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Compiling HAProxy&lt;/h2&gt;
&lt;p&gt;If we use all of the discussed options, our make command looks like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;make TARGET=custom CPU=native USE_PCRE=1 USE_LIBCRYPT=1 USE_LINUX_SPLICE=1 USE_LINUX_TPROXY=1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;Installing HAProxy&lt;/h2&gt;
&lt;p&gt;By default, the following command installs HAProxy under /usr/local:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;make install
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If you want to start HAProxy at boot time, you need a startup script. HAProxy does provide a startup script for Red Hat-based distros, but not for Debian-based distros. &lt;/p&gt;
&lt;p&gt;HAProxy is also available pre-compiled as an Ubuntu or Debian package. These packages also contain a startup script. I used such a script and modified it to work with the HAProxy version I compiled from source. Basically, I only altered some paths; you can find it &lt;a href="https://louwrentius.com/files/haproxy"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Configuration&lt;/h3&gt;
&lt;p&gt;HAProxy is very versatile and the actual configuration will entirely depend on your specific needs. I will document some basic scenarios with a few examples. &lt;/p&gt;
&lt;p&gt;HAProxy has many configuration options, but don't worry, those are often &lt;a href="http://cbonte.github.io/haproxy-dconv/configuration-1.4.html"&gt;well-documented&lt;/a&gt;. &lt;/p&gt;
&lt;h2&gt;Scenario 1: Load-balancing&lt;/h2&gt;
&lt;p&gt;In this scenario, we have one load balancer based on HAProxy and its goal is to load-balance traffic across two backend HTTP servers.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;global
    daemon
    user haproxy
    group haproxy
    chroot /home/haproxy
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend servers

backend servers
    balance roundrobin  
    server ws01 1.1.1.1:80 
    server ws02 1.1.1.2:80
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Reading the global section, we learn that HAProxy should run as a daemon, that it should run as a specific system user and thus drop all privileges after startup. It also should chroot to /home/haproxy, a directory which should be empty and not writable by the HAProxy user or group. HAProxy will permit at most 256 simultaneous connections. &lt;/p&gt;
&lt;p&gt;The defaults section tells us that we are running in HTTP mode. HAProxy can load-balance any TCP traffic. In HTTP mode, it can understand and read HTTP header information and apply different actions, allowing for more control. &lt;/p&gt;
&lt;p&gt;Now we encounter the interesting part. The default_backend keyword shows that all traffic entering on TCP-port 80 should be directed to the backend 'servers'. The 'backend' section contains the actual backend servers that will be able to handle traffic. The load-balancing algorithm used is round-robin: every web server is used in turn. Visitor 1 hits webserver 1. Visitor 2 hits webserver 2. Visitor 3 hits webserver 1, and so on.&lt;/p&gt;
&lt;h2&gt;Scenario 2: Fail-over&lt;/h2&gt;
&lt;p&gt;In scenario 1, we only discussed load-balancing. However, if one of the servers becomes unavailable, users will be facing error messages generated by HAProxy. This is often undesired: we want HAProxy to check the status of the backend servers and direct traffic only to servers that are available. HAProxy should not forward clients to backend servers that are not responsive.&lt;/p&gt;
&lt;p&gt;This desired behaviour requires a few extra options within the 'backend' section. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;backend servers
    balance roundrobin
    option httpchk
    server ws01 1.1.1.1:80 check inter 4000
    server ws02 1.1.1.2:80 check inter 4000
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This configuration makes HAProxy check both backend webservers every 4000 ms (4 seconds) for availability. By default, HAProxy only tests whether it's possible to make a TCP connection with the webserver. Of course, this will not always tell you whether a webserver is properly operational. This is why 'option httpchk' is added to the configuration. HAProxy will then connect to the backend webserver and issue an HTTP OPTIONS request, which is a better gauge of whether the web server service is active. With additional options you can make HAProxy request specific URIs. &lt;/p&gt;
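&lt;p&gt;For example, to check a specific URI instead of the default OPTIONS request (the /status path is a hypothetical endpoint; use one your application actually serves):&lt;/p&gt;

```
backend servers
    balance roundrobin
    # Health-check a specific URI; /status is a hypothetical example.
    option httpchk GET /status HTTP/1.0
    server ws01 1.1.1.1:80 check inter 4000
    server ws02 1.1.1.2:80 check inter 4000
```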
&lt;h3&gt;Additional configuration options&lt;/h3&gt;
&lt;h2&gt;Logging&lt;/h2&gt;
&lt;p&gt;HAProxy supports logging to Syslog. You can configure it to log to the local syslog daemon, or to a centralised log server. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;global
    log 127.0.0.1 local0 debug
    log-tag haproxy
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;All log messages are prefixed with 'haproxy'. They are sent to localhost and the verbosity is 'debug'. &lt;/p&gt;
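&lt;p&gt;Note that the local syslog daemon must actually listen on UDP port 514 for this to work. With rsyslog, a sketch could look like this (legacy rsyslog syntax; the file paths are just examples):&lt;/p&gt;

```
# /etc/rsyslog.d/haproxy.conf (example path)
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
# Write everything HAProxy sends to facility local0 to its own file.
local0.* /var/log/haproxy.log
```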
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;defaults
    log global

frontend http-in
    log global
    option httplog clf
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;'option httplog clf' makes HAProxy log in a format similar to Apache's common log format. A tool like AWStats can then easily parse the log and generate some statistics. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;backend servers
    log global
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The 'backend' section will only log messages related to the availability of backend servers. Actual request logging is performed through the 'frontend' section.&lt;/p&gt;
&lt;h2&gt;Prioritising backend servers&lt;/h2&gt;
&lt;p&gt;Some backend servers may have more performance and bandwidth available than others. Using the 'weight' parameter, you can make sure that certain servers get more traffic than others. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;backend servers
    balance roundrobin
    option httpchk
    server ws01 1.1.1.1:80 check inter 4000 weight 10
    server ws02 1.1.1.2:80 check inter 4000 weight 20
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;In this example, webserver ws02 will receive twice as many requests as webserver ws01, but the load will still be balanced across both webservers.&lt;/p&gt;
&lt;h2&gt;Enabling statistics&lt;/h2&gt;
&lt;p&gt;HAProxy has a built-in web page that shows performance metrics and the status of backend hosts. This page is not enabled by default. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;defaults
    stats enable
    stats auth username:password
    stats uri /mystatspage
    stats refresh 5s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Please note that with this configuration, the statistics page may be accessible from the internet. As the page may provide some information about your environment that could be of benefit to attackers, it's wise to configure strong passwords and to configure a uri that is not easy to predict/guess. Beware that the password is transmitted in clear-text!&lt;/p&gt;
&lt;p&gt;For security reasons I would recommend making the statistics page accessible only from within your own network and not directly from the internet in any way. &lt;/p&gt;
&lt;p&gt;In this next scenario I assume that the load balancer has two network interfaces and is connected to both the internet and an internal 'backend' network that uses IP-addresses in the 10.x.x.x range.&lt;/p&gt;
&lt;p&gt;For security reasons, I would bind the statistics web page to the 'backend' interface, so it will never be accessible through the internet.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;listen HAProxy-stats 10.0.10.10:81
    stats enable
    stats auth user:pass
    stats uri /stats
    stats refresh 5s
    stats show-legends
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Final words&lt;/h3&gt;
&lt;p&gt;This basic tutorial should leave you with an up-and-running HAProxy. There are some topics I did not discuss, like the handling of SSL traffic. HAProxy 1.4 does not support SSL, but version 1.5 will have native SSL support. In the meantime, you will need to use Nginx or 'stud' for SSL offloading. &lt;/p&gt;</content><category term="Networking"></category><category term="load-balancing"></category><category term="HAProxy"></category><category term="high-availability"></category></entry><entry><title>Redhat explains why chroot is not a security feature</title><link href="https://louwrentius.com/redhat-explains-why-chroot-is-not-a-security-feature.html" rel="alternate"></link><published>2013-08-07T13:00:00+02:00</published><updated>2013-08-07T13:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2013-08-07:/redhat-explains-why-chroot-is-not-a-security-feature.html</id><summary type="html">&lt;p&gt;I came across this Redhat security &lt;a href="http://securityblog.redhat.com/2013/03/27/is-chroot-a-security-feature/" title="Redhat blog post"&gt;blog post&lt;/a&gt; that explains why the chroot command has its uses, but it isn't magic security pixie dust. Running an application from within a chrooted jail or just on a well-configured system would result in the same level of security.&lt;/p&gt;
&lt;p&gt;Josh Bressers:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Putting a …&lt;/p&gt;&lt;/blockquote&gt;</summary><content type="html">&lt;p&gt;I came across this Redhat security &lt;a href="http://securityblog.redhat.com/2013/03/27/is-chroot-a-security-feature/" title="Redhat blog post"&gt;blog post&lt;/a&gt; that explains why the chroot command has its uses, but it isn't magic security pixie dust. Running an application from within a chrooted jail or just on a well-configured system would result in the same level of security.&lt;/p&gt;
&lt;p&gt;Josh Bressers:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Putting a regular user in a chroot() will prevent them from having access to the rest of the system. This means using a chroot is not less secure, but it is not more secure either. If you have proper permissions configured on your system, you are no safer inside a chroot than relying on system permissions to keep a user in check. Of course you can make the argument that everyone makes mistakes, so running inside a chroot is safer than running outside of one where something is going to be misconfigured. This argument is possibly true, but note that setting up a chroot can be far more complex than configuring a system. Configuration mistakes could lead to the chroot environment being less secure than non-chroot environments.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;In the past I've tried to set up a chroot for an application and it was a pain. If you want to do it well, it will take quite some effort and every application has its own requirements. But why spend all this effort? &lt;/p&gt;
&lt;p&gt;Josh continues:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;it may not be possible to break out of the chroot, but the attacker can still use system resources, such as for sending spam, gaining local network access, joining the system to a botnet, and so on.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;A chroot jail hides the rest of the 'real' file system. But the file system is just one part of the security equation: an attacker that compromised the chrooted application can still execute arbitrary code. Not as the root user, fair enough, but does it really hinder the attacker? The attacker already gained a stepping stone to pivot into the rest of the network&lt;sup id="fnref:1"&gt;&lt;a class="footnote-ref" href="#fn:1"&gt;1&lt;/a&gt;&lt;/sup&gt;. As a non-privileged user, the attacker can try to exploit local kernel vulnerabilities to gain root access or stage attacks through the network on other hosts. &lt;/p&gt;
&lt;p&gt;If you run some kind of forum or bulletin board, it is probably more likely that this software will be compromised than the web server itself. And the result is often the same: arbitrary code execution with the privileges of the web server software. So the attacker controls the application and thus all its content, including email addresses and password hashes. &lt;/p&gt;
&lt;p&gt;A chrooted jail does not provide any additional security in this scenario. It may be a bit more difficult to access the rest of the file system, but if the attacker has access as an unprivileged user and file system permissions are set properly, is there a benefit?&lt;/p&gt;
&lt;p&gt;I believe it is more wise to invest your time configuring proper file system privileges and propagate them through &lt;a href="https://puppetlabs.com/"&gt;puppet&lt;/a&gt;, &lt;a href="http://www.opscode.com/chef/"&gt;chef&lt;/a&gt; or &lt;a href="http://www.ansibleworks.com/"&gt;ansible&lt;/a&gt;. And run some scripts to audit/validate file system privileges.&lt;/p&gt;
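&lt;p&gt;Such an audit can start very simple. For example, this sketch (my own illustration, not from the Redhat post) lists world-writable files and directories under a given path:&lt;/p&gt;

```shell
# Sketch: report world-writable files and directories under a path,
# staying on one file system and skipping symbolic links.
audit_world_writable() {
    find "$1" -xdev ! -type l -perm -0002 -print
}

# Example usage (the path is illustrative):
# audit_world_writable /var/www
```

Anything this prints deserves a second look, because a compromised unprivileged process can write there too.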
&lt;p&gt;&lt;em&gt;Update&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;If applications support chroot, it might still be wise to enable it. It's often very easy to configure and it will probably delay an attacker. &lt;/p&gt;
&lt;div class="footnote"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:1"&gt;
&lt;p&gt;If you implemented network segmentation properly and have a sane firewall, the impact could be limited.&amp;#160;&lt;a class="footnote-backref" href="#fnref:1" title="Jump back to footnote 1 in the text"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;</content><category term="Security"></category><category term="Security"></category><category term="Chroot"></category></entry><entry><title>overview of open-source load balancers</title><link href="https://louwrentius.com/overview-of-open-source-load-balancers.html" rel="alternate"></link><published>2013-08-07T12:00:00+02:00</published><updated>2013-08-07T12:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2013-08-07:/overview-of-open-source-load-balancers.html</id><summary type="html">&lt;p&gt;I was looking at open-source load balancing software and it seems that there isn't a nice overview except from &lt;a href="http://www.inlab.de/articles/free-and-open-source-load-balancing-software-and-projects.html"&gt;this website&lt;/a&gt;, although many of the listed projects seem dead. &lt;/p&gt;
&lt;p&gt;I've made a selection of products that seem to be relevant. The biggest problem with open-source software is that projects are …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I was looking at open-source load balancing software and it seems that there isn't a nice overview except from &lt;a href="http://www.inlab.de/articles/free-and-open-source-load-balancing-software-and-projects.html"&gt;this website&lt;/a&gt;, although many of the listed projects seem dead. &lt;/p&gt;
&lt;p&gt;I've made a selection of products that seem to be relevant. The biggest problem with open-source software is that projects are abandoned or unmaintained.
So I created this table and added a column 'last product update' which gives you a feel for how active the project is.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Product&lt;/th&gt;
&lt;th&gt;Last product update&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="http://nginx.org/"&gt;ngnix&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;2013 July&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="http://www.lighttpd.net"&gt;Lighttpd&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;November 2012&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="http://haproxy.1wt.eu/"&gt;HAproxy&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;2013 June&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="http://www.apsis.ch/pound/"&gt;Pound&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;2011 December&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="http://www.varnish-cache.org/"&gt;Varnish&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;2013 June&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="http://www.zenloadbalancer.com"&gt;Zen Load Balancer&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;2013 February&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="http://httpd.apache.org/docs/2.2/mod/mod_proxy_balancer.html"&gt;Apache&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;2013 July&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="http://www.linuxvirtualserver.org/software/ktcpvs/ktcpvs.html"&gt;Linux Virtual Server&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Unmaintained?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="http://sourceforge.net/projects/xlb/"&gt;XLB HTTP Load Balancer&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;2009 February&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="http://sourceforge.net/apps/mediawiki/octopuslb/index.php?title=Main_Page"&gt;Octopus Load Balancer&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;2011 November&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="http://wiki.squid-cache.org/SquidFaq/ReverseProxy"&gt;Squid&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;2013 July&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;em&gt;Date of measurement: August 2013&lt;/em&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;I currently don't have hands-on experience with these products. Some of those products are briefly discussed at &lt;a href="http://erikwebb.net/blog/open-source-software-load-balancers"&gt;this blog&lt;/a&gt; - worth a visit.&lt;/p&gt;
&lt;p&gt;There are many more products but most seem to be abandoned years ago. If you feel there are more products that are noteworthy but not in this list, feel free to contact me or comment about it.&lt;/p&gt;
&lt;p&gt;It seems that the top-3 web servers Nginx, Apache and Lighttpd all have support for load balancing. It depends on your needs, time and knowledge whether you want to invest in other products or stick with the web server software you know.&lt;/p&gt;
&lt;p&gt;At this &lt;a href="http://www.gossamer-threads.com/lists/nanog/users/140879"&gt;location&lt;/a&gt; some people are talking about the pros and cons of commercial off-the-shelf products versus home-grown open-source solutions. &lt;/p&gt;</content><category term="Networking"></category><category term="load-balancing"></category></entry><entry><title>I switched my blog from Blogofile to Pelican</title><link href="https://louwrentius.com/i-switched-my-blog-from-blogofile-to-pelican.html" rel="alternate"></link><published>2013-08-06T03:00:00+02:00</published><updated>2013-08-06T03:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2013-08-06:/i-switched-my-blog-from-blogofile-to-pelican.html</id><summary type="html">&lt;p&gt;This blog is a static website, which makes it fast, simple and secure. It was generated by &lt;a href="http://www.blogofile.com/"&gt;Blogofile&lt;/a&gt; but I switched to &lt;a href="http://blog.getpelican.com/"&gt;Pelican&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Blogofile has seen almost no updates over the years and I consider the project dead. Realising that blogofile is dead, I decided to look around for different …&lt;/p&gt;</summary><content type="html">&lt;p&gt;This blog is a static website, which makes it fast, simple and secure. It was generated by &lt;a href="http://www.blogofile.com/"&gt;Blogofile&lt;/a&gt; but I switched to &lt;a href="http://blog.getpelican.com/"&gt;Pelican&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Blogofile has seen almost no updates over the years and I consider the project dead. Realising that blogofile is dead, I decided to look around for different open source static blog generators. &lt;/p&gt;
&lt;p&gt;There are many static blog generators besides Pelican, but Pelican is well-documented, based on Python, very actively maintained (good track record) and supports all the features I wanted, such as Atom/RSS, Disqus and Google Analytics support. &lt;/p&gt;
&lt;p&gt;My blog posts are written using Markdown. This makes it very easy to migrate away from Blogofile to Pelican, as Pelican also supports Markdown. Blogofile uses a different header format not recognised by Pelican, so you have to search and replace some key words in all your files before Pelican can actually generate your new website.&lt;/p&gt;
&lt;p&gt;I wrote this horrible bash shell for-loop to process all my blog posts:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;    :::bash
    for x in *.md
    do
        TITLE=`grep -i &amp;quot;title:&amp;quot; &amp;quot;$x&amp;quot;`
        TITLEFIXED=`echo $TITLE | sed s/\&amp;quot;//g`
        DATE=`grep -i &amp;quot;date:&amp;quot; &amp;quot;$x&amp;quot;`
        DATEFIXED=`echo $DATE | sed &amp;quot;s/\//-/g&amp;quot; | cut -d &amp;quot;:&amp;quot; -f 1,2,3`
        CATEGORY=`grep -i &amp;quot;categories:&amp;quot; &amp;quot;$x&amp;quot;`
        CATEGORYFIXED=`echo $CATEGORY | sed s/\&amp;quot;//g | sed s/categories/category/g | cut -d &amp;quot;,&amp;quot; -f 1 | /usr/local/bin/sed -e &amp;quot;s/\b\(.\)/\u\1/g&amp;quot;`
        echo &amp;quot;$TITLEFIXED&amp;quot; &amp;gt; tmp.txt
        echo &amp;quot;$CATEGORYFIXED&amp;quot; &amp;gt;&amp;gt; tmp.txt
        echo &amp;quot;$DATEFIXED&amp;quot; &amp;gt;&amp;gt; tmp.txt
        grep -v &amp;quot;title:&amp;quot; &amp;quot;$x&amp;quot; | grep -v -e &amp;#39;---&amp;#39; | grep -v -i &amp;quot;date:&amp;quot; | grep -v -i &amp;quot;categories:&amp;quot; &amp;gt;&amp;gt; tmp.txt
        mv tmp.txt &amp;quot;$x&amp;quot;
    done
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Notice how Pelican's built-in syntax highlighting applies nice colors to this horrible code. Regarding this horrible code: I had to use GNU sed, as the Mac OS X sed does not support the regular expression I used. &lt;/p&gt;
&lt;p&gt;To enable comments for my blog posts I always used &lt;a href="http://disqus.com/"&gt;Disqus&lt;/a&gt; with Blogofile. Pelican generates web pages in a different way than Blogofile, so all old pages need to be redirected to the new location. I used the redirect functionality of &lt;a href="http://redmine.lighttpd.net/projects/1/wiki/docs_modredirect"&gt;Lighttpd&lt;/a&gt; to send visitors of the old URLs to the new ones. &lt;/p&gt;
&lt;p&gt;The cool thing is that Disqus has a tool called "Redirect Crawler". If you have configured 301 "permanent redirects" for all pages and run this tool, Disqus will automatically update all existing links to the new locations, so your comments are migrated to the new web pages. &lt;/p&gt;
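&lt;p&gt;A permanent redirect in Lighttpd can be sketched like this (the URL patterns are hypothetical examples, not my actual Blogofile-to-Pelican mapping):&lt;/p&gt;

```
# Requires mod_redirect; lighttpd issues a 301 by default.
server.modules += ( "mod_redirect" )
url.redirect = (
    "^/2013/08/06/some-old-post/?$" => "https://louwrentius.com/some-old-post.html"
)
```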
&lt;p&gt;Furthermore, I've implemented a Pelican plugin called &lt;a href="https://github.com/jrarseneau/pelican-titlecase"&gt;titlecase&lt;/a&gt; which capitalizes the first letter of words in the title of your article. It's just that I think it looks better.&lt;/p&gt;
&lt;p&gt;I think I'm really happy with Pelican.&lt;/p&gt;</content><category term="Uncategorized"></category></entry><entry><title>Using iSCSI with time machine and Super Duper</title><link href="https://louwrentius.com/using-iscsi-with-time-machine-and-super-duper.html" rel="alternate"></link><published>2013-07-21T16:00:00+02:00</published><updated>2013-07-21T16:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2013-07-21:/using-iscsi-with-time-machine-and-super-duper.html</id><summary type="html">&lt;p&gt;In the past, as a Mac user, I've used separate external drives for Time Machine backups and Super Duper clones but I'm not happy with that. External hard drives make noise and create clutter.&lt;/p&gt;
&lt;p&gt;I'd like to move away all my storage from my living room (or home office) and …&lt;/p&gt;</summary><content type="html">&lt;p&gt;In the past, as a Mac user, I've used separate external drives for Time Machine backups and Super Duper clones but I'm not happy with that. External hard drives make noise and create clutter.&lt;/p&gt;
&lt;p&gt;I'd like to move away all my storage from my living room (or home office) and put it in another room or even closet. &lt;/p&gt;
&lt;p&gt;A NAS may help with that but a NAS does not solve all problems. The main problem being the reliability of network-based Time Machine backups. Those NAS devices pretend to be Time Capsules, but there's always the risk that Apple breaks compatibility with a future update. &lt;/p&gt;
&lt;p&gt;&lt;img alt="qnap nas" src="https://louwrentius.com/static/images/qnapnas01.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;From my experience, Time Machine backups are only 100% reliable with local attached storage - like external hard drives. &lt;/p&gt;
&lt;p&gt;Now there is a cool technology called iSCSI. It's basically a storage protocol tunneled through your home LAN network instead of a USB / Firewire or Thunderbolt cable. Most NAS devices support iSCSI and allow you to carve out some local NAS storage and present it to your computer through the network as if it was just local storage. Since iSCSI uses your Gigabit network as a transport, you can achieve transfer speeds of around ~110 MB/s easily, which should suit most needs*.&lt;/p&gt;
&lt;p&gt;This is very cool, because you can export entire hard drives through the network to your computer. Your computer does not see the difference between an external USB hard drive and a hard drive exported through your NAS to your computer. iSCSI is totally transparent from the perspective of the operating system.&lt;/p&gt;
&lt;p&gt;This trick allows you to create &lt;em&gt;bootable&lt;/em&gt; Super Duper clones of your boot drive through the network. I would just hook up an external USB drive to my NAS and export it through iSCSI.&lt;/p&gt;
&lt;p&gt;In case of an emergency - when your boot drive dies - you can boot from this external hard drive. Just disconnect it from your NAS and hook it up to your Mac. &lt;/p&gt;
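&lt;p&gt;As an aside, for readers who run a Linux-based DIY NAS rather than an off-the-shelf unit: the same kind of export can be sketched with targetcli. This is purely an illustrative dry run (the commands are echoed rather than executed, and /dev/sdb and the IQN are made-up names), not a full tutorial:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# Dry run: print the targetcli commands that would export /dev/sdb.
# Remove the echo on a real system, and add ACLs for your initiator.
echo targetcli /backstores/block create name=usbdisk dev=/dev/sdb
echo targetcli /iscsi create iqn.2013-07.com.example:usbdisk
echo targetcli /iscsi/iqn.2013-07.com.example:usbdisk/tpg1/luns create /backstores/block/usbdisk
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;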
&lt;p&gt;Because hard drives attached through iSCSI are seen as normal storage, you can also encrypt them with Apple's built-in whole-drive (or whole-partition) encryption. &lt;/p&gt;
&lt;p&gt;Now there is one caveat. Mac OS X does not natively support iSCSI; it has no native iSCSI initiator (client). In contrast, Windows 7 does have a very good iSCSI initiator. I think it's a shame, but Mac users must buy an iSCSI initiator from either:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href="http://www.studionetworksolutions.com/globalsan-iscsi-initiator/"&gt;GlobalSAN for $89&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://www.attotech.com/products/product.php?scat=17&amp;amp;sku=INIT-MAC0-001"&gt;Atto for $195&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I've only used the GlobalSAN iSCSI initiator and it seems to work fine. I 
believe that $89 is well worth the money: all your storage tucked away from your home office or living room.&lt;/p&gt;
&lt;p&gt;Another caveat is that iSCSI requires reliable networking; otherwise there is a risk of data corruption. So I would not advise using iSCSI over a wireless network connection, although it is possible.&lt;/p&gt;
&lt;p&gt;For the most popular NAS vendors, I've added some tutorials on how to setup iSCSI.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href="http://www.synology.nl/support/tutorials_show.php?q_id=468"&gt;Synology&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://www.qnap.com/index.php?lang=en&amp;amp;sn=2698"&gt;QNAP&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://www.thecus.com/download/howtoguide/HowtoCreateaniSCSITargetonThecusNAS.pdf"&gt;Thecus&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;P.S.
The GlobalSAN iSCSI initiator does support sleep and hibernate, in contrast to what some tutorials may tell you.&lt;/p&gt;</content><category term="Storage"></category></entry><entry><title>Don't use cloud services if you care about secrecy of your data</title><link href="https://louwrentius.com/dont-use-cloud-services-if-you-care-about-secrecy-of-your-data.html" rel="alternate"></link><published>2013-06-30T16:00:00+02:00</published><updated>2013-06-30T16:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2013-06-30:/dont-use-cloud-services-if-you-care-about-secrecy-of-your-data.html</id><summary type="html">&lt;p&gt;When you use cloud services, you are storing your data on &lt;a href="http://www.loper-os.org/?p=44"&gt;other people's hard drives&lt;/a&gt;.
The moment you put your data within a cloud service, that data is no longer under your control. You don't know who will access that data. Secrecy is lost.&lt;/p&gt;
&lt;p&gt;Instead of using services like Gmail …&lt;/p&gt;</summary><content type="html">&lt;p&gt;When you use cloud services, you are storing your data on &lt;a href="http://www.loper-os.org/?p=44"&gt;other people's hard drives&lt;/a&gt;.
The moment you put your data within a cloud service, that data is no longer under your control. You don't know who will access that data. Secrecy is lost.&lt;/p&gt;
&lt;p&gt;Instead of using services like Gmail you may opt to setup some virtual private server and run your own email server, but that doesn't change a thing. The cloud provider controls the hardware, they have access to every bit you store on their platform. &lt;/p&gt;
&lt;p&gt;If you encrypt the hard drive of your VPS you need to enter the encryption password every time you reboot your VPS. And how can you remotely type in the password? Through the VPS console: a piece of software written by and under the control of your cloud provider. It can snoop on every character you enter. &lt;/p&gt;
&lt;p&gt;This may all sound far-fetched but it's about the principle of how things work. If you store unencrypted data on hardware that is not owned by you and under your physical control, that data cannot be trusted to stay secret.&lt;/p&gt;
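&lt;p&gt;If you must hand files to a third party anyway, one partial mitigation is to encrypt them locally before they ever leave your machine, so the provider only stores ciphertext. A minimal sketch with OpenSSL (this assumes OpenSSL 1.1.1 or later for the -pbkdf2 option; the passphrase handling is simplified for illustration):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;cd $(mktemp -d)
echo secret notes &amp;gt; notes.txt
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:correct-horse-battery -in notes.txt -out notes.txt.enc
# notes.txt.enc is what you would upload; the plaintext never leaves you
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:correct-horse-battery -in notes.txt.enc -out restored.txt
if cmp -s notes.txt restored.txt; then echo roundtrip OK; fi
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Note that this only helps for data at rest; the moment a service has to process your data, it has to see the plaintext.&lt;/p&gt;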
&lt;p&gt;If you care about the secrecy of your data, you should &lt;em&gt;never&lt;/em&gt; store it with a cloud provider or any other third party. &lt;/p&gt;
&lt;p&gt;I believe that the price you have to pay for any decent secrecy of your data is to run your own physical server. This is way more expensive in terms of time and money than using a cloud service, so it's up to you if it's worth it. &lt;/p&gt;
&lt;p&gt;Although your own server will probably prevent your data from being swept up in dragnet government surveillance, it will still be difficult if not impossible to protect you from a targeted investigation by a government agency. &lt;/p&gt;
&lt;p&gt;A government agency can obtain physical access to your server, and physical access is often the deathblow to any secrecy / security. Even if you implement encryption in the right manner, you are only decreasing their chances of accessing your data, not eliminating them.&lt;/p&gt;
&lt;p&gt;And in the end, a &lt;a href="https://xkcd.com/538/"&gt;$5 wrench&lt;/a&gt; will probably do wonders for them. It seems that it even does &lt;a href="https://defuse.ca/truecrypt-plausible-deniability-useless-by-game-theory.htm"&gt;wonders against encrypted hidden volumes&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;But there may still be a small benefit. If a government agency requires a cloud service provider to hand over your data, they can do so without your knowledge. A gag order will prohibit the cloud provider from informing you. However, if the servers are your own and are located within a building you own, either privately or as a company, you are at least aware of what's happening. That may or may not be relevant to you, that's up to you to decide.&lt;/p&gt;</content><category term="Security"></category></entry><entry><title>Why I believe the new Mac Pro won't be a great machine for gaming</title><link href="https://louwrentius.com/why-i-believe-the-new-mac-pro-wont-be-a-great-machine-for-gaming.html" rel="alternate"></link><published>2013-06-23T00:00:00+02:00</published><updated>2013-06-23T00:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2013-06-23:/why-i-believe-the-new-mac-pro-wont-be-a-great-machine-for-gaming.html</id><summary type="html">&lt;p&gt;In &lt;a href="http://atp.fm/episodes/18-aluminum-colored-aluminum"&gt;Accidental Tech Podcast episode 18&lt;/a&gt; (love the show), I learned that John Siracusa was thinking about buying a new Mac Pro for gaming.&lt;/p&gt;
&lt;p&gt;I believe that gaming on the new Mac Pro will be a mediocre experience.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Driver support:&lt;/em&gt; as John mentioned himself, the video cards are 'professional GPUs' …&lt;/p&gt;</summary><content type="html">&lt;p&gt;In &lt;a href="http://atp.fm/episodes/18-aluminum-colored-aluminum"&gt;Accidental Tech Podcast episode 18&lt;/a&gt; (love the show), I learned that John Siracusa was thinking about buying a new Mac Pro for gaming.&lt;/p&gt;
&lt;p&gt;I believe that gaming on the new Mac Pro will be a mediocre experience.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Driver support:&lt;/em&gt; as John mentioned himself, the video cards are 'professional GPUs' used in workstations for computing, CAD etc. These cards and especially
their drivers under Windows are not geared towards gaming performance.&lt;/p&gt;
&lt;p&gt;The performance and quality of Mac OS X drivers may have improved dramatically over the past few years, but if you like to play games exclusively under Windows, you will probably be disappointed when you switch to Boot Camp.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Crossfire support:&lt;/em&gt; you have these two ridiculously fast GPUs, and for years on the PC platform you have been able to stack such cards together to achieve insane performance (SLI / Crossfire).&lt;/p&gt;
&lt;p&gt;Mac OS X does not seem to support Crossfire (or SLI). If you can't benefit from Crossfire, you're paying a lot of money for a machine with one idle but very expensive video card. That sounds like a ridiculous waste of good money.&lt;/p&gt;
&lt;p&gt;And even assuming that the hardware supports Crossfire when you run Windows, you may then hit the driver issues associated with pro cards.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Upgrading:&lt;/em&gt; upgrading the GPU can be cost-efficient, as most games are GPU-bound rather than CPU-bound, but that's not an option for the Mac Pro. Even the new Thunderbolt interface does not have sufficient bandwidth to hook up one or more external high-performance video cards, aside from the fact that this would be insanely expensive.&lt;/p&gt;
&lt;p&gt;If you really need the horsepower of a Mac Pro for purposes other than gaming, sure, it's the fastest Mac you can get. Otherwise, I believe there's a better deal to be had.&lt;/p&gt;
&lt;p&gt;We don't know what the new Mac Pro will cost in a configuration suitable for gaming, but it will be 'a lot'. I believe that for the price of one new Mac Pro, you will also be able to buy:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;a Mac Mini with reasonable specs (it's easy to replace memory and storage); &lt;/li&gt;
&lt;li&gt;a high-quality 27" display @ 2560x1440; &lt;/li&gt;
&lt;li&gt;a ridiculously fast PC that will outperform the new Mac Pro when it comes to gaming.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I know that some Mac users don't like the idea of having a PC at home, but if you are into PC gaming, there's no other choice in my opinion if you want a good
gaming experience.&lt;/p&gt;
&lt;p&gt;I had high hopes for my 27" iMac (2011), but its gaming performance in 2013 is mediocre at best. And I can't use it as an external display with a PC, only with a Mac.&lt;/p&gt;
&lt;p&gt;So I sold my 27" iMac and used the money to buy a separate 27" display and Mac Mini. I also ordered a 'ridiculously fast PC' which I hope will allow me to play all modern games on max quality settings for now and the upcoming year. And if required, I can swap out the dual GPUs and replace them with something better over time, if I need to.&lt;/p&gt;</content><category term="Uncategorized"></category></entry><entry><title>Improving iSCSI Native Multi Pathing Round Robin performance</title><link href="https://louwrentius.com/improving-iscsi-native-multi-pathing-round-robin-performance.html" rel="alternate"></link><published>2013-05-27T00:00:00+02:00</published><updated>2013-05-27T00:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2013-05-27:/improving-iscsi-native-multi-pathing-round-robin-performance.html</id><summary type="html">&lt;p&gt;10 Gb ethernet is still quite expensive. You not only need to buy appropriate NICS, but you must also upgrade your network hardware as well. You may even need to replace existing fiber optic cabling if it's not rated for 10 Gbit. &lt;/p&gt;
&lt;p&gt;So I decided to still just go for …&lt;/p&gt;</summary><content type="html">&lt;p&gt;10 Gb ethernet is still quite expensive. You not only need to buy appropriate NICs, but you must also upgrade your network hardware. You may even need to replace existing fiber optic cabling if it's not rated for 10 Gbit. &lt;/p&gt;
&lt;p&gt;So I decided to still just go for plain old 1 Gbit iSCSI based on copper for our backup SAN. After some research I went for the HP MSA P2000 G3 with dual 1 Gbit iSCSI controllers.&lt;/p&gt;
&lt;p&gt;Each controller has 4 x 1 Gbit ports, so the box has a total of 8 Gigabit ports. 
This is ideal for redundancy, performance and cost. This relatively cheap SAN does support active/active mode, so both controllers can share the I/O load. &lt;/p&gt;
&lt;p&gt;The problem with storage is that a single 1 Gbit channel is just not going to cut it when you need to perform bandwidth intensive tasks, such as moving VMs between datastores (within VMware).&lt;/p&gt;
&lt;p&gt;Fortunately, iSCSI Multi Pathing allows you to do basically a RAID 0 over multiple network cards, combining their performance. So four 1 Gbit NICs can provide you with 4 Gbit of actual storage throughput. &lt;/p&gt;
&lt;p&gt;The trick is not only to configure iSCSI Multi Pathing using regular tutorials, but also to enable the Round Robin setting on each data store or each RAW device mapping. &lt;/p&gt;
&lt;p&gt;So I did all this and still got less than 1 Gbit/s of performance. Fortunately, there is just one little trick needed to get the actual performance you might expect.&lt;/p&gt;
&lt;p&gt;I found this at multiple locations but the explanation on &lt;a href="http://jpaul.me/?p=2492"&gt;Justin's IT Blog&lt;/a&gt;
is best. &lt;/p&gt;
&lt;p&gt;By default, VMware issues 1000 I/O requests to a path before switching (Round Robin) to the next one. This really hampers performance. You need to set this value to 1. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;esxcli storage nmp psp roundrobin deviceconfig set -d $DEV --iops 1 --type iops
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This configuration tweak is &lt;a href="http://h20195.www2.hp.com/V2/GetPDF.aspx/4AA1-2185ENW.pdf"&gt;recommended by HP&lt;/a&gt;, see page 28 of the linked PDF. &lt;/p&gt;
&lt;p&gt;Once I configured all iSCSI paths to this setting, I got 350 MB/s of sequential write performance from a single VM to the datastore. That's decent enough for me.&lt;/p&gt;
&lt;p&gt;How do you do this? It's a simple one-liner that sets the iops value to 1, but I'm so lazy, I don't want to copy/paste device names and run the command by hand each time. &lt;/p&gt;
&lt;p&gt;I used a simple CLI script (VMware 5) to configure this setting for all devices.
SSH to the host and then run this script: &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;for x in `esxcli storage nmp device list | grep ^naa`
do
    echo &amp;quot;Configuring Round Robin iops value for device $x&amp;quot;
    esxcli storage nmp psp roundrobin deviceconfig set -d $x --iops 1 --type iops
done
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This is not the exact script I used (I still have to verify this code), but basically it just configures this value for all storage devices. Devices that don't support this setting will raise an error message that can be ignored (if the VMware host also has some local SAS or SATA storage, this is expected). &lt;/p&gt;
&lt;p&gt;The next step is to check if this setting is permanent and survives a host reboot.&lt;/p&gt;
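&lt;p&gt;To read the active policy back (for example after a reboot), esxcli has a matching get subcommand. Sketched here as an echoed dry run with a made-up device ID, since the real command only runs on an ESXi host:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# Dry run: print the verification command for one (made-up) device ID.
# On an ESXi host, drop the echo to actually query the setting.
DEV=naa.600c0ff000000000000000000000001
echo esxcli storage nmp psp roundrobin deviceconfig get -d $DEV
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;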
&lt;p&gt;Anyway, I verified the performance using a Linux VM and just writing a simple test file:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;dd if=/dev/zero of=/storage/test.bin bs=1M count=30000
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
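
&lt;p&gt;Keep in mind that dd's reported throughput also includes data still sitting in the guest's page cache. Adding conv=fdatasync makes dd flush to disk before reporting, which gives a more honest number. A small sketch against a temporary file (file size chosen arbitrarily):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# conv=fdatasync forces a flush before dd reports its throughput
TESTFILE=$(mktemp)
dd if=/dev/zero of=$TESTFILE bs=1M count=64 conv=fdatasync
rm -f $TESTFILE
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;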

&lt;p&gt;To see the Multi Pathing + Round Robin in action, run esxtop at the cli and then press N. You will notice that with four network cards, VMware will use all four channels available.&lt;/p&gt;
&lt;p&gt;This is all to say that plain old 1 Gbit iSCSI can still be fast. But I believe that 10 Gbit ethernet probably provides better latency. Whether that's really an issue for your environment is something I can't tell. &lt;/p&gt;
&lt;p&gt;Changing the IOPS parameter to 1 IOPS also seems to improve random I/O performance, according to the table in &lt;a href="http://jpaul.me/?p=2492"&gt;Justin's post&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Still, although 1 Gbit iSCSI is cheap, it may be more difficult to get the performance levels you need. If you have time, but little money, it may be the way to go. However, if time is not on your side and money isn't the biggest problem, I would definitely investigate the price difference of going with Fibre Channel or 10 Gbit iSCSI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Edit 2019&lt;/strong&gt;: I've made a &lt;a href="https://louwrentius.com/fio-plot-creating-nice-charts-from-fio-storage-benchmark-data.html"&gt;new tool&lt;/a&gt; called &lt;a href="https://github.com/louwrentius/fio-plot"&gt;'fio-plot'&lt;/a&gt; to create various graphs. &lt;/p&gt;
&lt;hr&gt;

&lt;p&gt;I use &lt;a href="http://freecode.com/projects/fio"&gt;FIO&lt;/a&gt; to perform storage IO performance benchmarks. FIO does provide a script called "fio_generate_plots" which generates PNG or JPG based charts based on the data generated by FIO. The charts are created with &lt;a href="http://www.gnuplot.info"&gt;GNUplot&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The …&lt;/p&gt;</summary><content type="html">&lt;hr&gt;

&lt;p&gt;&lt;strong&gt;Edit 2019&lt;/strong&gt;: I've made a &lt;a href="https://louwrentius.com/fio-plot-creating-nice-charts-from-fio-storage-benchmark-data.html"&gt;new tool&lt;/a&gt; called &lt;a href="https://github.com/louwrentius/fio-plot"&gt;'fio-plot'&lt;/a&gt; to create various graphs. &lt;/p&gt;
&lt;hr&gt;

&lt;p&gt;I use &lt;a href="http://freecode.com/projects/fio"&gt;FIO&lt;/a&gt; to perform storage IO performance benchmarks. FIO does provide a script called "fio_generate_plots" which generates PNG or JPG based charts based on the data generated by FIO. The charts are created with &lt;a href="http://www.gnuplot.info"&gt;GNUplot&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The "fio_generate_plots" didn't make me very happy as it didn't generate the kind of graphs I wanted. Furthermore, the script just contains some copy/pastes of the same blocks of code, slightly altered for the different benchmark types. I understand that the focus lies on FIO itself not some script to generate some fancy graphs, so don't get me wrong, but the script could be improved.&lt;/p&gt;
&lt;p&gt;I used this script as the basis for a significantly reworked version, putting the code in a function that can be called with different parameters for the different benchmark types. &lt;/p&gt;
&lt;p&gt;The result of this new script is something like this:&lt;/p&gt;
&lt;p&gt;&lt;img alt="benchmark" src="https://louwrentius.com/static/images/fio/Random-4K-write-performance-iops.svg" /&gt;&lt;/p&gt;
&lt;p&gt;You can &lt;a href="https://louwrentius.com/files/fio_generate_plots_reworked.sh"&gt;download this new script here&lt;/a&gt;. This script requires GNUplot 4.4 or higher.&lt;/p&gt;
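&lt;p&gt;For reference, a minimal FIO invocation that produces the kind of log files these plotting scripts consume might look like the sketch below (tiny sizes for illustration; --write_iops_log sets the log file name prefix and --log_avg_msec averages the samples per second). The guard keeps it harmless on machines without FIO installed:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;if command -v fio; then
    fio --name=randwrite --rw=randwrite --bs=4k --size=16M --directory=/tmp --write_iops_log=randwrite --log_avg_msec=1000
    ls randwrite_iops*.log
    rm -f randwrite_iops*.log /tmp/randwrite*
else
    echo fio not installed
fi
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;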
&lt;p&gt;&lt;strong&gt;Update 2013/05/26&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I've submitted the script as a patch to the maintainers of FIO and it has been &lt;a href="http://git.kernel.dk/?p=fio.git;a=commitdiff;h=b35c036c8db9ece002b019f4a462a303ceb130fa;hp=4ac23d27a5e5dea73c4db4a4fcc46a6afe645bd0"&gt;committed&lt;/a&gt; to the source tree. I'm not sure how this will work out but I assume that this script will be part of newer FIO releases. &lt;/p&gt;</content><category term="Storage"></category></entry><entry><title>Linode hacked: the dark side of cloud hosting</title><link href="https://louwrentius.com/linode-hacked-the-dark-side-of-cloud-hosting.html" rel="alternate"></link><published>2013-04-16T20:00:00+02:00</published><updated>2013-04-16T20:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2013-04-16:/linode-hacked-the-dark-side-of-cloud-hosting.html</id><summary type="html">&lt;p&gt;Linode has released an &lt;a href="http://blog.linode.com/2013/04/16/security-incident-update/"&gt;update&lt;/a&gt; about the security incident first reported
on April 12, 2013. &lt;/p&gt;
&lt;p&gt;The Linode Manager is the environment where you control your virtual private servers and where you pay for services. This is the environment that got compromised. &lt;/p&gt;
&lt;p&gt;Linode uses Adobe's ColdFusion as a platform for their …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Linode has released an &lt;a href="http://blog.linode.com/2013/04/16/security-incident-update/"&gt;update&lt;/a&gt; about the security incident first reported
on April 12, 2013. &lt;/p&gt;
&lt;p&gt;The Linode Manager is the environment where you control your virtual private servers and where you pay for services. This is the environment that got compromised. &lt;/p&gt;
&lt;p&gt;Linode uses Adobe's ColdFusion as a platform for their Linode Manager application. It &lt;a href="http://seclists.org/nmap-dev/2013/q2/3"&gt;seems&lt;/a&gt; that the ColdFusion software was affected by two significant, previously unknown vulnerabilities that allowed attackers to compromise the entire Linode VPS management environment. &lt;/p&gt;
&lt;p&gt;As the attackers had control over the virtual private servers hosted on the platform, they decided to compromise the VPS used by Nmap. Yes, the famous port scanner.&lt;/p&gt;
&lt;p&gt;Fyodor's remark about the incident:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;I guess we&amp;#39;ve seen the dark side of cloud hosting.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;That's the thing. Cloud hosting is just an extra layer, an extra attack surface, that may provide an attacker with the opportunity to compromise your server and thus your data.&lt;/p&gt;
&lt;p&gt;Even the author of Nmap, a person fairly conscious about security and aware of the risk of cloud-hosting, still took the risk to save a few bucks and some time setting something up himself.&lt;/p&gt;
&lt;p&gt;If you are a Linode customer and consider becoming a former customer by fleeing to another cheap cloud VPS provider, are you really sure you are solving your problems? &lt;/p&gt;
&lt;p&gt;When using cloud services, you pay less and you outsource the chores that come with hosting on a dedicated private server. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;You also lose control over security.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Cloud hosting is just storing your data on &lt;a href="http://www.loper-os.org/?p=44"&gt;'Other People's Hard Drives'&lt;/a&gt;. So the security of your stuff depends on those 'other people'. But did you ask those 'other people' for any information about how they intend to address risks like zero-days or other security threats? Or did you just consider their pricing, give them your credit card and get on with your life?&lt;/p&gt;
&lt;p&gt;If you left Linode for another cloud VPS provider, what assures you that they will do better? How do you know that they aren't compromised already right now? At this moment? You feel paranoid already?&lt;/p&gt;
&lt;p&gt;We all want cheap hosting, but are you also willing to pay the price when the cloud platform is compromised?&lt;/p&gt;</content><category term="Security"></category></entry><entry><title>Linode hacked: thoughts about cloud security</title><link href="https://louwrentius.com/linode-hacked-thoughts-about-cloud-security.html" rel="alternate"></link><published>2013-04-16T00:00:00+02:00</published><updated>2013-04-16T00:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2013-04-16:/linode-hacked-thoughts-about-cloud-security.html</id><summary type="html">&lt;p&gt;I bought a Linode VPS for private usage just after the &lt;a href="http://blog.linode.com/2013/04/12/security-notice-linode-manager-password-reset/"&gt;report that Linode had reset all passwords&lt;/a&gt; of existing users regarding the Linode management console. &lt;/p&gt;
&lt;p&gt;Resetting passwords is not something you do when under a simple attack such as a DDOS attack. Such a measure is only taken if …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I bought a Linode VPS for private usage just after the &lt;a href="http://blog.linode.com/2013/04/12/security-notice-linode-manager-password-reset/"&gt;report that Linode had reset all passwords&lt;/a&gt; of existing users regarding the Linode management console. &lt;/p&gt;
&lt;p&gt;Resetting passwords is not something you do when under a simple attack such as a DDOS attack. Such a measure is only taken if you suspect or have proof of a serious security breach. I should have known.&lt;/p&gt;
&lt;p&gt;There are &lt;a href="https://news.ycombinator.com/item?id=5552756"&gt;strong&lt;/a&gt; &lt;a href="http://slashdot.org/firehose.pl?op=view&amp;amp;type=submission&amp;amp;id=2603667"&gt;rumours&lt;/a&gt; that Linode has actually been &lt;a href="http://turtle.dereferenced.org/~nenolod/linode/linode-abridged.txt"&gt;hacked&lt;/a&gt;. Although I signed up for a Linode VPS after the attack, I still checked my credit card for any suspicious withdrawals. &lt;/p&gt;
&lt;p&gt;Linode is, as of this writing, very silent about the topic, which only fuels my suspicion, and everyone else's, that something bad has happened.&lt;/p&gt;
&lt;p&gt;Whatever happened, even if it isn't as bad as it seems, an incident like this should make you evaluate your choices about hosting your apps and data on cloud services. &lt;/p&gt;
&lt;p&gt;I don't care that much about rumours that credit card information may have been compromised. Although that is in itself quite damning, what I do care about is the security of the data stored in the virtual private servers hosted on their platform. &lt;/p&gt;
&lt;p&gt;I like this phrase: &lt;a href="http://www.loper-os.org/?p=44"&gt;"There is no cloud, only Other People's Hard Drives"&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Everybody uses cloud services, so we all put our data in the hands of some other third party and we just hope that they properly secured their environment.&lt;/p&gt;
&lt;p&gt;The cynical truth is that even so, a case can be made that for many companies, data stored in the cloud or on a VPS is a lot safer than within their own company IT environment. But an incident like this may prove otherwise. &lt;/p&gt;
&lt;p&gt;And if you believe that data on a VPS is more secure than within your own IT environment, I believe that you have more pressing problems. The thing is that it doesn't tell you anything about the security of those cloud solutions. It only tells you something about the perceived security of your own IT environment. &lt;/p&gt;
&lt;p&gt;The cloud infrastructure is just another layer between the metal and your services, and it can thus be attacked. It increases the attack surface. It increases the risk of a compromise. The cloud doesn't make your environment more secure, on the contrary.&lt;/p&gt;
&lt;p&gt;So anyway, who performs regular security audits of Linode or (insert your current cloud hosting provider?) and what is the quality of the processes that should assure security at all times?&lt;/p&gt;
&lt;p&gt;Questions. Questions.&lt;/p&gt;
&lt;p&gt;This incident again shows that you should clearly think about what kind of security your company or customer data warrants. Is outsourcing security of your data acceptable?&lt;/p&gt;
&lt;p&gt;Maybe, if security is an important factor, those cheap VPS hosts aren't that cheap after all. You may be better off creating your own private cloud on (rented or owned) dedicated servers and put a little bit more effort in it. &lt;/p&gt;
&lt;p&gt;Building your own environment on your own equipment is more expensive than just a simple VPS, but you are much more in control regarding security. &lt;/p&gt;</content><category term="Security"></category></entry><entry><title>Storage and I/O: reads vs. writes</title><link href="https://louwrentius.com/storage-and-io-reads-vs-writes.html" rel="alternate"></link><published>2013-04-02T00:00:00+02:00</published><updated>2013-04-02T00:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2013-04-02:/storage-and-io-reads-vs-writes.html</id><summary type="html">&lt;p&gt;There is a fundamental difference between a read operation and a write operation. Storage can lie about completing a write operation, but it can never lie about completing a read operation. Therefore read and writes have different characteristics. This is what I've learned.&lt;/p&gt;
&lt;h3&gt;About writes&lt;/h3&gt;
&lt;p&gt;So what does this mean …&lt;/p&gt;</summary><content type="html">&lt;p&gt;There is a fundamental difference between a read operation and a write operation. Storage can lie about completing a write operation, but it can never lie about completing a read operation. Therefore read and writes have different characteristics. This is what I've learned.&lt;/p&gt;
&lt;h3&gt;About writes&lt;/h3&gt;
&lt;p&gt;So what does this mean? Well, if you write data to disk, the I/O subsystem only has to acknowledge that it has written the data to the actual medium. Basically, the application says "please write this data to disk" and the I/O subsystem answers "done, feel free to give me another block of data!". &lt;/p&gt;
&lt;p&gt;But the application cannot be sure that the I/O subsystem actually wrote that data to disk. More likely than not, the I/O subsystem lied.&lt;/p&gt;
&lt;p&gt;Compared to RAM, non-volatile storage like hard drives is slow. Orders of magnitude slower. And the worst-case scenario, which is often also the real-life scenario, is that both read and write patterns are random as perceived by the storage I/O subsystem. &lt;/p&gt;
&lt;p&gt;So you have this mechanical device with rotating platters and a moving arm, governed by Newton's laws of physics, trying to compete with CPUs and memory that are so small that they are affected by quantum mechanical effects. There is no way that device is going to be able to keep up. &lt;/p&gt;
&lt;p&gt;So the I/O subsystem cheats. Hard drives are relatively great at reading and writing blocks of data sequentially; it's the random access patterns that wreak havoc on performance. So the trick is to lie to the application and collect a bunch of writes in a cache, in memory. &lt;/p&gt;
&lt;p&gt;So, meanwhile, the I/O subsystem looks at the data to be written to disk, and reorders the write operations, so that it becomes as 'serialised' as possible. It tries to take into account all the latencies involved in moving the arm, timing that with the rotation of the platter and that kind of stuff.&lt;/p&gt;
&lt;p&gt;A 7200 RPM hard drive can do only about 75 IOPS with random access patterns, but that is the worst of worst-case scenarios. Real-life usage scenarios often allow for some optimisation. &lt;/p&gt;
&lt;p&gt;I used FIO to perform some random-IO performance benchmarks on different hard drive types and RAID configurations. It turned out that read performance was in line with the 75 IOPS figure, but writes were in the thousands of IOPS, which is not a realistic figure. The operating system (Linux) employed heavy caching of the writes, lying to FIO about the actual IOPS being written to disk. &lt;/p&gt;
&lt;p&gt;Thousands of IOPS sounds great, but you can only lie until your write cache is full. There comes a time when you have to actually deliver and write this data to disk. This is where you see large drops in performance, to almost zero IOPS.&lt;/p&gt;
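&lt;p&gt;On Linux you can watch this lie in action: the Dirty counter in /proc/meminfo tracks data the kernel has acknowledged but not yet written out, and sync forces the flush. A small sketch (file size chosen arbitrarily):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;grep Dirty: /proc/meminfo
dd if=/dev/zero of=/tmp/cache-demo.bin bs=1M count=64
grep Dirty: /proc/meminfo   # typically larger: writes parked in cache
sync                        # now the data really has to go to disk
grep Dirty: /proc/meminfo
rm -f /tmp/cache-demo.bin
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;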
&lt;p&gt;Most of the time, this behaviour is beneficial to overall application performance, as long as the application usage pattern consists of short bursts of data that need to be written to disk. With steadier streams of data being written to disk in a random order, caching might hurt application responsiveness: the application might become periodically unresponsive as data is flushed from the cache to disk.&lt;/p&gt;
&lt;p&gt;This write-caching behaviour is often desired, because by reordering and optimising the order of the write requests, the actual overall obtained random I/O write performance is often significantly higher than could be achieved by the disk subsystem itself. &lt;/p&gt;
&lt;p&gt;If the disk subsystem is not just a single disk but a RAID array composed of multiple drives, write caching is often even more important to keep performance acceptable, especially for RAID arrays with parity, such as RAID 5 and RAID 6.&lt;/p&gt;
&lt;p&gt;Write-back caching may increase performance significantly, but it comes at a cost. Because the I/O subsystem lies about data being written to disk, that data may be lost if the system crashes or loses power: there is a real risk of data loss or corruption. Only use write-back caching on equipment backed by battery backup units and a UPS. Even then, there are use cases where it is advisable to leave it disabled to guarantee data consistency. &lt;/p&gt;
&lt;h3&gt;About reads&lt;/h3&gt;
&lt;p&gt;The I/O subsystem can't lie to the application about reads. If the application asks the I/O subsystem "can I have the contents of file X", the I/O subsystem can't just say "well, yes, sure". It actually has to deliver that data. So while any arbitrary write can easily be cached and written to disk later in a more optimised way, reads are harder. There is no easy way out: the I/O subsystem must deliver.&lt;/p&gt;
&lt;p&gt;Where any arbitrary write can be cached, only a limited number of reads can be cached. Cache memory is relatively small compared to the storage of the disk subsystem. The I/O subsystem must be smart about which data needs to be cached.&lt;/p&gt;
&lt;p&gt;More complex storage solutions keep track of 'hot spots' and keep that data cached. As a side note, such caching constructions can now also be found in consumer grade equipment: Apple's fusion drive uses the SSD as a cache and stores the data that is less frequently accessed on the HDD.&lt;/p&gt;
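&lt;p&gt;A minimal sketch of such 'hot spot' caching is a fixed-size least-recently-used (LRU) cache in front of the slow storage: frequently accessed blocks stay cached while cold blocks are evicted. This is only an illustration of the idea, not how any specific product implements it.&lt;/p&gt;

```python
from collections import OrderedDict

class LRUCache:
    """Fixed-size read cache; evicts the least recently used block."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(block)       # mark as recently used
        else:
            self.misses += 1                     # would hit the disk here
            self.blocks[block] = True
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # evict the coldest block

cache = LRUCache(capacity=8)
for block in [1, 2, 3, 1, 2, 3, 1, 2, 3, 99, 1, 2, 3]:  # 1-3 are 'hot'
    cache.read(block)
print(cache.hits, cache.misses)  # 9 hits, 4 misses
```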
&lt;p&gt;But in the end, regarding reads, chances are that the requested data is not stored in the cache (a cache miss) and thus the drives must do actual work. Fortunately, that work is not as 'expensive' as writes are for RAID 5 or RAID 6 arrays. &lt;/p&gt;
&lt;p&gt;Furthermore, reads can also be 'grouped' and serialised (by increasing the queue depth) to set up a more sequential read access pattern for the disk subsystem and achieve better performance. But this comes at the cost of latency, and thus responsiveness. That may or may not be a problem, depending on the type of application. &lt;/p&gt;
&lt;h3&gt;Some remarks&lt;/h3&gt;
&lt;p&gt;If possible, it's better to avoid having to access the storage subsystem in the first place. Try throwing RAM at the problem: buy systems with sufficient RAM so that the entire database fits in memory. A few years ago this was unthinkable, but 128 GB of RAM can be had for less than two thousand dollars.&lt;/p&gt;
&lt;p&gt;If RAM isn't an option (the dataset is too large), still put in as much RAM as possible. Also look into whether server-grade Solid State Drives (SSDs) are an option (always at least RAID 1, for redundancy!), although their cost may be an obstacle.&lt;/p&gt;
&lt;p&gt;The gateway of last resort is the old trusted hard drive. If random I/O is really an issue, take a look at 15000 RPM or at least 10000 RPM SAS drives and a good RAID controller with loads of cache memory. In general, more drives or more 'spindles' equals more I/O performance.&lt;/p&gt;
&lt;p&gt;You might encounter a situation where you want to add drives to increase I/O performance, not storage capacity. More importantly: you may choose not to use that extra storage, as using it may decrease performance. If you put more data on a disk, the head must cover larger areas of the disk platter, increasing latency. &lt;/p&gt;
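&lt;p&gt;The effect can be sketched with a bit of geometry. If seek distance is modelled as the gap between two uniformly random positions within the used span of the platter, restricting use to a quarter of the disk cuts the average seek distance to a quarter as well. This is a purely geometric sketch that ignores real-world details such as zoned bit recording.&lt;/p&gt;

```python
# Toy model of 'short-stroking' by under-partitioning a drive.
def avg_seek_distance(used_fraction):
    # Mean distance between two uniform random points on [0, f] is f/3.
    return used_fraction / 3

full = avg_seek_distance(1.0)
short = avg_seek_distance(0.25)  # only the first 25% of the disk is used
print(f"average seek distance drops to {short / full:.0%} of full stroke")
```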
&lt;p&gt;There are use cases where drives are intentionally under-partitioned to (artificially) increase the performance of the drives.&lt;/p&gt;</content><category term="Storage"></category></entry><entry><title>Benchmark results of Random I/O performance of different RAID levels</title><link href="https://louwrentius.com/benchmark-results-of-random-io-performance-of-different-raid-levels.html" rel="alternate"></link><published>2013-01-01T00:00:00+01:00</published><updated>2013-01-01T00:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2013-01-01:/benchmark-results-of-random-io-performance-of-different-raid-levels.html</id><summary type="html">&lt;h3&gt;Introduction&lt;/h3&gt;
&lt;p&gt;I have performed some benchmarks to determine how different RAID levels perform when handling a 100% random workload of 4K requests. This is a worst-case scenario for almost every storage subsystem. Normal day-to-day workloads may not be that harsh in a real-life environment, but worst-case, these tests show what …&lt;/p&gt;</summary><content type="html">&lt;h3&gt;Introduction&lt;/h3&gt;
&lt;p&gt;I have performed some benchmarks to determine how different RAID levels perform when handling a 100% random workload of 4K requests. This is a worst-case scenario for almost every storage subsystem. Normal day-to-day workloads may not be that harsh in a real-life environment, but worst-case, these tests show what kind of performance you might expect when you face such a workload. &lt;/p&gt;
&lt;p&gt;To create a true worst-case scenario, I even disabled write caching for all write-related tests.&lt;/p&gt;
&lt;p&gt;At the moment, I only have access to some consumer-level test hardware. In the future, I'd like to rerun these tests on some 10K RPM SAN storage drives to see how this turns out. &lt;/p&gt;
&lt;h3&gt;RAID levels tested&lt;/h3&gt;
&lt;p&gt;I have tested the following RAID levels:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;RAID 0&lt;/li&gt;
&lt;li&gt;RAID 10&lt;/li&gt;
&lt;li&gt;RAID 5&lt;/li&gt;
&lt;li&gt;RAID 6&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Test setup&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;CPU: Intel Core i5 2400s @ 2.5 GHz&lt;/li&gt;
&lt;li&gt;RAM: 4 GB&lt;/li&gt;
&lt;li&gt;Drives: 6 x 500 GB, 7200 RPM drives (SATA). &lt;/li&gt;
&lt;li&gt;Operating system: Ubuntu Linux&lt;/li&gt;
&lt;li&gt;RAID: Built-in Linux software RAID (mdadm)&lt;/li&gt;
&lt;li&gt;File system: XFS&lt;/li&gt;
&lt;li&gt;Test file size: 10 GB&lt;/li&gt;
&lt;li&gt;Test software: &lt;a href="http://freecode.com/projects/fio"&gt;FIO&lt;/a&gt; &lt;a href="https://louwrentius.com/files/random-read-template.fio"&gt;read-config&lt;/a&gt; &amp;amp; &lt;a href="https://louwrentius.com/files/random-write-template.fio"&gt;write-config&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Queue depth: 4&lt;/li&gt;
&lt;li&gt;&lt;a href="https://louwrentius.com/files/raid-tester.sh"&gt;Test script&lt;/a&gt; that generates RAID arrays, file systems and runs the tests.&lt;/li&gt;
&lt;li&gt;Cache: all write caching was disabled during testing (see script)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Test results&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/fio/Random-4K-read-performance-lat.svg"&gt;&lt;img alt="read latency" src="https://louwrentius.com/static/images/fio/Random-4K-read-performance-lat.svg" /&gt;&lt;/a&gt;
&lt;a href="https://louwrentius.com/static/images/fio/Random-4K-read-performance-iops.svg"&gt;&lt;img alt="read iops" src="https://louwrentius.com/static/images/fio/Random-4K-read-performance-iops.svg" /&gt;&lt;/a&gt;
&lt;a href="https://louwrentius.com/static/images/fio/Random-4K-read-performance-bw.svg"&gt;&lt;img alt="read bw" src="https://louwrentius.com/static/images/fio/Random-4K-read-performance-bw.svg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/fio/Random-4K-write-performance-lat.svg"&gt;&lt;img alt="write latency" src="https://louwrentius.com/static/images/fio/Random-4K-write-performance-lat.svg" /&gt;&lt;/a&gt;
&lt;a href="https://louwrentius.com/static/images/fio/Random-4K-write-performance-iops.svg"&gt;&lt;img alt="write iops" src="https://louwrentius.com/static/images/fio/Random-4K-write-performance-iops.svg" /&gt;&lt;/a&gt;
&lt;a href="https://louwrentius.com/static/images/fio/Random-4K-write-performance-bw.svg"&gt;&lt;img alt="write bw" src="https://louwrentius.com/static/images/fio/Random-4K-write-performance-bw.svg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I also tested various chunk sizes for each RAID level. These are the results for RAID 10.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/fio/RAID-chunk-size-and-read-performance-iops.svg"&gt;&lt;img alt="read iops chunk" src="https://louwrentius.com/static/images/fio/RAID-chunk-size-and-read-performance-iops.svg" /&gt;&lt;/a&gt;
&lt;a href="https://louwrentius.com/static/images/fio/RAID-chunk-size-and-write-performance-iops.svg"&gt;&lt;img alt="write iops chunk" src="https://louwrentius.com/static/images/fio/RAID-chunk-size-and-write-performance-iops.svg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;If you don't see any images, your browser does not support SVG; try Internet Explorer 9, or a recent version of Google Chrome, Mozilla Firefox or Apple Safari.&lt;/p&gt;
&lt;h3&gt;Analysis&lt;/h3&gt;
&lt;p&gt;With this kind of testing, there are so many variables that it will be difficult to make any solid observations. But these results are interesting. &lt;/p&gt;
&lt;p&gt;&lt;em&gt;Results are in line with reality&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;First of all, the results do not seem unexpected. Six drives at 7200 RPM should each provide about 75 IOPS, which should result in a total of 450 IOPS for the entire array. The read benchmarks show exactly this kind of performance. &lt;/p&gt;
&lt;p&gt;With all caching disabled, write performance is worse. The RAID levels with parity (RAID 5 and RAID 6) in particular show a significant drop in performance when it comes to random writes. RAID 6 write performance got so low and erratic that I wonder if there is something wrong with the driver or the setup. The I/O latency especially is off the charts with RAID 6, so something must be wrong. &lt;/p&gt;
&lt;p&gt;&lt;em&gt;Read performance is equal for all RAID levels&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;However, the most interesting graphs are those about IOPS and latency. Read performance of all the different RAID arrays is almost equal. RAID 10 seems to have the upper hand in all read benchmarks; I'm not sure why. Both bandwidth and latency are better than with the other RAID levels, and I'm really curious about a good technical explanation of why this should be expected. &lt;em&gt;Edit&lt;/em&gt;: RAID 10 is basically multiple RAID 1 sets stuck together, with data striped across the RAID 1 sets. When reading, a single stripe can be delivered by either disk in the particular RAID 1 mirror it resides on, so there is a higher chance that one of the heads is in the vicinity of the requested sector.&lt;/p&gt;
&lt;p&gt;RAID 0 is not something that should be used in a production environment, but it is included to provide a comparison for the other RAID levels. The IOPS graph regarding write performance is most telling. With RAID 10 using 6 drives, you only get the effective IOPS of 3 drives, thus about 225 IOPS. This is exactly what the graph is showing us. &lt;/p&gt;
&lt;p&gt;&lt;em&gt;Raid with parity suffers regarding write performance&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;RAID 5 needs four write I/Os for every application-level write request. So with 6 x 75 = 450 IOPS divided by 4, we get 112.5 IOPS. This is also on par with the graph. This is still OK, but notice the latency: it is clearly around 40 milliseconds, whereas 20 milliseconds is the rule-of-thumb point where performance starts to degrade significantly.&lt;/p&gt;
&lt;p&gt;RAID 6 needs six write I/Os for every application-level write request. So with 450 IOPS total, divided by 6, we only have single-disk performance of 75 IOPS. If we average the line, we do approximately get this performance, but the latency is so erratic that it would not be usable.&lt;/p&gt;
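&lt;p&gt;The arithmetic behind these expectations can be spelled out in a few lines, using the usual write-penalty factors (RAID 0: 1, RAID 10: 2, RAID 5: 4, RAID 6: 6) for six drives of 75 IOPS each:&lt;/p&gt;

```python
# Expected random-write IOPS per RAID level for a six-drive array.
DRIVES = 6
IOPS_PER_DRIVE = 75
WRITE_PENALTY = {"RAID 0": 1, "RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def expected_write_iops(penalty):
    return DRIVES * IOPS_PER_DRIVE / penalty

for level, penalty in WRITE_PENALTY.items():
    print(f"{level}: {expected_write_iops(penalty):.1f} write IOPS")
# RAID 0: 450.0, RAID 10: 225.0, RAID 5: 112.5, RAID 6: 75.0
```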
&lt;p&gt;&lt;em&gt;RAID chunk size and performance&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;So I was wondering whether the RAID array chunk size impacts random I/O performance. It seems not. &lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;Overall, the results seem to indicate that the actual testing itself is realistic. We do get figures that are in tune with theoretical results. &lt;/p&gt;
&lt;p&gt;The erratic RAID 6 write performance needs a more thorough explanation, one that I can't give. &lt;/p&gt;
&lt;p&gt;Based on the test results, it seems that random I/O performance for a single test file is not affected by the chunk size or stripe size of a RAID array.&lt;/p&gt;
&lt;p&gt;The results show me that my benchmarking method provides a nice basis for further testing. &lt;/p&gt;
&lt;p&gt;I have installed a caching proxy server based on Squid, which is now used within my company. It also does content scanning using &lt;a href="http://squidclamav.darold.net"&gt;squidclamav&lt;/a&gt; and …&lt;/p&gt;</summary><content type="html">&lt;p&gt;In this day and age of dynamic web content, how relevant can a caching proxy server be? I believe that the answer could be: quite!&lt;/p&gt;
&lt;p&gt;I have installed a caching proxy server based on Squid, which is now used within my company. It also does content scanning using &lt;a href="http://squidclamav.darold.net"&gt;squidclamav&lt;/a&gt; and Clamav. I wrote an &lt;a href="https://louwrentius.com/blog/2012/08/setting-up-a-squid-proxy-with-clamav-anti-virus-using-c-icap/"&gt;article&lt;/a&gt; about how to setup such a content scanning proxy.&lt;/p&gt;
&lt;p&gt;The thing is that I didn't much care for the actual caching functionality of Squid; I deemed the content-scanning part more interesting. But I'm quite pleased with the actual caching hit ratio. &lt;/p&gt;
&lt;p&gt;&lt;img alt="proxy stats" src="https://louwrentius.com/static/images/proxyisrelevant.gif" /&gt;&lt;/p&gt;
&lt;p&gt;It seems that we have a hit ratio between 20% and 25%, and that is more than I expected. Most content is dynamic in nature, so I expected little of it to be cacheable, but it seems there is still quite some data that can be cached. This should also improve the end-user surfing experience, as the latency for downloading cached content is reduced. &lt;/p&gt;
&lt;p&gt;Of course, this is just a sample for the last hour. However, multiple measurements at different moments yield similar results. &lt;/p&gt;
&lt;p&gt;I think this result proves that a caching proxy server is still relevant, especially if you don't have a fast internet connection. Even if you do, you can still improve the overall browsing experience because frequently requested data is served from the cache. &lt;/p&gt;
&lt;p&gt;There is a caveat: the proxy server itself also introduces latency. I haven't performed a side-by-side comparison and measured actual responsiveness of browsing with or without a proxy. &lt;/p&gt;</content><category term="Networking"></category></entry><entry><title>Understanding IOPS, latency and storage performance</title><link href="https://louwrentius.com/understanding-iops-latency-and-storage-performance.html" rel="alternate"></link><published>2012-11-25T21:00:00+01:00</published><updated>2012-11-25T21:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2012-11-25:/understanding-iops-latency-and-storage-performance.html</id><summary type="html">&lt;hr&gt;
&lt;p&gt;Update 2020: I've written &lt;a href="https://louwrentius.com/understanding-storage-performance-iops-and-latency.html"&gt;another blogpost&lt;/a&gt; about this topic, including some benchmark examples.&lt;/p&gt;
&lt;hr&gt;

&lt;p&gt;When most people think about storage performance, they think about throughput. But throughput is similar to the top speed of a car. In reality, you will almost never reach the top speed of your car (unless you …&lt;/p&gt;</summary><content type="html">&lt;hr&gt;
&lt;p&gt;Update 2020: I've written &lt;a href="https://louwrentius.com/understanding-storage-performance-iops-and-latency.html"&gt;another blogpost&lt;/a&gt; about this topic, including some benchmark examples.&lt;/p&gt;
&lt;hr&gt;

&lt;p&gt;When most people think about storage performance, they think about throughput. But throughput is similar to the top speed of a car. In reality, you will almost never reach the top speed of your car (unless you are living in Germany). And that's fine, because in most situations that's not so relevant. &lt;/p&gt;
&lt;p&gt;For instance, properties like how fast your car accelerates and how well the car handles bends and corners are often more important than its top speed. And this example also holds for storage performance. &lt;/p&gt;
&lt;p&gt;Most people know that SSDs are often way faster than regular mechanical hard drives. But it's not about the throughput of these devices. It's all about Input/Output operations per second (IOPS). If you can handle a high number of IOPS, that is great for real-life application performance. But IOPS does not tell you the whole story. To be more precise: IOPS is a meaningless figure unless tied to an average latency and a certain request size (how much data is processed with the I/O). Let's first focus on IOPS and latency and talk about the request size later.&lt;/p&gt;
&lt;p&gt;&lt;img alt="latency" src="https://louwrentius.com/static/images/io03.png" /&gt;&lt;/p&gt;
&lt;p&gt;Latency is how fast a single I/O-request is handled. This is very important, because a storage subsystem that can handle 1000 IOPS with an average latency of 10ms may get better application performance than a subsystem that can handle 5000 IOPS with an average latency of 50ms. Especially if the application is sensitive to latency, such as a database service. &lt;/p&gt;
&lt;p&gt;This is a very important thing to understand: how IOPS and latency relate to each other. Here, the car analogy probably breaks down. We need a different one to better understand what is going on. So picture that you are in a supermarket. This is a special supermarket, where customers (I/Os) are served by a cashier (the disk) at an average speed of 10 ms per customer. If you divide one second by 10 ms, you see that this cashier can handle 100 customers per second. But only one at a time, in succession. &lt;/p&gt;
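&lt;p&gt;The supermarket arithmetic, written out as a two-line sketch:&lt;/p&gt;

```python
# One cashier (disk) needs 10 ms per customer (I/O), one at a time,
# so the maximum throughput is 1000 ms / 10 ms = 100 customers per second.
service_time_ms = 10
customers_per_second = 1000 / service_time_ms
print(customers_per_second)  # 100.0
```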
&lt;p&gt;&lt;img alt="serial" src="https://louwrentius.com/static/images/io04.png" /&gt;&lt;/p&gt;
&lt;p&gt;It is clear that although the cashier can handle 100 customers per second, he cannot handle them at the same time! So when a customer arrives at the register, and within those 10 ms of handling time a second customer arrives, that customer must wait. Once the waiting customer is handled by the cashier, handling that customer still takes just 10 ms, but the overall processing time was maybe 15 ms, or in the worst case (two customers arriving at the same time) even 20 ms.&lt;/p&gt;
&lt;p&gt;&lt;img alt="queue" src="https://louwrentius.com/static/images/io055.png" /&gt;&lt;/p&gt;
&lt;p&gt;So it is very important to understand that although a disk may handle individual I/Os with an average latency of 10ms, the actual latency as perceived by the application may be higher as some I/Os must wait in line. &lt;/p&gt;
&lt;p&gt;This example also illustrates that waiting in line increases the latency for a particular I/O to be handled. So if you increase the read I/O queue depth, you will notice that the average latency increases. Longer queues mean higher latency, but also more IOPS!&lt;/p&gt;
&lt;p&gt;&lt;img alt="queue 4" src="https://louwrentius.com/static/images/io02.png" /&gt;&lt;/p&gt;
&lt;p&gt;How is that possible? How can a disk drive suddenly do more random IOPS at the cost of latency? The trick is that the storage subsystem can be smart: it looks at the queue and reorders the I/Os in such a way that the actual access pattern to disk becomes more serialised. So a disk can serve more IOPS at the cost of an increase in average latency. Depending on the achieved latency and the performance requirements of the application layer, this can be acceptable or not.&lt;/p&gt;
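&lt;p&gt;The trade-off can be sketched with a deliberately simplified model. The numbers below are made up purely for illustration: assume that reordering a deeper queue shaves the average per-I/O service time from 10 ms down to 6 ms, while each I/O now waits behind half the queue on average.&lt;/p&gt;

```python
# Illustrative-only model of the queue-depth trade-off.
def model(queue_depth, service_ms):
    iops = 1000 / service_ms
    # On average an I/O waits behind half the queue before being served.
    avg_latency_ms = service_ms * (queue_depth + 1) / 2
    return iops, avg_latency_ms

print(model(1, 10))  # queue depth 1: (100.0, 10.0)
print(model(8, 6))   # deeper queue: more IOPS, but higher average latency
```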
&lt;p&gt;In future blog posts I will show some performance benchmarks of a single drive to illustrate these examples.&lt;/p&gt;</content><category term="Storage"></category></entry><entry><title>Linux: get a list of all disks and their size</title><link href="https://louwrentius.com/linux-get-a-list-of-al-disks-and-their-size.html" rel="alternate"></link><published>2012-11-25T02:00:00+01:00</published><updated>2012-11-25T02:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2012-11-25:/linux-get-a-list-of-al-disks-and-their-size.html</id><summary type="html">&lt;p&gt;To get a list of all disk drives of a Linux system, such as this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Disk /dev/md0: 58.0 GB
Disk /dev/md1: 2015 MB
Disk /dev/md5: 18002.2 GB
Disk /dev/sda: 60.0 GB
Disk /dev/sdb: 60.0 GB
Disk /dev/sdc: 1000.1 GB …&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</summary><content type="html">&lt;p&gt;To get a list of all disk drives of a Linux system, such as this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Disk /dev/md0: 58.0 GB
Disk /dev/md1: 2015 MB
Disk /dev/md5: 18002.2 GB
Disk /dev/sda: 60.0 GB
Disk /dev/sdb: 60.0 GB
Disk /dev/sdc: 1000.1 GB
Disk /dev/sdd: 1000.1 GB
Disk /dev/sde: 1000.1 GB
Disk /dev/sdf: 1000.1 GB
Disk /dev/sdg: 1000.1 GB
Disk /dev/sdh: 1000.1 GB
Disk /dev/sdi: 1000.1 GB
Disk /dev/sdj: 1000.1 GB
Disk /dev/sdk: 1000.1 GB
Disk /dev/sdl: 1000.1 GB
Disk /dev/sdm: 1000.1 GB
Disk /dev/sdn: 1000.1 GB
Disk /dev/sdo: 1000.1 GB
Disk /dev/sdp: 1000.1 GB
Disk /dev/sdq: 1000.1 GB
Disk /dev/sdr: 1000.1 GB
Disk /dev/sds: 1000.2 GB
Disk /dev/sdt: 1000.2 GB
Disk /dev/sdu: 1000.2 GB
Disk /dev/sdv: 1000.2 GB
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You can use the following command:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;#!/bin/bash
for x in `cat /proc/diskstats | grep -o &amp;#39;sd.\|hd.\|md.&amp;#39; | sort -u`
do 
    fdisk -l /dev/$x 2&amp;gt;/dev/null | grep &amp;#39;Disk /&amp;#39; | cut -d &amp;quot;,&amp;quot; -f 1 
done
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</content><category term="Linux"></category></entry><entry><title>Why VMware vSphere replication is changing the game</title><link href="https://louwrentius.com/why-vmware-vsphere-replication-is-changing-the-game.html" rel="alternate"></link><published>2012-11-12T23:00:00+01:00</published><updated>2012-11-12T23:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2012-11-12:/why-vmware-vsphere-replication-is-changing-the-game.html</id><summary type="html">&lt;p&gt;If you are running a serious VMware environment, chances are you do have a SAN. Often with smaller setups, many people do employ multiple VMware hosts, but the SAN is a single point of failure.&lt;/p&gt;
&lt;p&gt;SANs are often fully redundant devices, with redundant PSUs, storage controllers, network links and RAID …&lt;/p&gt;</summary><content type="html">&lt;p&gt;If you are running a serious VMware environment, chances are you do have a SAN. Often with smaller setups, many people do employ multiple VMware hosts, but the SAN is a single point of failure.&lt;/p&gt;
&lt;p&gt;SANs are often fully redundant devices, with redundant PSUs, storage controllers, network links and RAID arrays. But with all that redundancy built in, they still fail. I've seen it happen, and a failing SAN is the worst.&lt;/p&gt;
&lt;p&gt;So I'd rather have two cheap entry-level SANs, if possible, than just a single big one while keeping my fingers crossed that it won't fail.&lt;/p&gt;
&lt;p&gt;A redundant SAN environment, where you basically deploy two separate SAN devices each with their own storage, needs a replication setup. And replication between SAN environments often requires extra licences or more expensive setups. On top of that, the replication mechanism must play well with your virtualisation layer, such as VMware.&lt;/p&gt;
&lt;p&gt;But the fun thing is that VMware made everything way simpler by integrating their own &lt;a href="http://www.google.nl/url?sa=t&amp;amp;rct=j&amp;amp;q=vmware%20vsphere%20replication%205.1&amp;amp;source=web&amp;amp;cd=1&amp;amp;cad=rja&amp;amp;sqi=2&amp;amp;ved=0CCYQFjAA&amp;amp;url=http%3A%2F%2Fwww.vmware.com%2Ffiles%2Fpdf%2Ftechpaper%2FIntroduction-to-vSphere-Replication.pdf&amp;amp;ei=0muhUPayNMfe4QS2woGIDg&amp;amp;usg=AFQjCNERmOjUUuCjMaNGBAU9RZ4--By4JQ"&gt;storage replication&lt;/a&gt; into their vSphere product. I have no experience whatsoever with VMware's new built-in replication feature, but I believe that it is significant. &lt;/p&gt;
&lt;p&gt;&lt;a href="http://pubs.vmware.com/vsphere-51/topic/com.vmware.ICbase/PDF/vsphere-replication-51-admin.pdf"&gt;Replication&lt;/a&gt; is a new feature introduced in VMware vSphere 5.1 that is now part of the vSphere Essentials Plus Kit and vSphere Standard. So if you start with two or three VMware hosts and two entry-level SAN devices, you can be quite redundant and can have a fully redundant setup. And that will cost you around 3800 Euro ~ 4800 US dollar. &lt;/p&gt;
&lt;p&gt;The Essentials Plus Kit is a nice environment for smaller companies, but license-wise, it does not scale as you are stuck with a maximum of 6 physical CPUs and a maximum of three hosts. However it seems that when you need to expand beyond that capacity, you can trade in your existing license and obtain a discount when upgrading to - for example - vSphere Standard or Enterprise.&lt;/p&gt;
&lt;p&gt;The most significant thing about the built-in replication is that it no longer matters what you use for your storage backend. If you use two entirely different devices from different vendors, that's OK, because VMware handles all the replication. The SAN devices become just dumb storage boxes; most of us can just configure whatever supports iSCSI.&lt;/p&gt;
&lt;p&gt;You could even try and be cheap and setup your own &lt;a href="https://louwrentius.com/blog/2009/07/20-disk-18-tb-raid-6-storage-based-on-debian-linux/"&gt;homegrown storage box&lt;/a&gt;es. 
It may not have all the cool features of a true SAN, but at least you have redundant storage. &lt;/p&gt;
&lt;p&gt;I'm really curious about this feature and I hope it works well. I'm seriously considering deploying this for the VMware setup of the company I currently work for. It does however require an extra external host that manages the actual replication, which may add to the cost. &lt;/p&gt;
&lt;p&gt;Any comments are welcome. &lt;/p&gt;</content><category term="Uncategorized"></category></entry><entry><title>Personal Security: erase your computer or phone before repair</title><link href="https://louwrentius.com/personal-security-erase-your-computer-or-phone-before-repair.html" rel="alternate"></link><published>2012-11-04T01:00:00+01:00</published><updated>2012-11-04T01:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2012-11-04:/personal-security-erase-your-computer-or-phone-before-repair.html</id><summary type="html">&lt;p&gt;Computer nerds are self sufficient when it comes to fixing their computer. Non-computer experts have to find some other person with greater computer knowledge to repair their computer or phone. That person will then be able to access all data stored on their computer or phone.&lt;/p&gt;
&lt;p&gt;By handing over their …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Computer nerds are self sufficient when it comes to fixing their computer. Non-computer experts have to find some other person with greater computer knowledge to repair their computer or phone. That person will then be able to access all data stored on their computer or phone.&lt;/p&gt;
&lt;p&gt;By handing over their computer to a third party, such as a computer repair shop, they are giving their personal data to a stranger. And it is so easy for that stranger to access this data. &lt;a href="http://www.youtube.com/watch?v=H4hAgRVPPq8"&gt;So they will&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This is not only true for computers, but especially for phones. If you are a woman, you should be extra concerned. It is so easy to obtain access to your photos. &lt;a href="http://www.theregister.co.uk/2012/11/03/verizon_nude_pics_theft/"&gt;And people do&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The only safe thing to do is either:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;encrypt your computer with full disk encryption (&lt;a href="http://www.truecrypt.org"&gt;Truecrypt?&lt;/a&gt;);&lt;/li&gt;
&lt;li&gt;&lt;a href="http://www.dban.org"&gt;wipe&lt;/a&gt; all internal hard drives.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Both actions will make it impossible for the computer technician to resolve any operating system or software related issues. Also, it will be harder to diagnose hardware failure. And if you erase the computer, who is going to reinstall it?&lt;/p&gt;
&lt;p&gt;A third option would be to implement a secure file container where a user would put personal information. But this concept is way too hard to understand and implement for most users. &lt;/p&gt;
&lt;p&gt;So in the end most people must find a person they can trust and who is willing to fix their computer. But that is never a safe bet. &lt;/p&gt;
&lt;p&gt;So assuming that you must trust your computer to a person you don't know too well, it is smart to never store any content, especially personal pictures or videos on your computer that you would not want them to see.&lt;/p&gt;
&lt;p&gt;I had to turn in my iMac for repair because the internal hard drive was dying. So I erased the entire disk by overwriting it with zeros. This takes a few hours, but it guarantees that my data will not fall in the wrong hands. Honestly, I don't have any data I'd really want to hide, but still, it's my data and I don't want it in the hands of unknown people. &lt;/p&gt;</content><category term="Security"></category></entry><entry><title>Why I bought a digital projector (Panasonic PT-AT5000E)</title><link href="https://louwrentius.com/why-i-bought-a-digital-projector-panasonic-pt-at5000e.html" rel="alternate"></link><published>2012-10-21T00:00:00+02:00</published><updated>2012-10-21T00:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2012-10-21:/why-i-bought-a-digital-projector-panasonic-pt-at5000e.html</id><summary type="html">&lt;p&gt;I don't have a TV. I haven't been watching TV for more than 10 years. But I love to watch movies or great series like Dexter and Game of Thrones. Until recently, I watched movies or series on my 27" iMac. Twenty-seven inch is large for a computer screen but …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I don't have a TV. I haven't been watching TV for more than 10 years. But I love to watch movies or great series like Dexter and Game of Thrones. Until recently, I watched movies or series on my 27" iMac. Twenty-seven inch is large for a computer screen but for a TV, it's quite small.&lt;/p&gt;
&lt;p&gt;&lt;img alt="home setup" src="https://louwrentius.com/static/images/setup-01-sm-sm.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;When it comes to watching movies, a bigger screen is always better. The big screen of a movie theatre is the most important reason for movie enthusiasts to experience a movie in the theatre. But there is a way to get that movie-theatre feel into your living room: get a digital projector. That's what I did.&lt;/p&gt;
&lt;p&gt;I decided to get a ceiling-mounted projection screen that I could lower just in front of my computer and table. I chose the "celexon Rollo Economy" with a dimension of 277 x 156 cm (109 x 61 inch). This would give me a TV screen with a theoretical size of &lt;em&gt;125 inch&lt;/em&gt; diagonally! In practice, the actual projected image is around 120 inch.&lt;/p&gt;
&lt;p&gt;Just think about that. A 120 inch television. One hundred and twenty inches. Three metres diagonal. &lt;/p&gt;
&lt;p&gt;&lt;img alt="screen retracted" src="https://louwrentius.com/static/images/projectionscreen01.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;The screen can be lowered by just pulling on the edge. I have to reach for it on my toes but it's ok. As pictured below, the screen has an additional black area so it can be lowered further away from the ceiling. This reduces the effect of the ceiling reflecting light back onto the screen, which degrades picture quality. It also allows you to hang the screen at the desired height, so you don't need to look up.&lt;/p&gt;
&lt;p&gt;&lt;img alt="screen retracted" src="https://louwrentius.com/static/images/projectionscreen02.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;At the other end of the room, I decided to mount a projector on a ceiling mount. My iMac would send all digital content through a long HDMI cable to the projector. &lt;/p&gt;
&lt;p&gt;After doing some research I decided to buy the &lt;a href="http://www.panasonic.co.uk/html/en_GB/Products/Home+Entertainment/Projectors/PT-AT5000E/Overview/8073212/index.html"&gt;Panasonic PT-AT5000E&lt;/a&gt; also known as the PT-AE7000. This digital projector supports Full HD 1080p at 1920x1080. It also supports 3D but I'm not really interested in that. &lt;/p&gt;
&lt;p&gt;&lt;img alt="projector" src="https://louwrentius.com/static/images/panasonicprojector.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;I have never seen a movie on a digital projector before and I can't compare it to other digital projectors. But one thing is sure. Watching a 1080p movie on a 120" display projected by this digital projector is quite an experience. To me the picture is astounding. It is almost as if you are in a theatre. A digital projector may not always be practical, but it is surely awesome. &lt;/p&gt;
&lt;p&gt;I do have some remarks about the whole setup with a digital projector:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;you need a big screen and must mount it somewhere;&lt;/li&gt;
&lt;li&gt;the projector is just gigantic and you must accept that it hangs from your ceiling somewhere, something your spouse may not like;&lt;/li&gt;
&lt;li&gt;you need to run some extra cables for power and HDMI through your house;&lt;/li&gt;
&lt;li&gt;you need to put the lens cap on when the device is not in use, often a hassle;&lt;/li&gt;
&lt;li&gt;the lamp takes several minutes to reach optimal operational temperature;&lt;/li&gt;
&lt;li&gt;although my panasonic projector is almost silent, it does make noise;&lt;/li&gt;
&lt;li&gt;you must have good curtains to block all external light or you won't see a thing;&lt;/li&gt;
&lt;li&gt;you need to wait for movies to be released on Blu-ray or the Internet to be able to watch them;&lt;/li&gt;
&lt;li&gt;the lamp will wear out and replacements are expensive;&lt;/li&gt;
&lt;li&gt;if anyone would ask you about the size of your TV you would have to lie or feel like a douchebag for telling the truth;&lt;/li&gt;
&lt;li&gt;non-HD content looks like total crap and you need to make the picture smaller just to stand it;&lt;/li&gt;
&lt;li&gt;the whole home theatre setup is just expensive and you could have bought a lot of theatre tickets for that money;&lt;/li&gt;
&lt;li&gt;black levels are often not as good as those of plasma TVs.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;But in the end I would not hesitate to fork out the money for a home theatre setup like this, because of the downsides of movie theatres:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;other people. They are the reason you don't always have the best seat in front of the screen;&lt;/li&gt;
&lt;li&gt;more other people, who text, talk or phone during the movie; &lt;/li&gt;
&lt;li&gt;even more other people, who eat crappy food, making noise, and drink crappy liquids as if it were a slurping contest;&lt;/li&gt;
&lt;li&gt;you need to pay money;&lt;/li&gt;
&lt;li&gt;you need to sit in some crappy movie theatre chair;&lt;/li&gt;
&lt;li&gt;you must pay top dollar for a drink or food;&lt;/li&gt;
&lt;li&gt;no pauses so no toilet breaks; &lt;/li&gt;
&lt;li&gt;you need to go there, and be there at a specific time;&lt;/li&gt;
&lt;li&gt;you need to sit through many commercials before the movie actually starts;&lt;/li&gt;
&lt;li&gt;the sound quality is often horrible.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;So in my opinion, if you are willing to spend money on it, I would advise you to seriously look into getting a digital projector. &lt;/p&gt;</content><category term="Uncategorized"></category></entry><entry><title>Experiences running ZFS on Ubuntu Linux 12.04</title><link href="https://louwrentius.com/experiences-running-zfs-on-ubuntu-linux-1204.html" rel="alternate"></link><published>2012-10-18T23:00:00+02:00</published><updated>2012-10-18T23:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2012-10-18:/experiences-running-zfs-on-ubuntu-linux-1204.html</id><summary type="html">&lt;p&gt;I really like ZFS because with current data sets, I do believe that data corruption may start becoming an issue. The thing is that the license under which ZFS is released does not permit it to be used in the Linux kernel. That's quite unfortunate, but there is hope. There …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I really like ZFS because with current data sets, I do believe that data corruption may start becoming an issue. The thing is that the license under which ZFS is released does not permit it to be used in the Linux kernel. That's quite unfortunate, but there is hope. There is a project called &lt;a href="http://zfsonlinux.org"&gt;'ZFS on Linux'&lt;/a&gt; which provides ZFS support through a kernel module, circumventing any license issues.&lt;/p&gt;
&lt;p&gt;But as ZFS is a true next generation file system and the only one in its class stable enough for production use, I decided to give it a try.&lt;/p&gt;
&lt;p&gt;I used my existing download server running Ubuntu 12.04 LTS. I followed these steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;move all data to my big storage nas;&lt;/li&gt;
&lt;li&gt;destroy the existing MDADM RAID arrays;&lt;/li&gt;
&lt;li&gt;recreate a new storage array through ZFS;&lt;/li&gt;
&lt;li&gt;move all data back to the new storage array.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Installation of ZFS is straightforward and well documented by the ZFSonLinux project. The main thing is how you set up your storage. My download server has six 500 GB disks and four 2 TB disks, a total of ten drives. So I decided to create a single zpool (logical volume) consisting of two vdevs (arrays): one vdev of the six 500 GB drives and a second vdev of the four 2 TB drives.&lt;/p&gt;
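&lt;p&gt;Creating such a layout boils down to a single command. A sketch of what I mean (the device names are placeholders; use your own /dev/disk/by-path/ names):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# one pool built from two raidz1 vdevs
zpool create zpool \
    raidz1 disk1 disk2 disk3 disk4 disk5 disk6 \
    raidz1 disk7 disk8 disk9 disk10
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;A second vdev can also be attached to an existing pool later with 'zpool add'.&lt;/p&gt;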
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@server:~# zpool status
  pool: zpool
 state: ONLINE
 scan: scrub repaired 0 in 1h12m with 0 errors on Fri Sep  7 
config:

    NAME                               STATE   READ WRITE CKSUM
    zpool                              ONLINE     0     0     0
      raidz1-0                         ONLINE     0     0     0
        pci-0000:03:04.0-scsi-0:0:1:0  ONLINE     0     0     0
        pci-0000:03:04.0-scsi-0:0:2:0  ONLINE     0     0     0
        pci-0000:03:04.0-scsi-0:0:3:0  ONLINE     0     0     0
        pci-0000:03:04.0-scsi-0:0:4:0  ONLINE     0     0     0
      raidz1-1                         ONLINE     0     0     0
        pci-0000:00:1f.2-scsi-2:0:0:0  ONLINE     0     0     0
        pci-0000:00:1f.2-scsi-3:0:0:0  ONLINE     0     0     0
        pci-0000:03:04.0-scsi-0:0:0:0  ONLINE     0     0     0
        pci-0000:03:04.0-scsi-0:0:5:0  ONLINE     0     0     0
        pci-0000:03:04.0-scsi-0:0:6:0  ONLINE     0     0     0
        pci-0000:03:04.0-scsi-0:0:7:0  ONLINE     0     0     0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;So the zpool consists of two vdevs that each consist of the physical drives. &lt;/p&gt;
&lt;p&gt;Everything went smoothly so far. I did have one issue though. I decided to remove a separate disk drive from the system that was no longer needed. As I initially set up the array based on device names (/dev/sda, /dev/sdb), the array broke when device names changed due to the missing drive. &lt;/p&gt;
&lt;p&gt;So I repaired that by issuing these commands:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;zpool export zpool
zpool import zpool -d /dev/disk/by-path/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;It's important to carefully read the &lt;a href="http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool"&gt;FAQ of ZFS on Linux&lt;/a&gt; and understand that you should not use regular device names like /dev/sda for your ZFS array. 
It is recommended to use /dev/disk/by-path/ or /dev/disk/zpool/ precisely to prevent the issue I had with the disappeared drive.&lt;/p&gt;
&lt;p&gt;As discussed in my &lt;a href="https://louwrentius.com/blog/2011/02/why-i-do-not-use-zfs-as-a-file-system-for-my-nas/"&gt;blog entry&lt;/a&gt; on why I decided not to use ZFS for my big 18 TB storage NAS, ZFS does not support 'growing' of an array as Linux software RAID does. &lt;/p&gt;
&lt;p&gt;As the zpool consists of different hard disk types, performance tests are not consistent. I've seen 450 MB/s read speeds on the zpool, which is more than sufficient for me.&lt;/p&gt;
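&lt;p&gt;If you want a rough sequential read number for your own pool, a simple dd run will do (a crude method, not a proper benchmark; note that with compression enabled, data written from /dev/zero will give inflated results):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# write a large test file first, then read it back sequentially
dd if=/dev/zero of=/zpool/testfile bs=1M count=10000
dd if=/zpool/testfile of=/dev/null bs=1M
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;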
&lt;p&gt;ZFS on Linux works, is fast enough and easy to set up. If I were setting up my big storage NAS today, I would probably choose ZFS on Linux. I would accept that I could not just expand the array with extra drives the way MDADM permits you to grow an array.&lt;/p&gt;
&lt;p&gt;In some way, ZFS on Linux combines the best of both worlds: one of the best modern file systems on a modern and well-supported Linux distribution. Only the ZFS module itself may be the weak factor, as it's fairly new on Linux and not optimised yet. &lt;/p&gt;
&lt;p&gt;Or we might have to just wait until Btrfs is mature enough for production use.&lt;/p&gt;</content><category term="ZFS"></category></entry><entry><title>Compiling Multicore PAR2 on Ubuntu 12.04 LTS Precise Pangolin</title><link href="https://louwrentius.com/compiling-multicore-par2-on-ubuntu-1204-lts-precise-pangolin.html" rel="alternate"></link><published>2012-09-16T01:00:00+02:00</published><updated>2012-09-16T01:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2012-09-16:/compiling-multicore-par2-on-ubuntu-1204-lts-precise-pangolin.html</id><summary type="html">&lt;p&gt;If you want to compile PAR2 with multicore support on Linux, it may not work right away from source. I used &lt;a href="http://www.chuchusoft.com/par2_tbb/download.html"&gt;this version of PAR2 with multicore support&lt;/a&gt;.
Update 2015: the original link is dead, I found a copy of the source and put it on my own site &lt;a href="https://louwrentius.com/files/par2cmdline-0.4-tbb-20100203.tar.gz"&gt;here …&lt;/a&gt;&lt;/p&gt;</summary><content type="html">&lt;p&gt;If you want to compile PAR2 with multicore support on Linux, it may not work right away from source. I used &lt;a href="http://www.chuchusoft.com/par2_tbb/download.html"&gt;this version of PAR2 with multicore support&lt;/a&gt;.
Update 2015: the original link is dead, I found a copy of the source and put it on my own site &lt;a href="https://louwrentius.com/files/par2cmdline-0.4-tbb-20100203.tar.gz"&gt;here for you to download.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;First, make sure that you have these libraries on your system:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;libtbb-dev
libaio-dev
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;According to &lt;a href="https://forums.sabnzbd.org/viewtopic.php?f=1&amp;amp;t=1220"&gt;this source&lt;/a&gt;, after downloading the source, you need to add this line:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;#include &amp;lt;backward/auto_ptr.h&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;To these files:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;par2cmdline.cpp
commandline.cpp
par2creator.cpp
par2repairer.cpp
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Then edit the Makefile and find the LDADD variable. Add the -lrt option like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;LDADD = -lstdc++ -ltbb -lrt -L.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This did the trick for me and compiled the PAR2 source with multicore support.
Hope it helps somebody.&lt;/p&gt;
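&lt;p&gt;For convenience, here is the whole procedure condensed into one shell session. The sed one-liners are my own shorthand for the manual edits described above, and the directory name is assumed from the tarball name:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;apt-get install libtbb-dev libaio-dev
tar xzf par2cmdline-0.4-tbb-20100203.tar.gz
cd par2cmdline-0.4-tbb-20100203
# prepend the include to the four source files
for f in par2cmdline.cpp commandline.cpp par2creator.cpp par2repairer.cpp; do
    sed -i '1i #include &amp;lt;backward/auto_ptr.h&amp;gt;' &amp;quot;$f&amp;quot;
done
# add -lrt to the linker flags
sed -i 's/^LDADD = .*/LDADD = -lstdc++ -ltbb -lrt -L./' Makefile
./configure &amp;amp;&amp;amp; make &amp;amp;&amp;amp; make install
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;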
&lt;p&gt;If you want to use Multicore PAR2 with SABNZBD you need to go to the Config menu. Then select 'Special' and enable 'par2_multicore'. Save the changes. Then go to 'Switches' and enter '-t+' at the Extra PAR2 Parameters field. &lt;/p&gt;</content><category term="Uncategorized"></category></entry><entry><title>Setting up a Squid proxy with Clamav anti-virus using c-icap</title><link href="https://louwrentius.com/setting-up-a-squid-proxy-with-clamav-anti-virus-using-c-icap.html" rel="alternate"></link><published>2012-08-26T22:00:00+02:00</published><updated>2012-08-26T22:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2012-08-26:/setting-up-a-squid-proxy-with-clamav-anti-virus-using-c-icap.html</id><summary type="html">&lt;p&gt;Security is all about a defence-in-depth strategy. Create multiple layers of defence. Every layer presenting a different set of challenges, requiring different skill sets and technology. So every layer will increase the time and effort to compromise your environment. &lt;/p&gt;
&lt;p&gt;A content-scanning proxy server may provide you with one of these …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Security is all about a defence-in-depth strategy. Create multiple layers of defence. Every layer presenting a different set of challenges, requiring different skill sets and technology. So every layer will increase the time and effort to compromise your environment. &lt;/p&gt;
&lt;p&gt;A content-scanning proxy server may provide you with one of these defensive layers. A content-scanning proxy checks all data for malware and blocks all content presumed to be infected. This may prevent numerous infections of company computers. Basically, the proxy server is virus-scanning all network traffic. &lt;/p&gt;
&lt;p&gt;&lt;img alt="warning" src="https://louwrentius.com/static/images/squidwarning.png" /&gt;&lt;/p&gt;
&lt;p&gt;But there is a severe limitation. Any data requested through an SSL-connection (https://) cannot be scanned, precisely because it is encrypted. So if a blackhat is smart and serves all malware through HTTPS, a content scanning proxy will not stop that malware. There are man-in-the-middle solutions that do allow you to inspect SSL traffic, but they have some limitations and are outside the scope of this post.&lt;/p&gt;
&lt;p&gt;As I believe that most malware is still being served through unencrypted HTTP sites, a content-scanning proxy does create an extra layer of defence. I think it is worth the effort.&lt;/p&gt;
&lt;p&gt;So I decided to create a content-scanning proxy based on available open-source software. In this case, open-source as in free to use in commercial settings. &lt;/p&gt;
&lt;p&gt;So in this post I will document how to setup a content-scanning proxy based on &lt;a href="http://www.squid-cache.org"&gt;Squid 3.1&lt;/a&gt;, &lt;a href="http://c-icap.sourceforge.net"&gt;c-icap&lt;/a&gt; version 1, the &lt;a href="http://squidclamav.darold.net"&gt;Squidclamav&lt;/a&gt; module and the &lt;a href="http://www.clamav.net/lang/en/"&gt;Clamav&lt;/a&gt; anti-virus scanner.&lt;/p&gt;
&lt;p&gt;The basis of this proxy server is Ubuntu 12.04 LTS. &lt;/p&gt;
&lt;p&gt;&lt;em&gt;Important&lt;/em&gt;: &lt;/p&gt;
&lt;h3&gt;How does it work?&lt;/h3&gt;
&lt;p&gt;The Squid proxy server must pass all content to the Clamav daemon. Squid can't do that by itself. It needs some glue service. For this purpose, a standard protocol has been designed called 'ICAP'. The c-icap daemon, combined with the squidclamav module, is the glue between the proxy server and the anti-virus software. The fun thing about c-icap is that you can add extra content-scanning features just by adding modules. You can decide to implement additional commercial anti-virus products in addition to Clamav.&lt;/p&gt;
&lt;h3&gt;Installing Clamav and c-icap + development files&lt;/h3&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;1. apt-get install clamav-daemon c-icap  libicapapi-dev apache2
2. freshclam (update clamav on the spot)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Apache or any other HTTP server with CGI support is required to display virus-warnings to end-users.&lt;/p&gt;
&lt;h3&gt;Installing squidclamav module for c-icap&lt;/h3&gt;
&lt;p&gt;Do not install squidclamav with apt-get; that version seems to contain bugs that prevent pages from loading properly.
The latest version straight from the vendor does work properly.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;1. cd /usr/src/
2. download the source from: 
&amp;quot;http://sourceforge.net/projects/squidclamav/&amp;quot;
3. tar xzf squidclamav-6.8.tar.gz
4. cd squidclamav-6.8
5. ./configure
6. make -j 2
7. make install
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Squid configuration&lt;/h3&gt;
&lt;p&gt;Please download my sample &lt;a href="https://louwrentius.com/files/proxy-squid.txt"&gt;Squid.conf&lt;/a&gt; configuration. The icap lines are of interest.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024

icap_service service_req reqmod_precache bypass=0 \ 
    icap://127.0.0.1:1344/squidclamav
icap_service service_resp respmod_precache bypass=0 \ 
    icap://127.0.0.1:1344/squidclamav

adaptation_access service_req allow all
adaptation_access service_resp allow all
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;It is the icap:// URL that calls a particular icap service (squidclamav) that processes all data.&lt;/p&gt;
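&lt;p&gt;Before pointing Squid at it, you can check that the c-icap daemon actually serves the squidclamav service with the c-icap-client utility that ships with c-icap (options as I understand them; check the man page):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# ask the local icap daemon for the squidclamav service
c-icap-client -i 127.0.0.1 -p 1344 -s squidclamav
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;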
&lt;h3&gt;Squidclamav icap module configuration&lt;/h3&gt;
&lt;p&gt;The configuration is stored in /etc/squidclamav.conf, and this is what I used:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;maxsize 5000000
redirect http://proxy.company.local/cgi-bin/clwarn.cgi
clamd_ip 127.0.0.1
clamd_port 3310
timeout 1
logredir 0
dnslookup 1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Of interest is the redirect URL, which tells the user that a virus was found.
That line redirects the user to a page as shown at the beginning of this post.
You can customise this page with CSS; for example, you can add the company logo to make it look more official.&lt;/p&gt;
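&lt;p&gt;An easy way to verify the whole chain is to request the harmless EICAR test file through the proxy; if everything works, you get the warning page instead of the file. The proxy address and URL below are examples:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# fetch the EICAR test string through the proxy
curl -x http://proxy.company.local:3128 http://www.eicar.org/download/eicar.com
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;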
&lt;h3&gt;c-icap configuration&lt;/h3&gt;
&lt;p&gt;This is the configuration I use:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;PidFile /var/run/c-icap/c-icap.pid
CommandsSocket /var/run/c-icap/c-icap.ctl
Timeout 300
MaxKeepAliveRequests 100
KeepAliveTimeout 600  
StartServers 3
MaxServers 10
MinSpareThreads     10
MaxSpareThreads     20
ThreadsPerChild     10
MaxRequestsPerChild  0
Port 1344 
User c-icap
Group nogroup
ServerAdmin you@your.address
ServerName Anti-Virus-Proxy
TmpDir /tmp
MaxMemObject 1048576
DebugLevel 0
ModulesDir /usr/lib/c_icap
ServicesDir /usr/lib/c_icap
TemplateDir /usr/share/c_icap/templates/
TemplateDefaultLanguage en
LoadMagicFile /etc/c-icap/c-icap.magic
RemoteProxyUsers off
RemoteProxyUserHeader X-Authenticated-User
RemoteProxyUserHeaderEncoded on
ServerLog /var/log/c-icap/server.log
AccessLog /var/log/c-icap/access.log
Service echo srv_echo.so
Service squidclamav squidclamav.so
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Configuring the Apache web server&lt;/h3&gt;
&lt;p&gt;The warning page should be put in /usr/lib/cgi-bin. You may have to copy clwarn.cgi into this directory.
Also make sure that your Apache configuration contains a directive like:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
&amp;lt;Directory &amp;quot;/usr/lib/cgi-bin&amp;quot;&amp;gt;
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
&amp;lt;/Directory&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Automatic proxy configuration through DHCP and WPAD&lt;/h3&gt;
&lt;p&gt;To make the entire setup extra nice, use your DHCP configuration to
inform clients about the proxy settings. Clients must be configured to autodetect proxy settings for this to work.&lt;/p&gt;
&lt;p&gt;Put a wpad.dat in the root directory of your http server:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;function FindProxyForURL(url, host)
{
    if (dnsDomainIs(host, &amp;quot;localhost&amp;quot;)) return &amp;quot;DIRECT&amp;quot;; 
    if (isInNet(host, &amp;quot;127.0.0.0&amp;quot;, &amp;quot;255.0.0.0&amp;quot;)) return &amp;quot;DIRECT&amp;quot;;
    if (isPlainHostName(host)) return &amp;quot;DIRECT&amp;quot;;
    if (isInNet(host, &amp;quot;192.168.0.0&amp;quot;, &amp;quot;255.255.255.0&amp;quot;)) return &amp;quot;DIRECT&amp;quot;;
    return &amp;quot;PROXY proxy.company.local:3128&amp;quot;;
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;And also add the appropriate mime type for .dat files in /etc/mime.types&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;application/x-ns-proxy-autoconfig           dat
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Restart the apache webserver after these modifications.&lt;/p&gt;
&lt;p&gt;Now add a record for the proxy, such as proxy.company.local, to the configuration of your DNS server.&lt;/p&gt;
&lt;p&gt;Most importantly, add this directive to the general portion of the DHCP server configuration file:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;option local-proxy-config code 252 = text;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Add this directive to the particular scope for your network:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;option local-proxy-config &amp;quot;http://proxy.company.local/wpad.dat&amp;quot;;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Restart your DNS and DHCP server.&lt;/p&gt;
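&lt;p&gt;You can quickly verify that the WPAD file is being served with the right MIME type using curl (hostname as configured above):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# the Content-Type header should read application/x-ns-proxy-autoconfig
curl -sI http://proxy.company.local/wpad.dat | grep -i content-type
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;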
&lt;h3&gt;Monitoring proxy performance&lt;/h3&gt;
&lt;p&gt;The cachemgr.cgi file provides very detailed information about the performance of your Squid proxy.
This is more relevant for actual caching performance than for anti-virus scanning, but it may still be of interest.
Especially the 'general runtime information' page is of interest, as it shows the hit rate, memory usage, etc.&lt;/p&gt;
&lt;p&gt;First, make sure you take the appropriate precautions so as not to expose this page to the entire company
network without some protection, as it can contain sensitive information.&lt;/p&gt;
&lt;p&gt;If you have installed squid-cgi just browse to http://your.proxy.server/cgi-bin/cachemgr.cgi&lt;/p&gt;
&lt;p&gt;Some example data:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Cache information for squid:
    Hits as % of all requests:  5min: 10.3%, 60min: 4.1%
    Hits as % of bytes sent:    5min: 81.4%, 60min: 5.2%
    Memory hits as % of hit requests:   5min: 0.0%, 60min: 14.8%
    Disk hits as % of hit requests: 5min: 0.0%, 60min: 74.1%
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Final words&lt;/h3&gt;
&lt;p&gt;This whole configuration should be sufficient to set up a content-scanning proxy server. I have no experience with how well this solution performs, so you may have to do some benchmarks of your own to determine if it is capable of handling the traffic your users generate. The fun thing about this setup is that it is modular. For example, you can have one Squid + HTTP box, and a separate host
just for the c-icap service and Clamav service. &lt;/p&gt;
&lt;p&gt;Besides the whole content scanning part, a proxy server, based on some non-scientific tests, does seem to
improve performance for end-users. It may save you an expensive upgrade to a faster corporate internet connection.&lt;/p&gt;</content><category term="Security"></category></entry><entry><title>HP Proliant Microserver N40L is a great NAS or Router</title><link href="https://louwrentius.com/hp-proliant-microserver-n40l-is-a-great-nas-or-router.html" rel="alternate"></link><published>2012-07-29T09:00:00+02:00</published><updated>2012-07-29T09:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2012-07-29:/hp-proliant-microserver-n40l-is-a-great-nas-or-router.html</id><summary type="html">&lt;p&gt;&lt;em&gt;Update 2012-12-11:&lt;/em&gt; It seems that a &lt;a href="http://www.wegotserved.com/2011/12/21/hp-planning-powerful-proliant-microserver"&gt;new and faster&lt;/a&gt; version is on the horizon. &lt;/p&gt;
&lt;p&gt;&lt;em&gt;Update 2012-12-21:&lt;/em&gt; Yes, the new model &lt;a href="http://h10010.www1.hp.com/wwpc/uk/en/sm/WF06b/15351-15351-4237916-4237917-4237917-4248009-5336624.html?dnr=1"&gt;G7 N54L&lt;/a&gt; is out.&lt;/p&gt;
&lt;p&gt;Some products seem almost too good to be true and I think the &lt;a href="http://h10010.www1.hp.com/wwpc/me/en/sm/WF06b/15351-15351-4237916-4237917-4237917-4248009-5163346.html?dnr=1"&gt;HP Proliant Microserver N40L&lt;/a&gt; is one of them. If you are into the …&lt;/p&gt;</summary><content type="html">&lt;p&gt;&lt;em&gt;Update 2012-12-11:&lt;/em&gt; It seems that a &lt;a href="http://www.wegotserved.com/2011/12/21/hp-planning-powerful-proliant-microserver"&gt;new and faster&lt;/a&gt; version is on the horizon. &lt;/p&gt;
&lt;p&gt;&lt;em&gt;Update 2012-12-21:&lt;/em&gt; Yes, the new model &lt;a href="http://h10010.www1.hp.com/wwpc/uk/en/sm/WF06b/15351-15351-4237916-4237917-4237917-4248009-5336624.html?dnr=1"&gt;G7 N54L&lt;/a&gt; is out.&lt;/p&gt;
&lt;p&gt;Some products seem almost too good to be true and I think the &lt;a href="http://h10010.www1.hp.com/wwpc/me/en/sm/WF06b/15351-15351-4237916-4237917-4237917-4248009-5163346.html?dnr=1"&gt;HP Proliant Microserver N40L&lt;/a&gt; is one of them. If you are in the market for a very small, silent, efficient, yet capable home server, please take this device into consideration. I picked this device up for 200 euros, which is a bargain in my opinion.&lt;/p&gt;
&lt;p&gt;First, take a look. As you can determine from the size of the 5 1/4 inch bay, this device is really small. The fun thing, though, is that behind that door there is room for four 3.5 inch SATA hard drives.&lt;/p&gt;
&lt;p&gt;&lt;img alt="hp1" src="https://louwrentius.com/static/images/hp1.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;So you can put four large SATA disks into this device. It is just ideal as a home NAS, without resorting to expensive QNAP or Synology devices, which may not give you the flexibility you want. &lt;/p&gt;
&lt;p&gt;&lt;img alt="hp2" src="https://louwrentius.com/static/images/hp2.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;The on-board RAID controller only seems to support RAID 0 and RAID 1. If you want to make a NAS out of this device you want to go for RAID 5. So you have two options. &lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Buy an additional hardware RAID controller that supports RAID 5;&lt;/li&gt;
&lt;li&gt;Use Linux or BSD software RAID and don't spend a dime.&lt;/li&gt;
&lt;/ol&gt;
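&lt;p&gt;If you go the software RAID route, option 2 is essentially a one-liner with mdadm (the device names below are illustrative for the four drive bays; check which device your OS disk is first):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# create a 4-disk RAID 5 array from the drive bays
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;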
&lt;h3&gt;Processor&lt;/h3&gt;
&lt;p&gt;It contains the AMD equivalent of Intel's Atom processor, the Turion II Neo N40L dual-core, which runs at 1.5 GHz. This CPU is not fast, but it is energy efficient and it helps keep the device silent and cheap.&lt;/p&gt;
&lt;h3&gt;Memory&lt;/h3&gt;
&lt;p&gt;The device contains just 2 GB of ECC RAM. Sufficient for most tasks, but you can crank it up to 8 GB. The fact that you get ECC RAM in this device is a real plus, making this device extra reliable. &lt;/p&gt;
&lt;h3&gt;Disks&lt;/h3&gt;
&lt;p&gt;By default, a 250 GB disk is included. How they do that for this money is something I don't get. This disk takes up one of the four drive bays.&lt;/p&gt;
&lt;p&gt;Personally, I would not use the 5 1/4 slot for an optical drive (who uses one in a server anyway); instead, I would look into a solution where you can put the stock drive into that space, to make room for an additional disk for file storage. Useful in case you are building a NAS.&lt;/p&gt;
&lt;p&gt;You may even install additional 2.5" disks with solutions like &lt;a href="http://www.sharkoon.com/?q=en/node/1824"&gt;this&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Expansion&lt;/h3&gt;
&lt;p&gt;The microserver has two half-height PCIe slots, one x16 and one x1. It also has an eSATA connector at the back, so you can connect an external disk for backups or something. There are two USB ports at the back and four at the front. I wish they had put four at the back and two at the front.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/hp3g.jpg"&gt;&lt;img alt="details" src="https://louwrentius.com/static/images/hp3.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;See also &lt;a href="http://h18000.www1.hp.com/products/quickspecs/13716_div/13716_div.html"&gt;this page&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Environment&lt;/h3&gt;
&lt;p&gt;The device is very economical; I estimate power consumption at about 25 watts when idle. I measured 35 watts through my UPS, but there were also two external disk drives and a network switch connected to the UPS.&lt;/p&gt;
&lt;p&gt;Noise levels are also excellent. There are just two fans: one very large fan at the back that seems to cool the entire device, and a second fan housed inside the tiny power supply which, although small, makes little noise. When the server is idle, you don't hear it running.&lt;/p&gt;
&lt;h3&gt;Compatibility&lt;/h3&gt;
&lt;p&gt;I was able to install Ubuntu 12.04 LTS out-of-the-box and it is running fine. I didn't test any other operating systems. &lt;/p&gt;
&lt;h3&gt;Reason for purchase&lt;/h3&gt;
&lt;p&gt;I wanted to replace my Linux router (an old Mac Mini) with a device that can house two disk drives, so I could implement RAID 1. I use it as a router/firewall, but I also run a website and some monitoring software on it, which is why I didn't want to buy a regular Linksys or ZyXEL embedded router.&lt;/p&gt;
&lt;p&gt;Although this server has only one network interface, I use VLAN tagging with a VLAN-capable switch, so this is not a problem. Otherwise, I would just add a second Gigabit half-height PCIe NIC.&lt;/p&gt;
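&lt;p&gt;For reference, this is roughly how a tagged VLAN sub-interface is created on Linux with the ip tool (the VLAN ID and address are examples):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# create VLAN 10 on eth0, give it an address and bring it up
ip link add link eth0 name eth0.10 type vlan id 10
ip addr add 192.168.10.1/24 dev eth0.10
ip link set eth0.10 up
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;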
&lt;h3&gt;Final words&lt;/h3&gt;
&lt;p&gt;It's an ideal device for any computer enthusiast who wants more flexibility than a standard NAS or embedded router can offer. It's cheap, small, silent and power efficient. Those HP engineers who created this device should get a thumbs up.&lt;/p&gt;</content><category term="Hardware"></category></entry><entry><title>Fully unattended deployment of Windows clients using limited resources</title><link href="https://louwrentius.com/fully-unattended-deployment-of-windows-clients-using-limited-resources.html" rel="alternate"></link><published>2012-07-07T19:00:00+02:00</published><updated>2012-07-07T19:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2012-07-07:/fully-unattended-deployment-of-windows-clients-using-limited-resources.html</id><summary type="html">&lt;h3&gt;Introduction&lt;/h3&gt;
&lt;p&gt;Anyone who ever installed Windows on a computer by hand must have wished for a solution that automates this task. It's just a lot of waiting and pressing a button now and then. But installing the operating system itself is only the beginning. Once installed, you need to apply service …&lt;/p&gt;</summary><content type="html">&lt;h3&gt;Introduction&lt;/h3&gt;
&lt;p&gt;Anyone who ever installed Windows on a computer by hand must have wished for a solution that automates this task. It's just a lot of waiting and pressing a button now and then. But installing the operating system itself is only the beginning. Once installed, you need to apply service packs or at least a hundred or more security updates. When finished, you need to install all additional software, like an office suite, PDF reader, anti-virus software and the like.&lt;/p&gt;
&lt;p&gt;So you need to install:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;the operating system&lt;/li&gt;
&lt;li&gt;applications &lt;/li&gt;
&lt;li&gt;security updates&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If you do this all by hand, it will probably take at least half a day, if not longer. This is a major problem, because sooner or later you may have to hire somebody full time just to do computer deployments. Expect a high job turnover rate. You definitely want to automate this task: it saves money on extra sysadmins, but more importantly, it improves &lt;em&gt;quality&lt;/em&gt;. &lt;/p&gt;
&lt;p&gt;Even if you have to install one computer every week, you must automate this process for the sole reason that if you don't, no two deployed computers are the same. People make mistakes, especially with boring, repetitive tasks. So automation improves quality and reduces the workload significantly. &lt;/p&gt;
&lt;p&gt;If you don't deploy your end-user computers through some kind of automation, you need to stop what you are doing right now and build such a solution. It's fundamental to provide good quality service to your users. &lt;/p&gt;
&lt;p&gt;It must be fully unattended, or as unattended as possible. You may have to press a button to initiate the process at the start, but that must be all that is required to deploy a system. If, during deployment, you need to touch the computer in order for it to continue deploying, you have a bug that needs to be fixed ASAP.&lt;/p&gt;
&lt;p&gt;So, in this post I want to show you that with minimal resources, you can create a fully unattended solution for Windows desktop systems. There are probably better ways to do this, but for me, this was enough. &lt;/p&gt;
&lt;h3&gt;Imaging versus automated deployment&lt;/h3&gt;
&lt;p&gt;It's very simple. Do not image. Do not use products like Norton Ghost or Clonezilla for system deployment. Imaging is not flexible. For every change, you need to create a new image. For every hardware model, you need to create a new image. Every program update requires a new image. Instead of installing computers by hand, you are maintaining images. It does not scale.&lt;/p&gt;
&lt;p&gt;Automated installations on the other hand do scale. They are dynamic. They just use whatever drivers they need during installation, as long as they are available. Just updating the installer of an application is sufficient to make sure that future deployments are up-to-date. Flexibility is key.&lt;/p&gt;
&lt;h3&gt;Solution overview&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Clients use &lt;a href="http://en.wikipedia.org/wiki/Preboot_Execution_Environment"&gt;PXE&lt;/a&gt; to boot from the network. They boot a special Windows Embedded kernel that bootstraps the Windows installation process. &lt;/li&gt;
&lt;li&gt;The operating system and drivers are installed.&lt;/li&gt;
&lt;li&gt;All company software is installed.&lt;/li&gt;
&lt;li&gt;All security patches are installed.&lt;/li&gt;
&lt;li&gt;When ready, an email is sent to the sysadmins.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;You will need:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;A DHCP server&lt;/li&gt;
&lt;li&gt;A WDS server&lt;/li&gt;
&lt;li&gt;A KMS server and valid KMS licence&lt;/li&gt;
&lt;li&gt;Valid Windows 7 ISO for KMS installation&lt;/li&gt;
&lt;li&gt;An unattended configuration created with WAIK&lt;/li&gt;
&lt;li&gt;Driver packs for the various desktop and laptop models&lt;/li&gt;
&lt;li&gt;A domain account dedicated for deployment&lt;/li&gt;
&lt;li&gt;A list + executables of all software required for the client&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;About KMS and Windows licences&lt;/h3&gt;
&lt;p&gt;In a larger environment, with 25+ desktops and laptops, it becomes too cumbersome to type in the product licence key and activate the systems by hand. This does not scale. You need a Volume Licence agreement for Windows 7 or higher in order to be able to use a Key Management Server and a special ISO of Windows 7 that does not require a product key. Learn more about this in &lt;a href="https://louwrentius.com/blog/2012/06/understanding-windows-kms-and-mak-volume-license-activation/"&gt;this blogpost&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Windows Deployment services&lt;/h3&gt;
&lt;p&gt;The basis for automated deployment is &lt;a href="http://en.wikipedia.org/wiki/Windows_Deployment_Services"&gt;Windows Deployment Services&lt;/a&gt;. This software, made available for free by Microsoft, allows clients to PXE boot and perform unattended operating system installations.&lt;/p&gt;
&lt;p&gt;Unattended operating system installations are guided by XML files that describe the configuration for the operating system. Such a configuration file is authored with the &lt;a href="http://www.microsoft.com/en-us/download/details.aspx?id=5753"&gt;Windows Automated Installation Kit&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;WDS uses two images: a boot image and an install image. Because computers need drivers, you need to download and inject the drivers into the boot image. All major vendors supply complete driver packages for you to download. Just download, extract and import. Create driver groups for every model to order your drivers.&lt;/p&gt;
&lt;p&gt;You may choose to install all drivers in one image, but that image can grow large and lengthen the installation time. To resolve this, create separate boot images for different vendors and differentiate between model lines. This is not much work and it keeps the boot images small, but it is not required.&lt;/p&gt;
&lt;h3&gt;Windows Automated Installation Kit&lt;/h3&gt;
&lt;p&gt;You need the WAIK to author the XML file used by WDS to configure the unattended installation. You must specify hard disk partitioning, some default settings and the like. This is also where you configure the command to run when the operating system installation has finished. This will start the software installation phase.&lt;/p&gt;
&lt;h3&gt;Automated silent software deployment&lt;/h3&gt;
&lt;p&gt;For software installation, I just go back to my MS-DOS 4.11 days and use a simple batch script that installs all software.&lt;/p&gt;
&lt;p&gt;Every product, such as Adobe Reader or Java, has an installation batch file. There is one main batch file that calls each program install batch file to install it and log the results for debugging.&lt;/p&gt;
&lt;p&gt;It is that simple. And it works perfectly. The most important task is to find out for each product how you can install it silently, without user intervention.
Fortunately, almost all products provide command line arguments for unattended installation. &lt;/p&gt;
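&lt;p&gt;To give an idea, a per-product installation batch file could look something like this. This is a hypothetical sketch: the share path, package name and log file location are placeholders, but the msiexec switches for silent MSI installation are standard.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;@echo off
REM Install one product silently and append the result to a central log.
set LOGFILE=C:\deploy\install.log
msiexec /i \\deployserver\software\product.msi /qn /norestart
if %ERRORLEVEL% EQU 0 (
    echo %DATE% %TIME% product.msi OK &amp;gt;&amp;gt; %LOGFILE%
) else (
    echo %DATE% %TIME% product.msi FAILED with code %ERRORLEVEL% &amp;gt;&amp;gt; %LOGFILE%
)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;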
&lt;p&gt;Software is installed by a dedicated domain user account that logs on to the system through autologon. The account is unprivileged within the domain, but is temporarily granted local administrative privileges. Once the installation is complete, the local admin privileges are revoked.&lt;/p&gt;
&lt;h3&gt;Installing all security updates&lt;/h3&gt;
&lt;p&gt;This is the hard part. There are several problems. First, after you install all updates, more updates seem to be available after the next reboot. Furthermore, on Windows 7, a &lt;a href="http://marc.durdin.net/2012/02/further-analysis-on-trustedinstallerexe.html"&gt;memory leak&lt;/a&gt; causes the installation process to take ages. &lt;/p&gt;
&lt;p&gt;The solution is to install smaller batches of patches, such as 30 or 40 at a time. You can use a &lt;a href="http://msdn.microsoft.com/en-us/library/windows/desktop/aa387102(v=vs.85).aspx"&gt;script&lt;/a&gt; supplied by Microsoft for that. This script must be changed so that it does not install all patches at once, but a fixed number at a time.&lt;/p&gt;
&lt;p&gt;So you need several reboots to install all patches, and you run the VBS update script several times. The WAIK provides an option for 'autologon', so you can have a user account log on automatically a fixed number of times, say five. After that, no autologon is performed ever again.&lt;/p&gt;
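&lt;p&gt;For reference, the same autologon behaviour can also be configured directly through the well-known Winlogon registry values. A sketch using reg add; the account name 'deploy' is a placeholder:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;REM Let the deployment account log on automatically five times.
reg add &amp;quot;HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon&amp;quot; /v AutoAdminLogon /t REG_SZ /d 1 /f
reg add &amp;quot;HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon&amp;quot; /v DefaultUserName /t REG_SZ /d deploy /f
reg add &amp;quot;HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon&amp;quot; /v AutoLogonCount /t REG_DWORD /d 5 /f
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;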
&lt;p&gt;So you place a special batch file in the startup folder of the autologon user that triggers the Windows update process every time the autologon is performed.
This is the last step of the installation.&lt;/p&gt;
&lt;p&gt;After five autologons, the system boots to the logon screen and deployment is done.&lt;/p&gt;
&lt;h3&gt;Additional resources&lt;/h3&gt;
&lt;p&gt;Large organisations may use &lt;a href="http://www.microsoft.com/nl-nl/server-cloud/system-center/operations-manager-2012.aspx"&gt;Microsoft System Center Operations Manager&lt;/a&gt;, but I assume that such a solution has not been set up and that you are in an environment without any existing solution that may help you out.&lt;/p&gt;
&lt;p&gt;I would also investigate the &lt;a href="http://www.microsoft.com/en-us/download/details.aspx?id=25175"&gt;Microsoft Deployment Toolkit 2012&lt;/a&gt;. Instead of tinkering with batch files and VBS scripts, this may also help you. However, it seems to focus on creating images, or automating the task of creating images, rather than just automating the installation of a client. &lt;/p&gt;
&lt;h3&gt;Final thoughts&lt;/h3&gt;
&lt;p&gt;Please note that I had to research this solution within a few weeks, with lots of other things to do. It was just one project among many others. There may be better solutions to automate system deployments. Maybe the MDT is a better approach, but I haven't tested it (yet). The current setup is sufficient for now and it frees us to start other much-needed projects.&lt;/p&gt;</content><category term="Uncategorized"></category></entry><entry><title>Understanding Windows KMS and MAK volume license activation</title><link href="https://louwrentius.com/understanding-windows-kms-and-mak-volume-license-activation.html" rel="alternate"></link><published>2012-06-09T21:00:00+02:00</published><updated>2012-06-09T21:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2012-06-09:/understanding-windows-kms-and-mak-volume-license-activation.html</id><summary type="html">&lt;h3&gt;Introduction&lt;/h3&gt;
&lt;p&gt;If you have to administer a large number of PCs running Windows, you will end up creating an automated deployment platform for your Windows clients. You may implement something like &lt;a href="http://technet.microsoft.com/en-us/library/cc770628(v=ws.10).aspx"&gt;Windows Deployment Services&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;I used WDS to create a fully automated installation of PCs. WDS can also be used …&lt;/p&gt;</summary><content type="html">&lt;h3&gt;Introduction&lt;/h3&gt;
&lt;p&gt;If you have to administer a large number of PCs running Windows, you will end up creating an automated deployment platform for your Windows clients. You may implement something like &lt;a href="http://technet.microsoft.com/en-us/library/cc770628(v=ws.10).aspx"&gt;Windows Deployment Services&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;I used WDS to create a fully automated installation of PCs. WDS can also be used for creating images, but using images doesn't scale as you need too much manual intervention with the devices themselves and you need to update images constantly.&lt;/p&gt;
&lt;p&gt;With WDS and some driver packs I can support as many different computer brands and models as I want with a single vanilla Windows 7 base image. All customization and automation is done with answer files using the &lt;a href="http://en.wikipedia.org/wiki/Windows_Automated_Installation_Kit"&gt;Windows Automated Installation Kit&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;When creating an automated deployment environment, one thing you definitely don't want to be doing is entering each individual Windows product key as found on the sticker somewhere on the chassis. You want a single key, embedded in the deployment image or script, or some other solution. Your goal must be to do away with manual product key input and activation.&lt;/p&gt;
&lt;p&gt;This is not a problem, but here we have to introduce the topic of licences, especially client licences such as those for Windows 7. There are only two flavors of Windows licences:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Retail - this licence is the most expensive but allows you to transfer it from one computer to another.&lt;/li&gt;
&lt;li&gt;OEM - this licence costs less but is tied to that particular computer.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The important thing for a system administrator to know is this: when buying OEM, you do &lt;em&gt;not&lt;/em&gt; have the right to create disk images or do something similar with these computers. You cannot use Windows Deployment Services, and you cannot use cloning tools or other such solutions.&lt;/p&gt;
&lt;h3&gt;Volume licensing&lt;/h3&gt;
&lt;p&gt;Now it is time to talk about volume licensing. A volume licence is an &lt;em&gt;upgrade&lt;/em&gt; of a Retail or OEM license. So one thing is sure: you must order every computer with an OEM Windows licence, regardless of your plans. With the volume licence, which you have to buy separately, you gain 'reimaging rights'.&lt;/p&gt;
&lt;p&gt;Now comes the fun part. You only need one (1) Volume Licence for a specific product to be eligible to image or automatically deploy &lt;em&gt;all&lt;/em&gt; PCs running that particular operating system (32 bit or 64 bit doesn't matter). &lt;/p&gt;
&lt;h3&gt;KMS or MAK activation&lt;/h3&gt;
&lt;p&gt;With a volume licence, clients don't need to activate with Microsoft through the internet. For larger organisations, that would cause too much internet traffic. Instead, you use a local activation service within your network. You can either deploy a &lt;a href="http://technet.microsoft.com/library/ff793434.aspx"&gt;KMS (Key Management Service)&lt;/a&gt; or use the &lt;a href="http://technet.microsoft.com/en-US/library/ff793435.aspx"&gt;Volume Activation Management Tool (VAMT)&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Most people will want the KMS service, but a KMS server only starts activating clients once 25 or more PCs have requested activation. If you have fewer clients than that, you may resort to MAK activation.&lt;/p&gt;
&lt;p&gt;When choosing KMS activation, you install the KMS service on one of your Windows servers and that host will then act as an activation server within your organisation. Systems activated through the KMS must periodically revalidate themselves (every 180 days, or about every six months). But how do the clients know that they should activate against your KMS? And which product key do you use?&lt;/p&gt;
&lt;p&gt;If you buy a volume licence, you will get access to a special ISO image of Windows 7, Vista Business or XP Professional. You also gain access to a special product key, a KMS product key. (Please note that you must buy a volume licence for each operating system product version.)&lt;/p&gt;
&lt;p&gt;You use this special KMS product key to activate the KMS server. This happens only once: you activate the KMS server with Microsoft, and after that, neither the clients nor the KMS service communicate with Microsoft.&lt;/p&gt;
&lt;p&gt;That special ISO image you got contains a special Windows version that &lt;em&gt;does not require a product key&lt;/em&gt;. Once a client is installed, it searches your network for a KMS server through DNS and tries to activate against it. Once activated, clients stay activated as long as they contact your KMS service once every 180 days.&lt;/p&gt;
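&lt;p&gt;As an illustration, KMS discovery and activation can be inspected with standard Windows tools. Clients locate the KMS host through a DNS SRV record named _vlmcs._tcp; the domain name below is a placeholder:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;REM Show the DNS SRV record that clients use to locate the KMS host:
nslookup -type=srv _vlmcs._tcp.example.com

REM On a client: trigger activation and display detailed licence status:
slmgr /ato
slmgr /dlv
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;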
&lt;p&gt;If you have fewer than 25 PCs, you will use &lt;a href="http://technet.microsoft.com/en-US/library/ff793435.aspx"&gt;MAK activation and the VAMT tool&lt;/a&gt;. Clients can activate either through Microsoft directly or through the VAMT tool. The VAMT tool collects activation requests within your network like a KMS; however, it does contact Microsoft to validate those activations, and there is a limited number of activations you are entitled to. The VAMT tool can cache activation requests, so you can redeploy or re-image systems and reactivate them without exhausting your activation limit. &lt;/p&gt;
&lt;p&gt;I hope this information was useful to you and if you've discovered a mistake, please comment.&lt;/p&gt;</content><category term="Uncategorized"></category></entry><entry><title>Improving web application security by implementing database security</title><link href="https://louwrentius.com/improving-web-application-security-by-implementing-database-security.html" rel="alternate"></link><published>2012-05-18T01:00:00+02:00</published><updated>2012-05-18T01:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2012-05-18:/improving-web-application-security-by-implementing-database-security.html</id><summary type="html">&lt;p&gt;Security is about defense-in-depth. It boggles my mind why it is so difficult to implement defense-in-depth security in web applications. 99.9% of applications use a single database account, with root-like privileges. Easiest for the developer of course, and the database is just a data store. It is not understood …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Security is about defense-in-depth. It boggles my mind why it is so difficult to implement defense-in-depth security in web applications. 99.9% of applications use a single database account, with root-like privileges. Easiest for the developer of course, and the database is just a data store. It is not understood for what it really is: the last defensive layer you have before an attacker compromises your data. Use it well. &lt;/p&gt;
&lt;p&gt;For example, you can use your database to protect you against high-impact attacks such as SQL-injection.&lt;/p&gt;
&lt;p&gt;I created a presentation about this topic a while ago. You can download it here: &lt;/p&gt;
&lt;p&gt;&lt;a href="http://mini.louwrentius.com/static/files/designingsecureapplications.pdf"&gt;http://mini.louwrentius.com/static/files/designingsecureapplications.pdf&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;A short summary of the points made: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Truly understand your application and its requirements. &lt;/li&gt;
&lt;li&gt;Do not create a monolithic application, create separate applications. For example, at least separate front office and back office. &lt;/li&gt;
&lt;li&gt;Run those applications under different operating system users or ideally on different servers, residing in different network segments.&lt;/li&gt;
&lt;li&gt;It suddenly makes sense to put your database server in a separate secure network segment as opposed to running it on the same box as the application server.&lt;/li&gt;
&lt;li&gt;Do not use a single database account with root-like privileges.&lt;/li&gt;
&lt;li&gt;Create separate database accounts for separate application components. Only assign those privileges required for that application. White-list privileges within the database. This is key.&lt;/li&gt;
&lt;li&gt;Understand that for end-user authentication, privileges like 'select username,password from user' are not required!&lt;/li&gt;
&lt;li&gt;Use stored procedures and functions wisely. By only providing access to functions, views and stored procedures, while preventing access to tables, you can significantly reduce the impact of SQL-injection or other application level security breaches. &lt;/li&gt;
&lt;li&gt;In any case, understand that an attacker can never obtain more database privileges than the database account used. Even if the entire application server is compromised. This is especially important for your internet-facing applications.&lt;/li&gt;
&lt;li&gt;Use your database as an extra layer of defense.&lt;/li&gt;
&lt;/ul&gt;</content><category term="Security"></category></entry><entry><title>Why security is all about defense in depth</title><link href="https://louwrentius.com/why-security-is-all-about-defense-in-depth.html" rel="alternate"></link><published>2012-03-24T00:00:00+01:00</published><updated>2012-03-24T00:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2012-03-24:/why-security-is-all-about-defense-in-depth.html</id><summary type="html">&lt;p&gt;Many people assume that if you regularly update your computer, you are safe from hackers. But nothing could be further from the truth. Keeping your systems up-to-date only protects you against exploits for publicly known vulnerabilities.&lt;/p&gt;
&lt;p&gt;Your systems are still not protected against privately known vulnerabilities and if hackers have …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Many people assume that if you regularly update your computer, you are safe from hackers. But nothing could be further from the truth. Keeping your systems up-to-date only protects you against exploits for publicly known vulnerabilities.&lt;/p&gt;
&lt;p&gt;Your systems are still not protected against privately known vulnerabilities, and if hackers have zero-day exploits for such vulnerabilities, you clearly have a false sense of security.&lt;/p&gt;
&lt;p&gt;There couldn't be a better example than a high-risk vulnerability &lt;a href="http://technet.microsoft.com/en-us/security/bulletin/ms12-020"&gt;MS12-020&lt;/a&gt; regarding the Microsoft Remote Desktop Protocol interface, as present on TCP-port 3389. Any unpatched Microsoft Windows-based server or desktop system can be compromised through this vulnerability. If the system is vulnerable and TCP-port 3389 is accessible, it is over. Your data is compromised.&lt;/p&gt;
&lt;p&gt;Now, how many people knew about this vulnerability and for how long? &lt;/p&gt;
&lt;p&gt;As we speak, someone may be reading these very words on your computer, just remotely, because of an undisclosed, unknown vulnerability. That sounds like paranoia, but it isn't. &lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/zeroday.png"&gt;&lt;img alt="small" src="https://louwrentius.com/static/images/zeroday-small.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Zero-day exploit market&lt;/h3&gt;
&lt;p&gt;There is a whole &lt;a href="http://www.forbes.com/sites/andygreenberg/2012/03/23/shopping-for-zero-days-an-price-list-for-hackers-secret-software-exploits/"&gt;zero-day exploit market&lt;/a&gt;. Exploits are sold at enormous prices, as high as $100,000 or more. Only those who have the means (money) and a need for them will pay such prices. Buyers often tend to be government agencies and such.&lt;/p&gt;
&lt;p&gt;There is no doubt in my mind that the computer I'm currently working on is affected by high-risk vulnerabilities I don't know of. It is very likely that for some of them, exploits exist. But look at the risk: who is going to spend a $100,000 exploit on me? But is the intellectual property of your company worth that much? That sounds way more realistic already, doesn't it?&lt;/p&gt;
&lt;p&gt;You may hope that zero-day exploits are sold only to trustworthy governments, but the market is free. Anyone with sufficient means can buy them. Some sellers may scrutinize to whom they sell, but others?&lt;/p&gt;
&lt;p&gt;This whole zero-day exploit market is a problem. Exploit sellers have nothing to gain and everything to lose from public disclosure of a vulnerability. As long as it stays undisclosed, it can be used by buyers. All parties involved in this market benefit from keeping systems insecure, from keeping systems unpatched.&lt;/p&gt;
&lt;p&gt;So instead of informing the vendor of a security vulnerability so the public can be protected, knowledge of the vulnerability is sold to the highest bidder who then does who knows what with it.&lt;/p&gt;
&lt;p&gt;For most organisations and people, the upside is that nobody will spend $100,000 on an exploit against you if you're not worth it. The reason is that every time an exploit is used, it can be discovered, rendering the exploit useless once a security patch is released. &lt;/p&gt;
&lt;h3&gt;Protecting against zero-day exploits&lt;/h3&gt;
&lt;p&gt;The question is then what to do against this kind of threat. What can you do to protect yourself against the risk of zero-day exploits if you perceive that risk as realistic for your organisation?&lt;/p&gt;
&lt;p&gt;The answer is a security strategy of defense in depth. It is not a solution that ends all problems, but it decreases the risk that your organisation gets compromised. It is about trying to diminish risk to acceptable levels.&lt;/p&gt;
&lt;p&gt;Assume that you will get compromised. Then, think about what can be done to reduce the impact of the hack. Will only one server get hacked, or the entire internal company network? &lt;/p&gt;
&lt;p&gt;Defense in depth is the principle that you do not rely on one single security measure to protect systems and services from a compromise. There are many ways to implement such a strategy and I will name a few. &lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Only expose those services towards the internet that are required for production.&lt;/li&gt;
&lt;li&gt;Make sure you have proper network segmentation in place, systems should not provide a stepping stone for an attacker to enter your internal company network.&lt;/li&gt;
&lt;li&gt;Never expose management interfaces such as RDP towards the internet directly, use an additional security layer (white list IP address or use VPN).&lt;/li&gt;
&lt;li&gt;Establish an emergency patch-policy to make sure that all systems are patched outside regular maintenance windows if high-risk vulnerabilities are reported.&lt;/li&gt;
&lt;li&gt;Monitor the heck out of your environment. Carefully try to log and alert to those events that may indicate a security breach.&lt;/li&gt;
&lt;li&gt;Audit your systems, regularly check for misconfigurations and resolve them.&lt;/li&gt;
&lt;li&gt;Select hardware and software vendors based on their security track record.&lt;/li&gt;
&lt;li&gt;Use different vendors and brands for different defensive layers. &lt;/li&gt;
&lt;li&gt;Consider making the internet off-limits for end-user systems processing sensitive information.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Software is vulnerable so prepare for the worst.&lt;/p&gt;</content><category term="Security"></category></entry><entry><title>Example of a home networking setup with VLANs</title><link href="https://louwrentius.com/example-of-a-home-networking-setup-with-vlans.html" rel="alternate"></link><published>2012-02-05T09:00:00+01:00</published><updated>2012-02-05T09:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2012-02-05:/example-of-a-home-networking-setup-with-vlans.html</id><summary type="html">&lt;p&gt;Updated October 24, 2012, see below.&lt;/p&gt;
&lt;p&gt;This post is a description of my home network setup based on gigabit ethernet. I did a non-standard trick with VLANs that may also be of interest to other people. I'm going to start with a diagram of the network. Just take a look …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Updated October 24, 2012, see below.&lt;/p&gt;
&lt;p&gt;This post is a description of my home network setup based on gigabit ethernet. I did a non-standard trick with VLANs that may also be of interest to other people. I'm going to start with a diagram of the network. Just take a look (click to enlarge).&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/home-network.png"&gt;&lt;img alt="home network" src="https://louwrentius.com/static/images/home-network-small.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I have now replaced my Mac Mini with an HP N40L router running Ubuntu 12.04 LTS. This server is now placed in the basement. The managed Netgear switch has been swapped with the AirPort Extreme.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/homenetwork.png"&gt;&lt;img alt="home network" src="https://louwrentius.com/static/images/homenetwork-small.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Design&lt;/h3&gt;
&lt;p&gt;I have a Mac mini running Linux that acts as my internet router. The closet that houses the cable modem is not a friendly environment for such a device and there is not a good location for it. The closet is also outside of my house, behind a door not too well protected. So this is why I keep my router inside my house. &lt;/p&gt;
&lt;p&gt;From this closet, one UTP cable terminates in the living room, the other in the basement. This configuration has a very big problem. How do I run two different networks over one wire?&lt;/p&gt;
&lt;p&gt;I have to connect my iMac to my 'internal' home network. However, the Mac mini must be connected to both the internet network segment (connected to the cable modem) and the home network. All through a single UTP cable. &lt;/p&gt;
&lt;p&gt;Therefore I use VLANs. I transport both the internet network and the local home network through one cable. VLAN 10 is for internet, VLAN 20 for my local home network. For this all to work, you need managed switches that support 802.1q.&lt;/p&gt;
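&lt;p&gt;On the Linux router itself, the two tagged VLANs can be created on the single physical interface. A sketch using iproute2; the interface name and IP addresses are just examples:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;# Create 802.1q sub-interfaces on eth0 for both VLANs
ip link add link eth0 name eth0.10 type vlan id 10   # VLAN 10: internet
ip link add link eth0 name eth0.20 type vlan id 20   # VLAN 20: home network
ip link set eth0.10 up
ip link set eth0.20 up
ip addr add 192.168.1.2/24 dev eth0.10
ip addr add 10.0.0.1/24 dev eth0.20
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;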
&lt;h3&gt;How traffic flows&lt;/h3&gt;
&lt;p&gt;So let's say that the server is accessing the internet to obtain the latest Linux security updates. How does this network traffic flow through the infrastructure (click to enlarge)?&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/home-network-traffic.png"&gt;&lt;img alt="network flow" src="https://louwrentius.com/static/images/home-network-traffic-small.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;All internet traffic must flow through the router. Thus, even if the traffic from the basement travels through the switch next to the cable modem, it must first travel to the router in the living room. There the router decides if the traffic is permitted to go out to the internet and thus enter the internet VLAN. &lt;/p&gt;
&lt;h3&gt;Pros and cons&lt;/h3&gt;
&lt;p&gt;Pros: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Just a single cable to the living room&lt;/li&gt;
&lt;li&gt;No extra USB-based ethernet adapters required for the Mac mini&lt;/li&gt;
&lt;li&gt;Mac mini resides in a safe and computer-friendly environment&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Cons:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Managed switches supporting VLANs are relatively expensive&lt;/li&gt;
&lt;/ul&gt;</content><category term="Networking"></category></entry><entry><title>Linux Iptables Firewall Script released on Google code</title><link href="https://louwrentius.com/linux-iptables-firewall-script-released-on-google-code.html" rel="alternate"></link><published>2012-01-08T20:00:00+01:00</published><updated>2012-01-08T20:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2012-01-08:/linux-iptables-firewall-script-released-on-google-code.html</id><summary type="html">&lt;p&gt;I have released &lt;a href="http://code.google.com/p/lifs/"&gt;LIFS, the Linux Iptables Firewall Script&lt;/a&gt;. This script allows you to setup a firewall within minutes. It is easy to use, yet very powerful. It uses Iptables and even improves upon some limitations of Iptables.&lt;/p&gt;
&lt;p&gt;Every person who has to maintain some kind of Iptables-based firewall should …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I have released &lt;a href="http://code.google.com/p/lifs/"&gt;LIFS, the Linux Iptables Firewall Script&lt;/a&gt;. This script allows you to set up a firewall within minutes. It is easy to use, yet very powerful. It uses Iptables and even improves upon some limitations of Iptables.&lt;/p&gt;
&lt;p&gt;Every person who has to maintain some kind of Iptables-based firewall should really look into LIFS. It will make managing your firewall much more convenient.&lt;/p&gt;
&lt;p&gt;For more advanced purposes, LIFS allows you to create object groups. These are groups of individual hosts, networks or services (tcp/udp). &lt;/p&gt;
&lt;p&gt;Look at this example of object groups in action. Read and understand.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;HTTP_SERVICES=&amp;quot;
    80/tcp
   443/tcp
&amp;quot;

WEB_SERVER_1=192.168.0.10
WEB_SERVER_2=192.168.0.11

WEB_SERVERS=&amp;quot;
    $WEB_SERVER_1
    $WEB_SERVER_2
&amp;quot;

allow_in any &amp;quot;$WEB_SERVERS&amp;quot; any &amp;quot;$HTTP_SERVICES&amp;quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;As you can see, a single firewall rule in fact creates four rules, one for each combination of host and port. This functionality can be found in commercial firewalls, but it is not built into Iptables. LIFS fixes this.&lt;/p&gt;
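&lt;p&gt;The expansion itself is easy to sketch. The following is a hypothetical illustration of the idea behind object groups, not LIFS's actual code: two nested loops turn every (host, service) pair into one Iptables rule.&lt;/p&gt;

```shell
# Illustrative sketch only; LIFS's real implementation may differ.
HTTP_SERVICES="80/tcp 443/tcp"
WEB_SERVERS="192.168.0.10 192.168.0.11"

for host in $WEB_SERVERS; do
    for service in $HTTP_SERVICES; do
        port=${service%/*}    # "80/tcp" -> "80"
        proto=${service#*/}   # "80/tcp" -> "tcp"
        # Print the generated rule instead of applying it:
        echo "iptables -A INPUT -p $proto -d $host --dport $port -j ACCEPT"
    done
done
```

&lt;p&gt;With two hosts and two services, the loops emit four rules, matching the expansion described above.&lt;/p&gt;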
&lt;p&gt;LIFS is a continuation of &lt;a href="http://code.google.com/p/lfs/downloads/list"&gt;LFS&lt;/a&gt;, which has been discontinued.&lt;/p&gt;</content><category term="Networking"></category></entry><entry><title>Neato XV-15 / XV-11 Robotic Vacuum cleaner review</title><link href="https://louwrentius.com/neato-xv-15-xv-11-robotic-vacuum-cleaner-review.html" rel="alternate"></link><published>2011-12-25T09:00:00+01:00</published><updated>2011-12-25T09:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-12-25:/neato-xv-15-xv-11-robotic-vacuum-cleaner-review.html</id><summary type="html">&lt;p&gt;Update 18 February 2012&lt;/p&gt;
&lt;p&gt;There is one problem. When the robot is not connected to the charger, the batteries are depleted very fast. Even if the batteries are not entirely depleted and the robot can still display the menu, the clock loses it's time. Every time the robot gets a …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Update 18 February 2012&lt;/p&gt;
&lt;p&gt;There is one problem. When the robot is not connected to the charger, the batteries are depleted very fast. Even if the batteries are not entirely depleted and the robot can still display the menu, the clock loses its time. Every time the charge drops too low, you have to set the date and time again, which is a bit of a hassle. This does not happen often, though. The robot seems to be operating consistently. &lt;/p&gt;
&lt;p&gt;Update 1 March 2012&lt;/p&gt;
&lt;p&gt;It seems that the batteries have degraded so badly that the device cannot clean my living room without recharging three times. I have to return the product for repair.
I had the device scheduled to clean every other day, about 4x per week. &lt;/p&gt;
&lt;p&gt;Update 20 March 2012&lt;/p&gt;
&lt;p&gt;I received a brand new device that is now charging. I hope this one will last longer.&lt;/p&gt;
&lt;p&gt;Update 23 March 2012&lt;/p&gt;
&lt;p&gt;It seems that the brand new robot is also flawed; it just goes nuts. Its software seems to be up to date, so I have to return this one as well. (read below!)&lt;/p&gt;
&lt;p&gt;Update 31 March 2012&lt;/p&gt;
&lt;p&gt;I did not return this device and ran some additional cleaning cycles. All cycles were performed without problems. The device choked on some cloth and some cables I forgot to clean up, but it does seem to operate properly. So I will keep it.&lt;/p&gt;
&lt;p&gt;Update 4 May 2012&lt;/p&gt;
&lt;p&gt;Still works like a charm. I'm currently very happy with it. If the batteries hold up, this device is really worth the money.&lt;/p&gt;
&lt;p&gt;Update 8 June 2012&lt;/p&gt;
&lt;p&gt;I had some critical battery errors and contacted support. They asked me to check whether the batteries were connected properly, so I pushed on the connectors to make sure they were firmly seated. After that, I didn't see any more battery errors and the device is still cleaning like a charm.&lt;/p&gt;
&lt;p&gt;Original article:&lt;/p&gt;
&lt;p&gt;So I bought a robotic vacuum cleaner. The first question is 'why would you spend some serious money on such a device? On a toy?'. I have some rationalisations for buying this device, but honestly, one reason is that sometimes I just like to buy a new toy. Something to play with. Excuse me for being human. In this blog post I want to explain to you why I bought a Neato XV-15 and not another product.&lt;/p&gt;
&lt;p&gt;Now I did say that I have some rationalisations, so let's start. One rationalisation is that I hate vacuum cleaning. Since I have two cats, vacuum cleaning once a week may not be enough. And I'm not going to clean more frequently. So you can accept it or if you can spare a little dough, buy a robotic vacuum cleaner that cleans your house when you're not at home.&lt;/p&gt;
&lt;p&gt;So let's introduce the Neato XV-15.&lt;/p&gt;
&lt;h3&gt;The Neato XV-15 Vacuum cleaning robot&lt;/h3&gt;
&lt;p&gt;The XV-15 robot is made by &lt;a href="http://www.neatorobotics.com/"&gt;Neato Robotics&lt;/a&gt;, a young startup that seems to be started purely for this device. The company started with the XV-11 for the US market, and the XV-15 is identical except that it is meant for the European market. A new &lt;a href="http://www.engadget.com/2011/10/11/neatos-xv-12-robot-vacuum-cleans-your-floors-dressed-in-white-f/"&gt;XV-12&lt;/a&gt; has also been announced, which seems to be identical to the other two machines, except for the color (white).&lt;/p&gt;
&lt;p&gt;The robot automatically vacuums your house while you're away or minding your own business. It can't do anything else, but not having to vacuum all the time is kinda cool, right?&lt;/p&gt;
&lt;p&gt;I bought the XV-15 in The Netherlands for 500 euros. The XV-11 can be had for around $400 excluding taxes or maybe even for less at Amazon. Not very cheap, but competitively priced compared to other robots on the market.&lt;/p&gt;
&lt;p&gt;&lt;img alt="Neato XV-15" src="https://louwrentius.com/static/images/xv-15-01.png" /&gt;&lt;/p&gt;
&lt;h3&gt;How the robot works&lt;/h3&gt;
&lt;p&gt;The XV-15 has a rubber brush at the front that rotates quite fast and that brush scoops up the dirt. Just behind the rubber brush, a vacuum mouth is present. Anything sucked up through that mouth enters the dustbin. The actual vacuum motor is at the back of the dustbin, protected by the dust filter of the dustbin. The XV-15 is a true vacuum and Neato claims that vacuuming power is way stronger than any other robot on the market. Based on the noise, that may be true.&lt;/p&gt;
&lt;p&gt;On top of the XV-15 you can find an LCD screen for configuring the robot and the turret housing its special secret weapon: laser sight. This is the cool part. &lt;/p&gt;
&lt;p&gt;The XV-15 has a laser system mounted on top that allows the robot to locate objects and walls. It is capable of creating a map of its surroundings. Anything the laser can 'see' will be avoided. The robot will not bump into any objects it can see. This is in stark contrast to products like the iRobot Roomba, which just bumps into everything. The XV-15 does have a front bumper though, because anything below the laser turret cannot be seen. Thus the robot does bump into things occasionally, but it tries hard not to.&lt;/p&gt;
&lt;p&gt;The laser system is not just for preventing collisions with furniture. Being able to generate a map of your house allows the robot to clean your house in an efficient manner. Robots like the Roomba just randomly zigzag through your house; if you do that long enough, chances are high that most of your house gets cleaned. &lt;/p&gt;
&lt;p&gt;The XV-15 only covers each spot once, and thus is able to clean your house much faster. It first cleans the perimeter of a room, hugging the walls. It then cleans the room in straight lines, like a swimmer in a pool. It remembers where it has cleaned or not and will come back later to a spot if something (like humans or pets) was occupying an area that can now be cleaned. &lt;/p&gt;
&lt;p&gt;My living room, kitchen and entrance are cleaned in 40 minutes. An area of 40 square meters, or about 430 square feet.&lt;/p&gt;
&lt;object width="470" height="315"&gt;&lt;param name="movie" value="http://www.youtube.com/v/p2jvRKzQP0M?version=3&amp;amp;hl=en_US"&gt;&lt;/param&gt;&lt;param name="allowFullScreen" value="true"&gt;&lt;/param&gt;&lt;param name="allowscriptaccess" value="always"&gt;&lt;/param&gt;&lt;embed src="http://www.youtube.com/v/p2jvRKzQP0M?version=3&amp;amp;hl=en_US" type="application/x-shockwave-flash" width="470" height="315" allowscriptaccess="always" allowfullscreen="true"&gt;&lt;/embed&gt;&lt;/object&gt;

&lt;p&gt;When you see the XV-15 doing its job, you may find yourself staring at it longer than you intended. It's just fascinating to see the device effortlessly navigating around your house. And it doesn't need battery-operated 'lighthouses' like the Roombas do. It is truly autonomous, except for emptying the dustbin.&lt;/p&gt;
&lt;p&gt;The XV-15 seems to divide the rooms it detects into parts and cleans those parts one after another. As said earlier, the robot will continue cleaning where it left off if the batteries run low and it needs to recharge.&lt;/p&gt;
&lt;p&gt;The robot has no problem detecting stairs. Neato has also provided a roll of magnetic strip that can be used as a boundary marker. The robot will not cross this strip and will clean around it.&lt;/p&gt;
&lt;p&gt;However smart the XV-15 may be, you need to make your house robot-proof. The first time you start cleaning with the Neato, it is advisable to monitor its progress and 'fix' difficult spots in your house. I have no experience with other robots, but I think this is true for all of them.&lt;/p&gt;
&lt;p&gt;The robot is just low enough that it can clean underneath my central heating radiators, which is very nice. It also has no trouble cleaning under my bed, an area which seems to collect dust very fast.&lt;/p&gt;
&lt;p&gt;The robot has never had any problems finding the base. It gently wiggles its behind towards the base until it has a connection. It then informs you with a sound that it has finished cleaning.&lt;/p&gt;
&lt;h3&gt;Docking station&lt;/h3&gt;
&lt;p&gt;The XV-15 comes with a docking station that allows the device to automatically recharge for the next run. The XV-15 will return to the docking station if the batteries are low; when recharged, it returns to the spot where it aborted cleaning and continues. If you have a single-story apartment, the XV-15 will thus clean the entire apartment all by itself, even if it can't do so on a single battery charge. &lt;/p&gt;
&lt;p&gt;&lt;img alt="Neato XV-15" src="https://louwrentius.com/static/images/xv-15-02.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;The docking station allows you to put excess power cord into the station itself, to keep cable clutter to a minimum. You can also reroute the cable to exit the station from either the left side or right side.&lt;/p&gt;
&lt;p&gt;&lt;img alt="Neato XV-15" src="https://louwrentius.com/static/images/xv-15-04.jpg" /&gt;&lt;/p&gt;
&lt;h3&gt;Scheduling&lt;/h3&gt;
&lt;p&gt;The robot can start cleaning with a press of the big orange button. The robot will start cleaning and return to the docking station when finished. Ideally, you want to have the robot clean the house when you're not around. Fortunately you can set a schedule for all seven days of the week. &lt;/p&gt;
&lt;p&gt;The robot has a clear LCD screen with a very easy menu for setting the clock and entering a schedule. A few simple buttons allow you to enter a schedule, which probably has to be done only once. I have it set to clean every other day, except for the weekend. &lt;/p&gt;
&lt;p&gt;Scheduling is extremely simple: for all seven days of the week, you can configure a start time or choose not to clean that day. That's all.&lt;/p&gt;
&lt;h3&gt;Noise level&lt;/h3&gt;
&lt;p&gt;When you start the XV-15 for the first time, you will be surprised by the amount of noise this little device generates. The vacuum motor is loud, but the rubber brush adds an additional roaring and rattling sound that is almost unbearable. &lt;/p&gt;
&lt;p&gt;The rubber brush keeps hitting the floor, causing the loud rattling sound. I had to add some felt strips to the bottom to raise the robot a little from the ground. This eliminated the rattling, but the robot is still very loud. Keep this in mind. &lt;/p&gt;
&lt;p&gt;I think the noise level is the biggest downside of this robot.&lt;/p&gt;
&lt;h3&gt;Cleaning performance&lt;/h3&gt;
&lt;p&gt;The picture shows what the XV-15 can collect during a sweep. I did not perform any scientific tests to verify the cleaning performance of the robot, but any visible dirt is always devoured by it. I'm personally very pleased with the results.&lt;/p&gt;
&lt;p&gt;&lt;img alt="dirt" src="https://louwrentius.com/static/images/xv-15-06.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;I found a &lt;a href="http://www.robotsaldetalle.es/review/neato-xv-15/#prueba%20de%20eficiencia"&gt;source&lt;/a&gt; written in Spanish that seems to suggest that the XV-15 does a significantly worse job of cleaning (67%) than the Roomba 780 (97%), but it is an artificial test that does not use the stuff the robot is supposed to clean: (fine) dust and hair. It may thus be possible that the dumb Roombas clean better. I don't know. &lt;/p&gt;
&lt;p&gt;I only can tell you that even if you clean daily and you have some pets, you will find quite some stuff inside the dustbin after each run.&lt;/p&gt;
&lt;h3&gt;Maintenance&lt;/h3&gt;
&lt;p&gt;The iRobot Roomba range of products seems to require quite some maintenance. The biggest issue with the Roombas is that you need to clean hair out of the bearings and brushes after each run. This is not necessary with the XV-15.&lt;/p&gt;
&lt;p&gt;I don't know how much time cleaning a Roomba takes, but I have an issue with that: why bother with a robot if you have to clean the robot instead of the house itself? Yes, cleaning the robot takes less time, but it's probably no fun either.&lt;/p&gt;
&lt;p&gt;The only thing you need to do when the XV-15 is finished is empty the dustbin and clean the filter. That will take no longer than 30 seconds, I guess. There is no need to clean the brush or bearings, though it is of course advisable to inspect them now and then. &lt;/p&gt;
&lt;p&gt;&lt;img alt="XV-15" src="https://louwrentius.com/static/images/xv-15-05.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;Checking the condition of the rubber brush and bearings is very easy. The brush guard can be removed without tools in seconds. Removing the rubber brush is just as easy and cleaning the axles shouldn't take long if ever required. I've never had to clean the brush itself. It seems that hair gets sucked up and doesn't stick to the brush. &lt;/p&gt;
&lt;h3&gt;Inside the box&lt;/h3&gt;
&lt;p&gt;The XV-15 comes with an additional rubber brush and four additional filters. According to Neato, you need to replace the filter every three to six months, depending on the frequency of your cleaning schedule. At 16 euros ($20) for 4 filters, that's not a big deal I guess.&lt;/p&gt;
&lt;p&gt;I couldn't find any details on how long the rubber brush will last.&lt;/p&gt;
&lt;h3&gt;Updating the software&lt;/h3&gt;
&lt;p&gt;If you take a closer look at the back of the robot, you will notice that at the left side of the big exhaust vent, two small ports are present: for power (only useful if you do not use the docking station) and a USB port. The USB port can be used to update the software of the robot to the latest version.&lt;/p&gt;
&lt;p&gt;&lt;img alt="Neato XV-15" src="https://louwrentius.com/static/images/xv-15-03.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;Please note that Neato does not supply a USB cable, so you need to get a mini-USB cable when you want to update the software (firmware) of the robot. Bad news for Apple and Linux users: the firmware update software only runs on Windows. You can update the robot from Windows running inside VMware (Workstation or Fusion).&lt;/p&gt;
&lt;p&gt;Take a look at &lt;a href="http://www.neatorobotics.com/support/neato-software-updates"&gt;Neato's update page&lt;/a&gt; to see if new updates are available.&lt;/p&gt;
&lt;h3&gt;Hacking the XV-15&lt;/h3&gt;
&lt;p&gt;When the robot is connected to the computer through USB, you can communicate with the device through Hyperterminal or Minicom. If you like hacking your robot, continue reading &lt;a href="http://random-workshop.blogspot.com/2010/12/communicating-with-xv-11.html"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;I'm quite happy with the robot. The biggest question is how long this device will last. At first, a robot like this seems a bit like a toy, and it may be, but it is a pretty darn useful one. &lt;/p&gt;
&lt;p&gt;The lack of maintenance compared to the other robots is a big plus to me. If you have to spend time on cleaning the robot itself, where is the benefit?&lt;/p&gt;
&lt;p&gt;To me, the only downside is the noise. &lt;/p&gt;
&lt;p&gt;It can't vacuum the stairs. It can't vacuum in every corner. But the device can clean the majority of your house more often than you would probably have done yourself.&lt;/p&gt;
&lt;h3&gt;Additional sources&lt;/h3&gt;
&lt;p&gt;&lt;a href="http://www.robotreviews.com/chat/viewforum.php?f=20&amp;amp;sid=31179a9933b98c277efe8e0b049a4c25"&gt;Robot Reviews&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Very cool &lt;a href="http://www.youtube.com/watch?v=g8gDB08rnGE&amp;amp;feature=related"&gt;youtube film&lt;/a&gt; showing the robot through 'near infrared' view.&lt;/p&gt;</content><category term="Uncategorized"></category></entry><entry><title>Speeding up Linux MDADM RAID array rebuild time using bitmaps</title><link href="https://louwrentius.com/speeding-up-linux-mdadm-raid-array-rebuild-time-using-bitmaps.html" rel="alternate"></link><published>2011-12-22T21:00:00+01:00</published><updated>2011-12-22T21:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-12-22:/speeding-up-linux-mdadm-raid-array-rebuild-time-using-bitmaps.html</id><summary type="html">&lt;hr&gt;
&lt;p&gt;Update 2020: Please &lt;a href="https://louwrentius.com/the-impact-of-the-mdadm-bitmap-on-raid-performance.html"&gt;beware of the impact&lt;/a&gt; on random write I/O performance. &lt;/p&gt;
&lt;p&gt;Please note that with a modern Linux distribution, bitmaps are enabled by default. They will not help speed up a rebuild after a failed drive. But it will help resync an array that got out-of-sync due to …&lt;/p&gt;</summary><content type="html">&lt;hr&gt;
&lt;p&gt;Update 2020: Please &lt;a href="https://louwrentius.com/the-impact-of-the-mdadm-bitmap-on-raid-performance.html"&gt;beware of the impact&lt;/a&gt; on random write I/O performance. &lt;/p&gt;
&lt;p&gt;Please note that with a modern Linux distribution, bitmaps are enabled by default. They will not help speed up a rebuild after a failed drive, but they will help resync an array that got out of sync due to a power failure or another intermittent cause.&lt;/p&gt;
&lt;hr&gt;

&lt;p&gt;When a disk fails or gets kicked out of your RAID array, it often takes a lot of time to recover the array. It takes 5 hours for my own array of 20 disks to recover a single drive.&lt;/p&gt;
&lt;p&gt;Wouldn't it be nice if that time could be reduced? Even to 5 seconds? &lt;/p&gt;
&lt;p&gt;Although not enabled by default, you can enable so-called 'bitmaps'. As I understand it, a bitmap is basically a map of your RAID array that charts which areas need to be resynced if a drive fails. &lt;/p&gt;
&lt;p&gt;This is great, because I have the issue that roughly once every 30 reboots, a disk won't get recognized and the array comes up degraded. Adding the disk back into the array means that the system will be recovering for 5+ hours. &lt;/p&gt;
&lt;p&gt;I enabled bitmaps, and after adding a missing disk back into the array, the array was recovered &lt;em&gt;instantly&lt;/em&gt;. &lt;/p&gt;
&lt;p&gt;Isn't that cool?&lt;/p&gt;
&lt;p&gt;So there are two types of bitmaps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;internal: part of the array itself&lt;/li&gt;
&lt;li&gt;external: a file residing on an external drive outside the array&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The internal bitmap is integrated in the array itself. Keeping the bitmap up to date will probably affect the performance of the array. However, I didn't notice any performance degradation.&lt;/p&gt;
&lt;p&gt;The external bitmap is a file that must reside on an EXT2- or EXT3-based file system that is not on top of the RAID array. This means that you need an extra drive for this or must use your boot drive. I can imagine that this solution has less impact on the performance of the array, but it is a bit more hassle to maintain. &lt;/p&gt;
&lt;p&gt;I enabled an internal bitmap on my RAID arrays like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mdadm --grow /dev/md5 --bitmap=internal
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This is all there is to it. You can configure an external bitmap like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mdadm --grow /dev/md5 --bitmap=/some/directory/somefilename
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;There probably will be some performance penalty involved, but it does not seem to affect sequential throughput, which is the only thing that is important for my particular case.&lt;/p&gt;
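&lt;p&gt;To check whether a bitmap is actually active, look at /proc/mdstat: arrays with a bitmap get an extra "bitmap:" line. A small sketch using illustrative sample output (the device name md5 and the numbers below are just examples, not real output from my system):&lt;/p&gt;

```shell
# Sample /proc/mdstat excerpt for an array with an internal bitmap;
# on a real system you would simply run: cat /proc/mdstat
mdstat='md5 : active raid6 sdb1[0] sdc1[1] sdd1[2]
      3906764800 blocks level 6, 64k chunk, algorithm 2
      bitmap: 8/15 pages [60KB], 131072KB chunk'

# The "bitmap:" line is only present when a bitmap is enabled:
if printf '%s\n' "$mdstat" | grep -q 'bitmap:'; then
    echo "bitmap enabled"
fi
```

&lt;p&gt;Should the bitmap's write overhead ever become a problem, it can be removed again with mdadm --grow /dev/md5 --bitmap=none.&lt;/p&gt;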
&lt;p&gt;For most people, I would recommend configuring an internal bitmap, unless you really know why you would have to use an external bitmap.&lt;/p&gt;</content><category term="Storage"></category></entry><entry><title>Setting up a VPN with your iPhone using L2TP, IPSec and Linux</title><link href="https://louwrentius.com/setting-up-a-vpn-with-your-iphone-using-l2tp-ipsec-and-linux.html" rel="alternate"></link><published>2011-12-11T16:00:00+01:00</published><updated>2011-12-11T16:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-12-11:/setting-up-a-vpn-with-your-iphone-using-l2tp-ipsec-and-linux.html</id><summary type="html">&lt;p&gt;This blogpost discusses how to set up an IPSec-based VPN between your iPhone and a Linux server. &lt;/p&gt;
&lt;p&gt;Updated 16 October 2012 - now compatible with Ubuntu 12.04 LTS&lt;/p&gt;
&lt;h3&gt;IMPORTANT! (update January 2013)&lt;/h3&gt;
&lt;p&gt;I find using &lt;a href="https://louwrentius.com/blog/2013/01/setup-a-vpn-on-your-iphone-with-openvpn-and-linux/"&gt;OpenVPN with the new iOS OpenVPN client&lt;/a&gt; a way better solution. OpenVPN actually restores VPN connectivity …&lt;/p&gt;</summary><content type="html">&lt;p&gt;This blogpost discusses how to set up an IPSec-based VPN between your iPhone and a Linux server. &lt;/p&gt;
&lt;p&gt;Updated 16 October 2012 - now compatible with Ubuntu 12.04 LTS&lt;/p&gt;
&lt;h3&gt;IMPORTANT! (update January 2013)&lt;/h3&gt;
&lt;p&gt;I find using &lt;a href="https://louwrentius.com/blog/2013/01/setup-a-vpn-on-your-iphone-with-openvpn-and-linux/"&gt;OpenVPN with the new iOS OpenVPN client&lt;/a&gt; a way better solution. OpenVPN actually restores VPN connectivity when returning from sleep.&lt;/p&gt;
&lt;h3&gt;Why use a VPN with your iPhone?&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;Security: all data is encrypted and cannot be read by malicious people trying to eavesdrop on your data.&lt;/li&gt;
&lt;li&gt;Performance: my subjective experience is that a VPN can speed up web browsing; it seems to reduce latency.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Introduction&lt;/h3&gt;
&lt;p&gt;I am assuming that you use: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;an iPhone as the VPN client&lt;/li&gt;
&lt;li&gt;a Debian-based Linux distro, such as Debian or Ubuntu (used here)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We will use the following software:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;openswan&lt;/li&gt;
&lt;li&gt;xl2tpd&lt;/li&gt;
&lt;li&gt;pppd&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To setup the VPN, we need to configure the following steps:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;install the software&lt;/li&gt;
&lt;li&gt;configure IPSec&lt;/li&gt;
&lt;li&gt;configure L2TP&lt;/li&gt;
&lt;li&gt;configure PPP&lt;/li&gt;
&lt;li&gt;open up the appropriate firewall ports&lt;/li&gt;
&lt;li&gt;setup firewall rules to forward traffic between the iPhone and Internet&lt;/li&gt;
&lt;li&gt;configure the iPhone&lt;/li&gt;
&lt;/ol&gt;
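&lt;p&gt;For step 5, an L2TP/IPSec server generally needs three UDP ports reachable: 500 (IKE), 4500 (NAT traversal) and 1701 (L2TP). A minimal sketch that prints the corresponding Iptables rules rather than applying them, so you can review them first and adapt them to your own firewall setup:&lt;/p&gt;

```shell
# UDP ports used by L2TP/IPSec:
#   500 = IKE, 4500 = IPSec NAT traversal (NAT-T), 1701 = L2TP
VPN_PORTS="500 4500 1701"

for port in $VPN_PORTS; do
    # Printed instead of executed; pipe the output to 'sh' to apply.
    echo "iptables -A INPUT -p udp --dport $port -j ACCEPT"
done
```

&lt;p&gt;Port 1701 only needs to be open for L2TP packets arriving outside the IPSec tunnel; some setups deliberately leave it closed, so treat this list as a starting point.&lt;/p&gt;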
&lt;p&gt;This set of instructions is 90% based on instructions on &lt;a href="https://peen.net/2009/04/linux-l2tpipsec-with-iphone-and-mac-osx-clients-2/"&gt;peen.net&lt;/a&gt; made by Niels Peen (Greetings!). I borrowed some other stuff from &lt;a href="http://confoundedtech.blogspot.com/2011/09/configure-iphone-ios-to-use-ipsec-vpn.html"&gt;this blog&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I use openswan for IPSec support because strongswan does not support NAT by default. I just want to use software that is part of the operating system and don't like having to maintain manually compiled versions.&lt;/p&gt;
&lt;h3&gt;Initial assumptions&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;You are using a Linux host as the VPN server&lt;/li&gt;
&lt;li&gt;The server is accessible from the internet or the appropriate UDP ports are forwarded to the box. &lt;/li&gt;
&lt;li&gt;You have full control over the box and its firewall configuration.&lt;/li&gt;
&lt;li&gt;Your iPhone has an unfiltered internet connection. If UDP is blocked, this type of VPN is not for you.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Install the software&lt;/h3&gt;
&lt;p&gt;First, we start with installing all required software:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;apt-get install openswan xl2tpd ppp
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Configure IPSec&lt;/h3&gt;
&lt;p&gt;Now we start with configuring the software. First we start with IPSec:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;etc&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;ipsec&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;conf&lt;/span&gt;

&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;setup&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;nat_traversal&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;yes&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;protostack&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;netkey&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;plutostderrlog&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;tmp&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;txt&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;debugging&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nx"&gt;conn&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;L2TP&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;PSK&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;authby&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;secret&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;pfs&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;no&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;rekey&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;no&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;tunnel&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;esp&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;aes128&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;sha1&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;ike&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;aes128&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;sha&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;modp1024&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;ikelifetime&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="nx"&gt;h&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;keylife&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="nx"&gt;h&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;left&lt;/span&gt;&lt;span class="p"&gt;=&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;INTERNET&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;IP&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;ADDRESS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;OF&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;ROUTER&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;SERVER&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;leftnexthop&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="nx"&gt;defaultroute&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;leftprotoport&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;17&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;1701&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;right&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="nx"&gt;any&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;rightprotoport&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;17&lt;/span&gt;&lt;span class="o"&gt;/%&lt;/span&gt;&lt;span class="nx"&gt;any&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;rightsubnetwithin&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="m m-Double"&gt;0.0.0.0&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kt"&gt;auto&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;add&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;dpddelay&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;dpdtimeout&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;120&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;dpdaction&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;clear&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Some notes about this configuration:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;We use a secret or password for authentication. Sources on the internet seem to suggest that the iPhone cannot handle certificates.&lt;/li&gt;
&lt;li&gt;We must configure the dead peer detection rules at the bottom, or else you cannot reconnect to the VPN when returning from sleep. &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We thus also need to configure an encryption secret (password) for the IPSec tunnel. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;etc&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;ipsec&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;secrets&lt;/span&gt;

&lt;span class="nf"&gt;%any&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nf"&gt;%any&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;PSK&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;thisismysupersecretpassword&amp;quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;It is smart to choose a strong (long) password. &lt;/p&gt;
&lt;h3&gt;Configure L2TP&lt;/h3&gt;
&lt;p&gt;Inside the directory /etc/xl2tpd you have to edit xl2tpd.conf like this: &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;[global]&lt;/span&gt;
&lt;span class="na"&gt;auth file&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;/etc/l2tpd/l2tp-secrets&lt;/span&gt;
&lt;span class="na"&gt;debug network&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;yes&lt;/span&gt;
&lt;span class="na"&gt;debug tunnel&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;yes&lt;/span&gt;

&lt;span class="k"&gt;[lns default]&lt;/span&gt;

&lt;span class="na"&gt;ip range&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;10.0.1.201-10.0.1.240&lt;/span&gt;
&lt;span class="na"&gt;local ip&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;10.0.1.200&lt;/span&gt;
&lt;span class="na"&gt;require chap&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;yes&lt;/span&gt;
&lt;span class="na"&gt;refuse pap&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;yes&lt;/span&gt;
&lt;span class="na"&gt;require authentication&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;yes&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;lt;ENTER SOME NAME HERE&amp;gt;&lt;/span&gt;
&lt;span class="na"&gt;ppp debug&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;yes&lt;/span&gt;
&lt;span class="na"&gt;pppoptfile&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;/etc/ppp/options.xl2tpd&lt;/span&gt;
&lt;span class="na"&gt;length bit&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;yes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The "ip range" is within your internal network. It is a range outside of your DHCP-scope. The "ip range" must not include the "local ip". This IP address is dedicated to your Linux host.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; once the VPN setup is working properly, &lt;strong&gt;turn off all debugging options&lt;/strong&gt; (set them to 'no'). Otherwise your logs will fill up very quickly, because every transmitted packet is logged.&lt;/p&gt;
&lt;h3&gt;Configure PPP&lt;/h3&gt;
&lt;p&gt;Now we must configure PPP. Edit /etc/ppp/options.xl2tpd and make it look like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nv"&gt;ipcp&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nv"&gt;accept&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nv"&gt;local&lt;/span&gt;
&lt;span class="nv"&gt;ipcp&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nv"&gt;accept&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nv"&gt;remote&lt;/span&gt;
&lt;span class="nv"&gt;ms&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nv"&gt;dns&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nv"&gt;ADDRESS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;OF&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;LOCAL&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;OR&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;REMOTE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;DNS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;SERVER&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="nv"&gt;noccp&lt;/span&gt;
&lt;span class="nv"&gt;auth&lt;/span&gt;
&lt;span class="nv"&gt;crtscts&lt;/span&gt;
&lt;span class="nv"&gt;idle&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1800&lt;/span&gt;
&lt;span class="nv"&gt;mtu&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1410&lt;/span&gt;
&lt;span class="nv"&gt;mru&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1410&lt;/span&gt;
&lt;span class="nv"&gt;nodefaultroute&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;
&lt;span class="nv"&gt;debug&lt;/span&gt;
&lt;span class="nv"&gt;lock&lt;/span&gt;
&lt;span class="nv"&gt;proxyarp&lt;/span&gt;
&lt;span class="k"&gt;connect&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nv"&gt;delay&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Note that you must enter the address of a valid DNS server that is reachable by the VPN client (the iPhone) through the tunnel. &lt;/p&gt;
&lt;p&gt;We are almost there. Now we must also configure a password for the PPP connection. Edit /etc/ppp/chap-secrets and make it look like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="k"&gt;*&lt;/span&gt; * thisissomesecretpassword *
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This password is not related to the IPSec password. I think it is wise to configure different passwords for IPSec and PPP. &lt;/p&gt;
&lt;h3&gt;Configuring the firewall&lt;/h3&gt;
&lt;p&gt;An IPSec + L2TP + PPP VPN requires the following ports to be opened:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;500/udp&lt;/li&gt;
&lt;li&gt;4500/udp&lt;/li&gt;
&lt;li&gt;1701/udp&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You must open these ports in your firewall yourself. &lt;/p&gt;
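&lt;p&gt;As an illustration (assuming iptables with a default-deny INPUT chain; adapt the chain names to your own firewall setup), the rules could look like this:&lt;/p&gt;

```
iptables -A INPUT -p udp --dport 500 -j ACCEPT
iptables -A INPUT -p udp --dport 4500 -j ACCEPT
iptables -A INPUT -p udp --dport 1701 -j ACCEPT
```

&lt;p&gt;Port 500/udp carries IKE, 4500/udp carries NAT traversal (NAT-T) and 1701/udp carries the L2TP traffic itself.&lt;/p&gt;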
&lt;h3&gt;Configuring traffic forwarding rules&lt;/h3&gt;
&lt;p&gt;If you use a Linux box with IPtables, you may already have a functioning configuration. However, this line is required for traffic forwarding to work:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Replace the subnet and the network interface with the values that match your configuration. You may also have to enable traffic forwarding like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;echo 1 &amp;gt; /proc/sys/net/ipv4/ip_forwarding
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
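&lt;p&gt;To make forwarding survive a reboot, you can also set it through sysctl. A minimal sketch, assuming a standard Linux /etc/sysctl.conf:&lt;/p&gt;

```
# /etc/sysctl.conf
net.ipv4.ip_forward = 1
```

&lt;p&gt;Apply the setting without rebooting by running 'sysctl -p'.&lt;/p&gt;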

&lt;p&gt;A detailed firewall configuration guide is outside the scope of this tutorial. &lt;/p&gt;
&lt;p&gt;If you use IPtables for your local firewall, you may be interested in my &lt;a href="http://code.google.com/p/lfs"&gt;"Linux Firewall script"&lt;/a&gt; (shameless plug alert).&lt;/p&gt;
&lt;h3&gt;Configuring the iPhone&lt;/h3&gt;
&lt;p&gt;To configure a VPN profile, go to Settings -&amp;gt; General -&amp;gt; Network -&amp;gt; VPN (at the bottom) and choose 'Add VPN Configuration...'&lt;/p&gt;
&lt;p&gt;&lt;img alt="ipsec iphone config" src="https://louwrentius.com/static/images/ipsec01.png" /&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Enter a description&lt;/li&gt;
&lt;li&gt;Enter the IP address or DNS name of your Linux box.&lt;/li&gt;
&lt;li&gt;The 'account' field can be anything you like.&lt;/li&gt;
&lt;li&gt;Leave RSA SecurID off.&lt;/li&gt;
&lt;li&gt;The Password is the PPP password configured in /etc/ppp/chap-secrets&lt;/li&gt;
&lt;li&gt;The IPSec secret (/etc/ipsec.secrets) goes into the 'Secret' field.&lt;/li&gt;
&lt;li&gt;Keep 'Send All Traffic' enabled. &lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If the connection succeeds, a VPN symbol will show up in the iPhone status bar. All traffic from then on will flow through the VPN.&lt;/p&gt;
&lt;p&gt;It may not immediately work. Look in /var/log/auth.log and /var/log/daemon.log for debug messages.&lt;/p&gt;
&lt;p&gt;Once it is working properly, disable all debug settings in xl2tpd.conf and restart the daemon.&lt;/p&gt;
&lt;h3&gt;Final remarks&lt;/h3&gt;
&lt;p&gt;You may have to tweak the 'dead peer detection' within the IPSec configuration. When the iPhone comes out of sleep, the VPN connection cannot be reinitiated right away, which is inconvenient.&lt;/p&gt;
&lt;p&gt;Also, I'm not sure what the impact is on battery life.&lt;/p&gt;</content><category term="Security"></category></entry><entry><title>Is there an easy and secure way to transfer files?</title><link href="https://louwrentius.com/is-there-an-easy-and-secure-way-to-transfer-files.html" rel="alternate"></link><published>2011-09-17T09:00:00+02:00</published><updated>2011-09-17T09:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-09-17:/is-there-an-easy-and-secure-way-to-transfer-files.html</id><summary type="html">&lt;p&gt;Many organisations just assume that the local physical network is trusted. That their network equipment is physically secure and that it is impossible for an attacker to get on the wire and start eavesdropping on network traffic.&lt;/p&gt;
&lt;p&gt;Many organisations do not seem too concerned about a very old vulnerability regarding …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Many organisations just assume that the local physical network is trusted. That their network equipment is physically secure and that it is impossible for an attacker to get on the wire and start eavesdropping on network traffic.&lt;/p&gt;
&lt;p&gt;Many organisations do not seem too concerned about a very old vulnerability regarding Ethernet-based networks called ARP-poisoning. Basically, ARP-poisoning means that an attacker-controlled system steals the identity of another, legitimate server, thus drawing all network traffic away from the legitimate server to the attacker-controlled system. Then, the attacker can do with that traffic as he or she sees fit. The attacker will be performing a man-in-the-middle attack. Please note that such an attack is trivial using tools such as &lt;a href="http://www.oxid.it/cain.html"&gt;Cain and Abel&lt;/a&gt; or &lt;a href="http://monkey.org/~dugsong/dsniff/"&gt;Dsniff&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.com/static/images/maninthemiddleattack.png"&gt;&lt;img alt="mitm" src="https://louwrentius.com/static/images/mitm.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;It is often the case that many different server systems are placed in a single network segment or VLAN. That implies that each of these systems poses a threat to the others. It takes just one hacked system to compromise network traffic between all the other systems. This is especially a threat to all unencrypted network traffic, but encrypted sessions may also be attacked if clients don't check the server's identity. &lt;/p&gt;
&lt;p&gt;&lt;img alt="sharednetwork" src="https://louwrentius.com/static/images/nonetworksegmentation.png" /&gt;&lt;/p&gt;
&lt;p&gt;Unless you've actually implemented proper network segmentation using separate (V)LANS and filter traffic between these network segments through firewalling, your environment may be at risk. In that case, please understand that it takes just one single web application containing just one vulnerability to compromise the entire environment.&lt;/p&gt;
&lt;p&gt;&lt;img alt="dedicatednetwork" src="https://louwrentius.com/static/images/withnetworksegmentation.png" /&gt;&lt;/p&gt;
&lt;p&gt;Not everybody has implemented proper network segmentation and firewalling, preventing these kinds of attacks. And it takes quite some labour to change all that. So what can you do, assuming that you want to do something right now?&lt;/p&gt;
&lt;p&gt;In general, in a shared network environment, as described, the only way to make sure that data in transit is kept confidential and unmodified is to make proper use of encryption, identification and authentication.&lt;/p&gt;
&lt;h2&gt;The solution&lt;/h2&gt;
&lt;p&gt;To secure web traffic, there is already a fairly easy solution: using HTTPS or HTTP over SSL. The most difficult part is getting a valid SSL certificate and configuring the HTTP server to use it.&lt;/p&gt;
&lt;p&gt;But if you want to transfer files between servers or between clients and servers? How about that?&lt;/p&gt;
&lt;p&gt;Is there actually an easy way to securely transfer files between two hosts? From what I can see, the answer is "no". Security comes with some additional effort and it isn't easy.&lt;/p&gt;
&lt;p&gt;The first problem is to understand what 'secure' actually means. To me, it means that data is not stolen or modified by an attacker during transit.&lt;/p&gt;
&lt;p&gt;There are three requirements to make sure that confidentiality and integrity are guaranteed:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;data in transit is encrypted;&lt;/li&gt;
&lt;li&gt;the client authenticates the server;&lt;/li&gt;
&lt;li&gt;the server authenticates the client.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Encryption prevents a man-in-the-middle attacker from eavesdropping or altering data. And if the client verifies the identity of the server, the attacker cannot impersonate the real, genuine server. This part may be overlooked. "I'm using SSL, thus encryption, so I am safe, right?". That is a firm negative. The client is identified with a password, passphrase or client-side SSL certificate. But how does the client identify the server? If the client doesn't verify the identity of the server, you might as well turn encryption off. &lt;/p&gt;
&lt;p&gt;For the Windows platform, there is no native solution. Most of the time, files are transferred using SMB, and thus your files can be grabbed from the wire and you may be transferring your files to some impostor instead of the genuine host.&lt;/p&gt;
&lt;p&gt;The other often-used solution is FTP. And everyone knows that the biggest problem with FTP is the lack of data encryption. All communication, including authentication credentials, is transmitted in plain text.&lt;/p&gt;
&lt;p&gt;Without any additional third-party software, it is impossible to securely transfer files between Windows hosts, except for one solution that I have never seen used: IPsec. IPsec is used to encrypt any network traffic between two hosts, thus also SMB traffic. &lt;/p&gt;
&lt;p&gt;The Unix world has only one solution without using third-party tools and that is transferring files using SSH as a secure transport. But SSH is also used for secure shell access to hosts and it may be difficult to prevent shell access and still allow file transfers.&lt;/p&gt;
&lt;p&gt;So now there is a new tendency to use FTP over SSL. You have the same inconvenience as with HTTPS: you need to install a valid SSL certificate on each FTPS server. And although this does improve security, encryption is still useless if the client-side system does not properly validate the server's identity.&lt;/p&gt;
&lt;p&gt;Furthermore FTP uses a control channel for commands and a separate data channel to transfer the actual data. You want both channels to be encrypted, but that may not be the default. Check your FTP server's configuration to make sure this is the case.&lt;/p&gt;
&lt;h2&gt;Implementations&lt;/h2&gt;
&lt;p&gt;To implement FTP over SSL on Windows, you might want to take a look at the &lt;a href="http://filezilla-project.org/"&gt;Filezilla&lt;/a&gt; server. An FTP server that also supports FTP over SSL. It had some security vulnerabilities in the past but not too many. To me, it is a better solution than to expose TCP-port 445 to other systems. The SMB service doesn't have a good &lt;a href="http://secunia.com/community/advisories/search/?search=windows+smb"&gt;security track-record&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;For Unix environments, take a look at VSFTPD. The Very Secure FTP daemon is written by Chris Evans, who works on the security team for Google. The irony is that although VSFTPD doesn't seem to be affected by any security vulnerability itself, the provider hosting the software was compromised by an attacker. This attacker put a back-door in a specific VSFTPD release. &lt;/p&gt;
&lt;p&gt;Anyway, I still recommend VSFTPD as it is very well-documented and the configuration is simple.&lt;/p&gt;
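&lt;p&gt;As a sketch, forcing SSL on both channels in VSFTPD comes down to a handful of options in vsftpd.conf (the certificate paths below are just examples):&lt;/p&gt;

```
# vsftpd.conf fragment: enable FTP over SSL and force encryption on both
# the control (login) channel and the data channel.
ssl_enable=YES
force_local_logins_ssl=YES
force_local_data_ssl=YES
rsa_cert_file=/etc/ssl/certs/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.key
```

&lt;p&gt;With the two force options set, plain-text logins and plain-text data transfers are refused outright.&lt;/p&gt;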
&lt;p&gt;If none of these solutions is an option for your particular situation, you might think about using your existing insecure file transfer method on top of a VPN connection that handles authentication and encryption, such as &lt;a href="http://openvpn.net/"&gt;OpenVPN&lt;/a&gt;. But setting up OpenVPN within such an environment may also be cumbersome.&lt;/p&gt;
&lt;p&gt;Recent events regarding compromised certificate authorities show that the trust model that SSL authentication often leans upon may be broken. You must be sure which certificate authorities to trust. If you have your own certificate authority, make sure you take every precaution to keep it secured. &lt;/p&gt;
&lt;p&gt;Question: should the client rely on built-in CA certificates, the same ones as present in your browser? Or are you going to configure the client to accept only the single certificate of the server?&lt;/p&gt;
&lt;p&gt;If you have any other suggestions for a simple solution to securely transfer files between hosts, feel free to leave a comment.&lt;/p&gt;</content><category term="Security"></category></entry><entry><title>Script that deletes old files to keep disk from filling up</title><link href="https://louwrentius.com/script-that-deletes-old-files-to-keep-disk-from-filling-up.html" rel="alternate"></link><published>2011-08-19T00:00:00+02:00</published><updated>2011-08-19T00:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-08-19:/script-that-deletes-old-files-to-keep-disk-from-filling-up.html</id><summary type="html">&lt;p&gt;When a disk has no free space left, all kinds of trouble can occur. &lt;/p&gt;
&lt;p&gt;Therefore, I've created a &lt;a href="https://louwrentius.com/files/deleteoldfiles.sh"&gt;script&lt;/a&gt; that monitors the used space of a volume
and deletes the oldest file if a certain threshold is reached. &lt;/p&gt;
&lt;p&gt;The script will keep on deleting the oldest file present on disk …&lt;/p&gt;</summary><content type="html">&lt;p&gt;When a disk has no free space left, all kinds of trouble can occur. &lt;/p&gt;
&lt;p&gt;Therefore, I've created a &lt;a href="https://louwrentius.com/files/deleteoldfiles.sh"&gt;script&lt;/a&gt; that monitors the used space of a volume
and deletes the oldest file if a certain threshold is reached. &lt;/p&gt;
&lt;p&gt;The script will keep on deleting the oldest file present on disk until used
capacity is below the threshold.&lt;/p&gt;
&lt;p&gt;So you can tell the script to monitor volume /storage and delete old files if
the used capacity exceeds 95 percent.&lt;/p&gt;
&lt;p&gt;The script works like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;./deleteoldfiles.sh &amp;lt;mount point&amp;gt; &amp;lt;percentage&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The mount point represents a volume or physical disk. The percentage represents
the maximum used capacity threshold. &lt;/p&gt;
&lt;p&gt;The script reads the output of the 'df -h' command to determine 'disk' usage.&lt;/p&gt;
&lt;p&gt;Example:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;bash-3.2$ ./deleteoldfiles.sh /Volumes/usb 92

DELETE OLD FILES 1.00

Usage of 90% is within limit of 92 percent.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Now let's see what happens when the threshold is exceeded.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;bash-3.2$ sudo ./deleteoldfiles.sh /Volumes/usb 92

DELETE OLD FILES 1.00

Usage of 97% exceeded limit of 92 percent.
Deleting oldest file /Volumes/usb/a/file02.bin
Usage of 91% is within limit of 92 percent.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Here you notice that an old file is deleted and that the script checks again
if there is now enough free space. If not, another file would have been deleted.&lt;/p&gt;
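&lt;p&gt;As an illustration of the approach, here is a simplified, hypothetical sketch (not the actual script linked above):&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: keep deleting the oldest file on a volume until the used
# capacity drops below the given threshold.
# Usage: ./sketch.sh /mount/point percentage

# Used-capacity percentage of a mount point, parsed from 'df -P' output.
get_usage() {
    df -P "$1" | awk 'NR==2 { gsub("%", "", $5); print $5 }'
}

if [ $# -ne 2 ]; then
    echo "Usage: $0 /mount/point percentage"
    exit 0
fi

MOUNT="$1"
LIMIT="$2"

while [ "$(get_usage "$MOUNT")" -gt "$LIMIT" ]; do
    # Oldest regular file on the volume (a sketch: this breaks on file
    # names containing spaces or newlines).
    OLDEST=$(find "$MOUNT" -type f -exec ls -tr {} + | head -n 1)
    echo "Deleting oldest file $OLDEST"
    rm -f "$OLDEST"
done
```

&lt;p&gt;Run it as root if the monitored volume contains files owned by other users.&lt;/p&gt;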
&lt;p&gt;If you have a need for it, have fun. It was a fun little scripting exercise.&lt;/p&gt;
&lt;p&gt;The script works under Linux and Mac OS X.&lt;/p&gt;</content><category term="Linux"></category></entry><entry><title>Lion's FileVault does not support Bootcamp and external boot disks</title><link href="https://louwrentius.com/lions-filevault-does-not-support-bootcamp-and-external-boot-disks.html" rel="alternate"></link><published>2011-08-05T01:00:00+02:00</published><updated>2011-08-05T01:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-08-05:/lions-filevault-does-not-support-bootcamp-and-external-boot-disks.html</id><summary type="html">&lt;p&gt;&lt;strong&gt;Read the comments as they may provide useful information for your particular situation&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I boot my iMac from an external FW800 SSD. I found out that it is impossible to
encrypt this disk using the new FileVault as part of Lion.&lt;/p&gt;
&lt;p&gt;&lt;img alt="no filevault" src="https://louwrentius.com/static/images/nofv.png" /&gt;&lt;/p&gt;
&lt;p&gt;Furthermore, I also found out that if you have …&lt;/p&gt;</summary><content type="html">&lt;p&gt;&lt;strong&gt;Read the comments as they may provide useful information for your particular situation&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I boot my iMac from an external FW800 SSD. I found out that it is impossible to
encrypt this disk using the new FileVault as part of Lion.&lt;/p&gt;
&lt;p&gt;&lt;img alt="no filevault" src="https://louwrentius.com/static/images/nofv.png" /&gt;&lt;/p&gt;
&lt;p&gt;Furthermore, I also found out that if you have a disk with a Bootcamp partition
FileVault will also refuse to start the encryption process. I'm not trying to 
encrypt the Bootcamp volume, just the bootable Mac OS X Lion installation.&lt;/p&gt;
&lt;p&gt;&lt;img alt="no encryption with bootcamp" src="https://louwrentius.com/static/images/nobootcampfv.png" /&gt;&lt;/p&gt;
&lt;p&gt;You may want to stay away from Lion if you need a setup similar to this one 
and also need disk encryption. &lt;/p&gt;</content><category term="Uncategorized"></category></entry><entry><title>Lion's Disk Utility not compatible with CoreStorage and Filevault</title><link href="https://louwrentius.com/lions-disk-utility-not-compatible-with-corestorage-and-filevault.html" rel="alternate"></link><published>2011-08-03T00:00:00+02:00</published><updated>2011-08-03T00:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-08-03:/lions-disk-utility-not-compatible-with-corestorage-and-filevault.html</id><summary type="html">&lt;p&gt;My 1 TB hard drive of my 2011 27" iMac was partitioned with:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;A bootable partition with Mac OS X Lion&lt;/li&gt;
&lt;li&gt;Time machine partition (I use an external SSD as my main OS)&lt;/li&gt;
&lt;li&gt;Bootcamp partition with windows&lt;/li&gt;
&lt;li&gt;A data partition containing... data.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The problem is that CoreStorage is too new …&lt;/p&gt;</summary><content type="html">&lt;p&gt;My 1 TB hard drive of my 2011 27" iMac was partitioned with:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;A bootable partition with Mac OS X Lion&lt;/li&gt;
&lt;li&gt;Time machine partition (I use an external SSD as my main OS)&lt;/li&gt;
&lt;li&gt;Bootcamp partition with windows&lt;/li&gt;
&lt;li&gt;A data partition containing... data.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The problem is that CoreStorage is too new. Disk Utility in Lion cannot cope with CoreStorage volumes. So when I decided to encrypt the bootable partition using the new Full Disk Encryption based on FileVault, I could no longer manage my other partitions.  &lt;/p&gt;
&lt;p&gt;&lt;img alt="disk utility" src="https://louwrentius.com/static/images/du01.png" /&gt;&lt;/p&gt;
&lt;p&gt;Furthermore, Bootcamp got killed, since it needs to be installed on one of the first three partitions on the disk. Due to the whole CoreStorage stuff and FileVault, it became the fifth partition and it got killed. I couldn't get it back to life; it wouldn't boot.&lt;/p&gt;
&lt;p&gt;What I want now is to create a setup where I have three partitions:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;A bootable (Boot) clone of my external FW800 SSD boot disk using SuperDuper&lt;/li&gt;
&lt;li&gt;A Bootcamp volume running Windows (for games)&lt;/li&gt;
&lt;li&gt;A data partition storing, well... data.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I want to encrypt the boot disk and the data partition. If this is going to work, I don't know. &lt;/p&gt;
&lt;p&gt;You may want to stay away from Lion if you need a setup similar to this one and also need disk encryption. &lt;/p&gt;</content><category term="Uncategorized"></category></entry><entry><title>Achieving 220 MB/s network file transfers using Linux Bonding</title><link href="https://louwrentius.com/achieving-220-mbs-network-file-transfers-using-linux-bonding.html" rel="alternate"></link><published>2011-07-29T01:00:00+02:00</published><updated>2011-07-29T01:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-07-29:/achieving-220-mbs-network-file-transfers-using-linux-bonding.html</id><summary type="html">&lt;p&gt;I wrote an &lt;a href="https://louwrentius.com/blog/2010/11/linux-network-interface-bonding-trunking-or-how-to-get-beyond-1-gbs/"&gt;article&lt;/a&gt; about the subject of getting beyond the limits of gigabit network file transfers. My solution is to use multiple gigabit network cards and use Linux interface bonding to create virtual 2 gigabit network interfaces. The solution is to use mode 0 or round robin bonding. I …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I wrote an &lt;a href="https://louwrentius.com/blog/2010/11/linux-network-interface-bonding-trunking-or-how-to-get-beyond-1-gbs/"&gt;article&lt;/a&gt; about the subject of getting beyond the limits of gigabit network file transfers. My solution is to use multiple gigabit network cards and use Linux interface bonding to create virtual 2 gigabit network interfaces. The solution is to use mode 0 or round robin bonding. I do not use a switch, although that also works fine. Instead, I just connected two cables between the two machines.&lt;/p&gt;
&lt;p&gt;In my original article, I couldn't get past 150 MB/s file transfer speeds, so the results weren't that great. However, these poor results were due to hardware compatibility issues. Although the on-board network card worked fine, the Intel e1000e card in the PCIe slot didn't perform well. I replaced it with an HP Broadcom card and everything is working smoothly now.&lt;/p&gt;
&lt;p&gt;With two gigabit network cards bonded together I can achieve 220 MB/s through a single file transfer over NFS. &lt;/p&gt;
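&lt;p&gt;For reference, a round robin bond on Debian-style systems can be sketched in /etc/network/interfaces like this (the interface names and addresses are just examples; this assumes the ifenslave package is installed):&lt;/p&gt;

```
# /etc/network/interfaces fragment: bond eth1 and eth2 into bond0
# using mode 0 (balance-rr, round robin).
auto bond0
iface bond0 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    bond-mode balance-rr
    bond-miimon 100
    bond-slaves eth1 eth2
```

&lt;p&gt;Mode 0 (balance-rr) is the only bonding mode that lets a single TCP stream exceed the bandwidth of one physical link, which is why it is used here.&lt;/p&gt;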
&lt;p&gt;It would be interesting if a quad port server adapter would be able to achieve 440 MB/s network speeds, but I don't have the equipment to test this.  &lt;/p&gt;</content><category term="Networking"></category></entry><entry><title>Additional proof that Apple is ditching the optical drive</title><link href="https://louwrentius.com/additional-proof-that-apple-is-ditching-the-optical-drive.html" rel="alternate"></link><published>2011-07-23T15:00:00+02:00</published><updated>2011-07-23T15:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-07-23:/additional-proof-that-apple-is-ditching-the-optical-drive.html</id><summary type="html">&lt;p&gt;I'm a &lt;a href="https://louwrentius.com/blog/2009/10/blu-ray-is-dead/"&gt;strong advocate&lt;/a&gt; of &lt;a href="https://louwrentius.com/blog/2010/10/apple-is-killing-off-the-optical-drive-just-like-the-floppy-disk/"&gt;killing the optical drive&lt;/a&gt;. As of 2011, there is no need for it anymore. Laptops could get lighter, smaller or have more room for additional battery capacity if the optical drive would no longer be present.&lt;/p&gt;
&lt;p&gt;In my life, I never see people use the …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I'm a &lt;a href="https://louwrentius.com/blog/2009/10/blu-ray-is-dead/"&gt;strong advocate&lt;/a&gt; of &lt;a href="https://louwrentius.com/blog/2010/10/apple-is-killing-off-the-optical-drive-just-like-the-floppy-disk/"&gt;killing the optical drive&lt;/a&gt;. As of 2011, there is no need for it anymore. Laptops could get lighter, smaller or have more room for additional battery capacity if the optical drive would no longer be present.&lt;/p&gt;
&lt;p&gt;In my life, I never see people use the optical drive. And why would you use it any more? If you are still using CDs or DVDs with your computer, isn't that just out of (a bad) habit? And if you really can't part with your CDs or DVDs, wouldn't an external USB optical drive be a usable solution? &lt;/p&gt;
&lt;p&gt;I think that we are at a point where most people don't even know that their computer has an optical drive. &lt;/p&gt;
&lt;p&gt;With the release of the new 2011 Mac Mini, Apple dropped the optical drive yet again. They first dropped it from the MacBook Air and now the Mini. &lt;/p&gt;
&lt;p&gt;What is next? Well that is clear. New Macs &lt;a href="http://support.apple.com/kb/HT4718"&gt;will be able to boot over the Internet&lt;/a&gt; from Apple's servers. Again, no need for an optical drive, even for reinstalling your computer. I wouldn't be surprised if the next generation of MacBook Pro laptops would not contain an optical drive. Maybe some people aren't ready for it but people should rejoice since it would make MacBooks thinner and lighter.&lt;/p&gt;
&lt;p&gt;As apple killed the floppy drive, it is now killing the optical drive. &lt;/p&gt;</content><category term="Uncategorized"></category></entry><entry><title>Cheap solution for putting an SSD in an iMac</title><link href="https://louwrentius.com/cheap-solution-for-putting-an-ssd-in-an-imac.html" rel="alternate"></link><published>2011-07-11T20:00:00+02:00</published><updated>2011-07-11T20:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-07-11:/cheap-solution-for-putting-an-ssd-in-an-imac.html</id><summary type="html">&lt;p&gt;If you want to order a new iMac with an SSD there are two problems:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;An SSD of 256 gigabytes will cost you a lot: 600 euros.&lt;/li&gt;
&lt;li&gt;The SSD is of medium quality, and does not justify the cost of 600 euros.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;a href="http://forums.macrumors.com/showthread.php?t=1159154"&gt;Here&lt;/a&gt; you can find information about the Toshiba …&lt;/p&gt;</summary><content type="html">&lt;p&gt;If you want to order a new iMac with an SSD there are two problems:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;An SSD of 256 gigabytes will cost you a lot: 600 euros.&lt;/li&gt;
&lt;li&gt;The SSD is of medium quality, and does not justify the cost of 600 euros.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;a href="http://forums.macrumors.com/showthread.php?t=1159154"&gt;Here&lt;/a&gt; you can find information about the Toshiba SSD that Apple provides.&lt;/p&gt;
&lt;p&gt;You can consider buying an SSD for half the price and putting it inside your new iMac yourself afterwards. However, this is a &lt;a href="http://blog.chargedpc.com/2011/05/2011-imac-ssd-install-guide.html"&gt;hassle&lt;/a&gt; and you will probably void your warranty.&lt;/p&gt;
&lt;p&gt;The solution is to buy an SSD yourself and put it in an external Firewire 800 casing. Firewire 800 provides about 70 megabytes per second of throughput, which is enough for most applications. It is about three times faster than USB, and the latency of Firewire is lower than that of USB, thus improving responsiveness.&lt;/p&gt;
&lt;p&gt;Thunderbolt would provide the best solution, but as of July 2011, there are no Thunderbolt products on the market. &lt;/p&gt;
&lt;p&gt;The product I use is an external 2.5 inch Firewire 800 casing from OWC: the &lt;a href="http://eshop.macsales.com/item/Other%20World%20Computing/MEQM0GBK/"&gt;Mercury Elite-AL Pro mini&lt;/a&gt;. This casing cost me about 60 euros or 80 dollars, including shipping. I admit that this solution is not cheap, but it does work, you won't void your warranty, and it is way cheaper than the Apple SSD.&lt;/p&gt;
&lt;p&gt;&lt;img alt="OWC casing" src="http://eshop.macsales.com/imgs/ndesc/owc_mercuryalpro_mini/owc_eliteal_mini_gall1.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="OWC connectors" src="http://eshop.macsales.com/imgs/ndesc/owc_mercuryalpro_mini/owc_eliteal_mini_gall3.jpg" /&gt; &lt;/p&gt;
&lt;p&gt;I've put my Intel SSD inside this casing and my Mac boots in about 15 seconds. After entering username and password, login is almost instantaneous. &lt;/p&gt;
&lt;h3&gt;Caveat&lt;/h3&gt;
&lt;p&gt;I encountered one problem with the OWC casing: random system freezes related to the external housing. Using the internal 1 TB hard drive, I had no problems. The cause of this issue is probably that power is not provided to the OWC casing after the system returns from sleep. &lt;/p&gt;
&lt;p&gt;I resolved this issue by connecting a special USB cable to the iMac and connecting the other end to the 5V DC Power Input on the OWC casing. After that, the daily random system freezes vanished. These cables can be obtained almost everywhere.&lt;/p&gt;</content><category term="Storage"></category></entry><entry><title>Switching away from Debian to Ubuntu LTS</title><link href="https://louwrentius.com/switching-away-from-debian-to-ubuntu-lts.html" rel="alternate"></link><published>2011-07-06T09:00:00+02:00</published><updated>2011-07-06T09:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-07-06:/switching-away-from-debian-to-ubuntu-lts.html</id><summary type="html">&lt;p&gt;Over the last couple of years, Debian Linux has released new stable versions about every two years. This pace is great for progress, but there is a serious problem. This problem is related to their support for older Debian stable versions. &lt;/p&gt;
&lt;p&gt;If you read the quote below from the &lt;a href="http://www.debian.org/security/faq#proposed-updates"&gt;Debian …&lt;/a&gt;&lt;/p&gt;</summary><content type="html">&lt;p&gt;Over the last couple of years, Debian Linux has released new stable versions about every two years. This pace is great for progress, but there is a serious problem. This problem is related to their support for older Debian stable versions. &lt;/p&gt;
&lt;p&gt;If you read the quote below from the &lt;a href="http://www.debian.org/security/faq#proposed-updates"&gt;Debian Security FAQ&lt;/a&gt; it will dawn upon you:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;    Q:  How long will security updates be provided?
    A:  The security team tries to support a stable distribution 
        for about one year after the next stable distribution has 
        been released, except when another stable distribution is 
        released within this year. It is not possible to support 
        three distributions; supporting two simultaneously is 
        already difficult enough.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Translation: after 3 years, you must apt-get dist-upgrade or be screwed: you will &lt;em&gt;no longer receive security updates&lt;/em&gt;!&lt;/p&gt;
&lt;p&gt;Apt-get dist-upgrade or die, so to speak.&lt;/p&gt;
&lt;p&gt;The problem is that the whole apt-get dist-upgrade thing is cool and all, but in my experience, it doesn't work. Even a simple web server gets screwed up badly. You need to diff all config files and spend quite some time reviewing all changes and fixing the broken stuff.&lt;/p&gt;
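&lt;p&gt;To give an idea of that cleanup work, here is a minimal sketch of hunting down the leftovers that dpkg creates during an upgrade (your modified configs are kept as *.dpkg-old files). A scratch directory stands in for a real /etc, so all paths here are made up:&lt;/p&gt;

```shell
# Minimal sketch: dpkg saves your modified configs as *.dpkg-old (and new
# package defaults as *.dpkg-dist). After an upgrade you have to find
# these files and diff each one against its active counterpart.
# A scratch directory stands in for /etc so the example is self-contained.
demo=/tmp/etc-demo
mkdir -p "$demo"
printf 'old setting\n' > "$demo/app.conf.dpkg-old"
printf 'new setting\n' > "$demo/app.conf"
leftovers=$(find "$demo" -name '*.dpkg-old')
echo "$leftovers"
```

&lt;p&gt;On a real system you would point find at /etc and review each diff by hand.&lt;/p&gt;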
&lt;p&gt;I'd rather spend the time creating a new fresh Debian installation based on the new stable release than on tinkering with the aftermath of an apt-get dist-upgrade. But that also takes a lot of effort. &lt;/p&gt;
&lt;p&gt;I want an operating system that will be supported for the next five years so I don't have to spend time on this upgrade process every 3 years for a system that is otherwise fully functional and rock solid. &lt;/p&gt;
&lt;p&gt;To tease you a little bit: Microsoft Windows operating systems are supported for ages. But that's not an option for me; I stick with Linux. Debian, however, does not provide this kind of extended support. &lt;/p&gt;
&lt;p&gt;&lt;a href="https://wiki.ubuntu.com/LTS"&gt;But Ubuntu does.&lt;/a&gt; &lt;/p&gt;
&lt;p&gt;Ubuntu releases LTS versions: Long Term Support versions that will receive security updates for at least 5 years.  &lt;/p&gt;
&lt;p&gt;All the goodness of Debian but with longer support. That is the reason my shop will switch to Ubuntu Server LTS.&lt;/p&gt;</content><category term="Linux"></category></entry><entry><title>Cannot access Windows guest within VMware Fusion when running vsphere client</title><link href="https://louwrentius.com/cannot-access-windows-guest-within-vmware-fusion-when-running-vsphere-client.html" rel="alternate"></link><published>2011-06-17T09:00:00+02:00</published><updated>2011-06-17T09:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-06-17:/cannot-access-windows-guest-within-vmware-fusion-when-running-vsphere-client.html</id><summary type="html">&lt;p&gt;Currently, I am running VMware ESXi 4.1 on a test system. To manage ESXi, you need the VSphere client, which is only available for the Windows platform. Therefore, I run VMware Fusion on my Mac to be able to access VSphere and manage my ESXi host. &lt;/p&gt;
&lt;p&gt;The trouble is …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Currently, I am running VMware ESXi 4.1 on a test system. To manage ESXi, you need the VSphere client, which is only available for the Windows platform. Therefore, I run VMware Fusion on my Mac to be able to access VSphere and manage my ESXi host. &lt;/p&gt;
&lt;p&gt;The trouble is that both ESXi and VMware Fusion use the control-alt shortcut to release a console. So as soon as you start using a console within the VSphere client which itself runs within VMware Fusion, you cannot get back to the Windows OS. &lt;/p&gt;
&lt;p&gt;You will have access to either Mac OS X or the ESXi guest. And to top it off, the mouse just completely disappears on Windows. &lt;/p&gt;
&lt;p&gt;To get around this problem, you need some way to send a control-alt sequence to the Windows guest without actually pressing control-alt. &lt;/p&gt;
&lt;p&gt;Fortunately, VMware fusion allows you to create a key mapping that allows this. &lt;/p&gt;
&lt;p&gt;Within the preference pane, the first tab is called Key Mappings, where you can create a new key mapping. For example, I mapped control+q to control-alt. This allows me to get out of the ESXi guest within the VSphere client without getting thrown back to Mac OS X. As a side effect, the mouse also showed up again.&lt;/p&gt;</content><category term="Uncategorized"></category></entry><entry><title>Performance monitoring using dstat</title><link href="https://louwrentius.com/performance-monitoring-using-dstat.html" rel="alternate"></link><published>2011-05-22T19:00:00+02:00</published><updated>2011-05-22T19:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-05-22:/performance-monitoring-using-dstat.html</id><summary type="html">&lt;p&gt;I'd like to introduce the utility '&lt;a href="http://dag.wieers.com/home-made/dstat/+"&gt;dstat&lt;/a&gt;'. Dstat provides detailed statistics
about what is currently happening on your Linux box.&lt;/p&gt;
&lt;p&gt;Dstat allows you to monitor the system load, disk throughput, disk I/O, network
bandwidth, and many more items. &lt;/p&gt;
&lt;p&gt;Dstat is so valuable because it provides you with all required information …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I'd like to introduce the utility '&lt;a href="http://dag.wieers.com/home-made/dstat/+"&gt;dstat&lt;/a&gt;'. Dstat provides detailed statistics
about what is currently happening on your Linux box.&lt;/p&gt;
&lt;p&gt;Dstat allows you to monitor the system load, disk throughput, disk I/O, network
bandwidth, and many more items. &lt;/p&gt;
&lt;p&gt;Dstat is so valuable because it provides you with all required information on a
single row that updates every so often. It is a great tool for debugging system
performance. An example of the output of dstat:&lt;/p&gt;
&lt;p&gt;&lt;a href="/static/images/dstat01.png"&gt;&lt;img alt="dstat" src="/static/images/dstat01-sm.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Click on this image to view the dstat output at its original size. It tells a small story about what happened on the system. &lt;/p&gt;
&lt;p&gt;As you can see, a copy action is going on between two disks. Then suddenly, some other process is writing data to the source disk. Both read and write performance drops. As soon as the additional writing process stops, the read and write performance of the still running copy process returns back to normal. &lt;/p&gt;
&lt;p&gt;The real benefit of this utility is that it clearly provides almost all information you might want to know in a single line. &lt;/p&gt;
&lt;p&gt;Among the most-used options are the disk throughput and network throughput columns. By default, dstat displays the aggregated throughput of all disks and network cards. You can override this behavior by specifying individual disks or network devices with the -D or -N option, like:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;dstat -D sda,sdb -N eth0,eth1 20
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Please note that the '20' argument at the end specifies how often the screen is updated; every row is thus the average over that 20-second time frame. &lt;/p&gt;
&lt;p&gt;For a full overview of all options that are available, issue the 'dstat --list' command.&lt;/p&gt;
&lt;p&gt;You may find it very useful.&lt;/p&gt;</content><category term="Linux"></category></entry><entry><title>Determining smartphone market share using wireless sniffing</title><link href="https://louwrentius.com/determining-smartphone-market-share-using-wireless-sniffing.html" rel="alternate"></link><published>2011-04-24T23:00:00+02:00</published><updated>2011-04-24T23:00:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-04-24:/determining-smartphone-market-share-using-wireless-sniffing.html</id><summary type="html">&lt;p&gt;I started a project to see if I could track smartphone users by sniffing for
wifi-clients. Most smartphones support wifi and most people don't bother disabling
wifi when they go outdoors&lt;a href="http://www.aircrack-ng.org/doku.php?id=airmon-ng"&gt;1&lt;/a&gt;. If wifi is left on, it is possible to detect these
smartphones and track their movement. To be …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I started a project to see if I could track smartphone users by sniffing for
wifi-clients. Most smartphones support wifi and most people don't bother disabling
wifi when they go outdoors&lt;a href="http://www.aircrack-ng.org/doku.php?id=airmon-ng"&gt;1&lt;/a&gt;. If wifi is left on, it is possible to detect these
smartphones and track their movement. To be able to track smartphones, all I had
to do is grab a computer with a wifi card and start to listen for nearby
smartphones.&lt;/p&gt;
&lt;p&gt;Over the course of 10 days I was able to detect around 590 unique wireless client
devices passing the vicinity of my house (near Amsterdam, The Netherlands). 
Please note that not all of those devices are smartphones, so I had to 
determine which are and which are not. I just used &lt;a href="http://www.aircrack-ng.org/doku.php?id=airmon-ng"&gt;airmon-ng&lt;/a&gt; to sniff 
for wifi clients.&lt;/p&gt;
&lt;p&gt;It is very easy to track a person if wifi is enabled on their smartphone, since
the phone broadcasts its unique identifier: its MAC-address. A MAC-address 
is as unique as a phone number, making it ideal for tracking people. A single
wifi-sniffing computer is not enough to follow people, but if you were to set up
a grid of wifi-sniffing devices, tracking people would be very easy. &lt;/p&gt;
&lt;p&gt;Then I got bored with this project and decided to see if I could get any additional
information out of my data set of 590 wifi clients. The fun thing is that the
first three octets of a MAC-address disclose the vendor of the device. For
instance, this MAC-address (made anonymous) belongs to HTC, so it is probably an 
HTC smartphone.&lt;/p&gt;
&lt;p&gt;90:27:E4:B7:XX:XX&lt;/p&gt;
&lt;p&gt;There is a whole &lt;a href="http://standards.ieee.org/develop/regauth/oui/oui.txt"&gt;list&lt;/a&gt; that shows which MAC-addresses belong to which
manufacturers. This allows me to create a list of vendors associated with the
MAC-addresses I captured. This is fun, because I can now count how many devices
I 'caught' from a particular vendor. &lt;/p&gt;
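&lt;p&gt;Extracting the vendor prefix from a captured address is a one-liner. A small sketch, using a made-up address in the HTC range mentioned above:&lt;/p&gt;

```shell
# The first three octets of a MAC address form the OUI, which identifies
# the hardware vendor. The address below is a hypothetical example.
mac="90:27:E4:B7:12:34"
oui=$(echo "$mac" | cut -d: -f1-3)
echo "$oui"
```

&lt;p&gt;Looking up the resulting prefix in the IEEE list then gives you the vendor name.&lt;/p&gt;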
&lt;p&gt;The majority of wireless devices are from Apple (64%). The second largest is HTC
(12%). That is an incredible difference between number one and number two. If
these numbers actually mean anything, they are very interesting.&lt;/p&gt;
&lt;p&gt;&lt;img alt="smartphoneshare" src="https://louwrentius.com/static/images/smartphonewifisniffing01.png" /&gt;&lt;/p&gt;
&lt;p&gt;I think this picture is telling, but its accuracy can be questioned. There are
some problems with my data set. For instance, maybe many people using a
particular brand of smartphone often disable wifi to conserve battery
life. &lt;/p&gt;
&lt;p&gt;Also, look who's missing from this list: Sony Ericsson. Are Dutch people not
using Sony Ericsson smartphones? I must say that I deliberately used a Sony
Ericsson smartphone to test my setup, and it detected the device without any
problem. So Sony Ericsson devices might just not be that popular.&lt;/p&gt;
&lt;p&gt;The main question is which conclusions can be drawn from this data: do iOS
users leave their wifi enabled more often than other smartphone users?&lt;/p&gt;
&lt;p&gt;It is difficult to say what this data actually means and how accurate it is, but
it may be an interesting technique nonetheless for real-life sampling of a
smartphone population.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;a href="http://www.aircrack-ng.org/doku.php?id=airmon-ng"&gt;1&lt;/a&gt; Unless your phone is so crappy that it won't hold a charge through the day with wifi enabled.&lt;/p&gt;</content><category term="Uncategorized"></category></entry><entry><title>Buying a new computer</title><link href="https://louwrentius.com/buying-a-new-computer.html" rel="alternate"></link><published>2011-04-03T15:43:00+02:00</published><updated>2011-04-03T15:43:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-04-03:/buying-a-new-computer.html</id><summary type="html">&lt;p&gt;*** Desktop or Laptop ***&lt;/p&gt;
&lt;p&gt;When deciding on which computer to buy, the first decision you have to make is about whether to go for a desktop or a laptop. There was a time that many believed that the desktop would become a niche product. Most people want a laptop because they …&lt;/p&gt;</summary><content type="html">&lt;p&gt;*** Desktop or Laptop ***&lt;/p&gt;
&lt;p&gt;When deciding which computer to buy, the first decision you have to make is whether to go for a desktop or a laptop. There was a time when many believed that the desktop would become a niche product. Most people want a laptop because they can take their computer everywhere they want to. Or at least lie on the couch while surfing.&lt;/p&gt;
&lt;p&gt;If you want a laptop that also must replace your desktop, it must have a good screen, be fast and have lots of storage. And laptops are so good these days, they can be desktop replacements. Even if you buy a small laptop, they can still be fast and when necessary, you just hook up an external display, mouse and keyboard.&lt;/p&gt;
&lt;p&gt;This is how I worked for many years and I still do. But I consider switching back to a desktop-based computer.&lt;/p&gt;
&lt;p&gt;*** Mac and PC ***&lt;/p&gt;
&lt;p&gt;I have two computers. The first one is an Intel-based Macbook (black) mid-2007, which is my main computer. The second one is a custom-built PC running Windows, solely for the purpose of gaming. This PC is based on Intel's Core i7 920 with 6 GB of RAM, so it's quite fast, even by today's standards.&lt;/p&gt;
&lt;p&gt;Both Macbook and PC are connected to a &lt;a href="http://www.aten.com/doc_data/pdf_dm/cs1784_specsheet_en.pdf"&gt;DVI KVM switch&lt;/a&gt;, that allows me to switch the keyboard, video and mouse between those systems. They thus both share my EIZO 24" (1920x1200) screen, keyboard and mouse. &lt;/p&gt;
&lt;p&gt;Apart from these systems, I own an iPhone and an iPad.&lt;/p&gt;
&lt;p&gt;*** Replacing my Macbook ***&lt;/p&gt;
&lt;p&gt;For the last half year, I have been getting annoyed with my Macbook. It just got slow. I put in a faster hard drive, but to no avail; the Macbook wasn't responsive enough. So I started thinking about replacing it. The first thing I realised is that I do not need a laptop. I have an iPhone and an iPad, so sending emails and reading web pages is no problem. I do not tend to work on my Macbook outside of my house, so I don't think I need the mobility of a laptop. I want a fast but quiet, affordable computer attached to a high-resolution screen. &lt;/p&gt;
&lt;p&gt;If you are a Mac user, there are three options: a Mac mini, an iMac or a Mac Pro. I consider the Mac mini too low on specs, and the Mac Pro is just way too expensive. That leaves the option of an iMac.&lt;/p&gt;
&lt;p&gt;*** Going for an iMac ***&lt;/p&gt;
&lt;p&gt;Since I love high-resolution displays, the iMac 27" does appeal to me very much. The screen is just gorgeous. And you can put an Intel Core i7 into it, so it can be fast too. Even the video card is decent enough to play most games, either on Mac OS X or on Windows using Bootcamp.&lt;/p&gt;
&lt;p&gt;Then after a discussion with a friend, I realised that the iMac would replace my PC. And my EIZO screen. New iMacs will probably arrive this summer, sporting even better processors, video cards, probably Thunderbolt. So I think I've made up my mind. I would still have my 'old' Macbook if I really needed any mobility. But I expect that once I have an iMac, both my PC and my Macbook are better off in the hands of somebody else. And I will need the money, an iMac is not cheap.&lt;/p&gt;
&lt;p&gt;Thinking about this all led me to believe that in general, I would rather go for a setup with an iMac and a very thin plus lightweight laptop like the Macbook Air, than to buy a chunky desktop-replacing laptop like a Macbook pro. The big advantage of the latter is that you always have all your stuff with you. But you pay a price. The first thing is the weight. I really dislike the weight of my Macbook at 2+ kilos. The second thing is the fact that you always have to (dis)connect all these cables every time you switch between "desktop mode" and "laptop mode". The third thing is the noise. When putting some strain on the processor and/or video card, laptops tend to get noisy. &lt;/p&gt;
&lt;p&gt;A desktop can give you true performance while still keeping things quiet and provide you with ample screen real estate. A netbook-like device such as the Macbook Air can give you true portability. I won't buy a Macbook Air, but this would be a really nice setup. The only thing you need to fix is the syncing problem. But there are services like Dropbox that may help you with that.&lt;/p&gt;
&lt;p&gt;*** Keeping my Macbook alive ***&lt;/p&gt;
&lt;p&gt;My PC has an Intel X25-M Postville 160 GB SSD installed as a fast boot disk. Since I couldn't stand the unresponsiveness of my Macbook anymore, I decided to put this SSD into the Macbook. This is cheaper than buying a new Mac, and I want to wait with that until the new iMac models arrive. &lt;/p&gt;
&lt;p&gt;Installing the SSD made my Macbook come alive again. The Macbook only sports a SATA 150 interface, but it's not about throughput. It's about random IO performance. I am very happy with the result; there is less of an urgent 'need' to replace the Macbook in the near future. I can calmly wait for the new iMacs and decide then what I want.&lt;/p&gt;
&lt;p&gt;*** Other people's thoughts ***&lt;/p&gt;
&lt;p&gt;There is also this &lt;a href="http://5by5.tv/talkshow/35"&gt;podcast&lt;/a&gt; where John Gruber of &lt;a href="http://daringfireball.net"&gt;Daring Fireball&lt;/a&gt; talks about his purchase of a Macbook Air and suggests that he will eventually replace his Macbook Pro 15" with an iMac. &lt;/p&gt;</content><category term="Hardware"></category></entry><entry><title>Why I do not use ZFS as a file system for my NAS</title><link href="https://louwrentius.com/why-i-do-not-use-zfs-as-a-file-system-for-my-nas.html" rel="alternate"></link><published>2011-02-28T21:00:00+01:00</published><updated>2011-02-28T21:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-02-28:/why-i-do-not-use-zfs-as-a-file-system-for-my-nas.html</id><summary type="html">&lt;p&gt;Many people have asked me why I do not use ZFS for my NAS storage box. This is a good question and I have multiple reasons why I do not use ZFS and probably never will.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;** A lot has changed since this article was first published. &lt;a href="https://louwrentius.com/why-i-do-use-zfs-as-a-file-system-for-my-nas.html"&gt;I do now recommend …&lt;/a&gt;&lt;/p&gt;</summary><content type="html">&lt;p&gt;Many people have asked me why I do not use ZFS for my NAS storage box. This is a good question and I have multiple reasons why I do not use ZFS and probably never will.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;** A lot has changed since this article was first published. &lt;a href="https://louwrentius.com/why-i-do-use-zfs-as-a-file-system-for-my-nas.html"&gt;I do now recommend using ZFS&lt;/a&gt;. I've also based my new &lt;a href="https://louwrentius.com/74tb-diy-nas-based-on-zfs-on-linux.html"&gt;71 TiB NAS&lt;/a&gt; on ZFS. **&lt;/p&gt;
&lt;hr /&gt;
&lt;h3&gt;The demise of Solaris&lt;/h3&gt;
&lt;p&gt;ZFS was invented by Sun for the Solaris operating system. When I was building my NAS, the only full-featured and production-ready implementation of ZFS was in Sun Solaris, and the only usable version of Solaris was Open Solaris. I dismissed Open Solaris because of the lack of hardware support and the small user base. This small user base is very important to me: more users means more testing and more support.  &lt;/p&gt;
&lt;p&gt;The FreeBSD implementation of ZFS only became stable in January 2010, 6 months after I built my NAS (summer 2009). So FreeBSD was not an option at that time.&lt;/p&gt;
&lt;p&gt;I am glad that I didn't go for Open Solaris, as Sun's new owner Oracle killed this operating system in August 2010. Although ZFS is open source software, I think it is effectively closed source already. The only open source version came through Open Solaris, and that software is now dead. Oracle can close the source of ZFS simply by not publishing the code of new features and updates; only their proprietary closed source Solaris platform will receive them. I must say that I don't have proof of this. However, Oracle seems to have no interest in open source software and almost seems hostile towards it.&lt;/p&gt;
&lt;h3&gt;FreeBSD and ZFS&lt;/h3&gt;
&lt;p&gt;So I built my NAS when ZFS was basically not around yet. But with FreeBSD, as of today, you can build a NAS based on ZFS, right? Sure, you can do that. I had no choice back then, but you do. But to be honest, I still would not use ZFS. As of March 1st, 2011, I would still go with Linux software RAID and XFS.&lt;/p&gt;
&lt;p&gt;The reasons are maybe not that great; I just provide them for you. It's up to you to decide.&lt;/p&gt;
&lt;p&gt;I sincerely respect the FreeBSD community and platform, but it is not for me. It may be that I just have much more experience with Debian Linux and don't like changing platforms. I find the installation process much more user-friendly, and I see year-over-year improvement on Debian; I see none in the 8.2 FreeBSD release. Furthermore, I'm just thrilled with the really big APT repository. Last, I cannot foresee future requirements, but I'm sure that those requirements have a higher chance of supporting Linux than BSD.  &lt;/p&gt;
&lt;p&gt;Furthermore, although FreeBSD has a community, it is relatively small. Resources on Debian and Ubuntu are abundant. I consider Linux a safer bet, also regarding hardware support. My NAS must be simple to build and rock stable. I don't want a day job just getting my NAS to work and maintaining it.&lt;/p&gt;
&lt;p&gt;If you are experienced with FreeBSD, by all means, build a ZFS setup if you want. If you have to learn either BSD or Linux, I consider knowledge about Linux more valuable in the long run.&lt;/p&gt;
&lt;h3&gt;ZFS is a hype&lt;/h3&gt;
&lt;p&gt;This is the part where people may strongly disagree with me. I admire ZFS, but I consider it total overkill for home usage. I have seen many people talking about ZFS like Apple users about Apple products. It is a hype. Don't get me wrong. As a long-time Mac user I'm also mocking myself here. I get the impression that ZFS is regarded as the second coming of Jesus Christ. It solves problems that I didn't know of in the first place. The only thing it can't do is beat Chuck Norris. But it does vacuum your house if you ask it to.&lt;/p&gt;
&lt;p&gt;As a side note, one of the things I do not like about ZFS is the terminology. It is just RAID 0, RAID 1, RAID 5 or 6, but no, the ZFS people had to use different, cooler-sounding terms like RAID-Z. But it is basically the same thing. &lt;/p&gt;
&lt;p&gt;Okay, now back to the point: nobody at home needs ZFS. You may argue that nobody needs 18 TB of storage space at home, but that's another story. Running ZFS means using FreeBSD or an out-of-the-box NAS solution based on FreeBSD. And there aren't any other relevant options. &lt;/p&gt;
&lt;p&gt;Now, let's take a look at the requirements of most NAS builders. They want as much storage as possible at the lowest possible price. That's about it. Many people want to add additional disk drives as their demand for storage capacity increases. So people buy a solution with a capacity for, say, 10 drives, start out with 4 drives and add disks when they need them.&lt;/p&gt;
&lt;p&gt;Linux allows you to 'grow' or 'expand' an array, just like most hardware RAID solutions. As far as I know, this feature is still not available in ZFS. Maybe this feature is not relevant in the enterprise world, but it is for most people who actually have to think about how they spend their money.&lt;/p&gt;
&lt;p&gt;Furthermore, I don't understand why I can run any RAID array with decent performance with maybe 512 MB of RAM, while ZFS would just totally crash with so little memory installed. You seem to need at least 2 GB to prevent crashing your system, and more is recommended if you want to prevent it from crashing under high load. I really can't wrap my mind around this. Honestly, I think this is insane.&lt;/p&gt;
&lt;p&gt;ZFS does great things. Management is easy. Many features are cool: snapshots, among other things. But most features are just not required for a home setup. ZFS seems to solve a lot of 'scares' that I've only heard about since ZFS came along, like the RAID 5/6 write hole, where others just hook up a UPS in the first place (if you don't use a UPS on your NAS, you might as well try your luck running RAID 0). The most interesting feature to me, though, is that ZFS checksums all data and detects corruption. It sounds useful, but how high are the chances that you actually need it? &lt;/p&gt;
&lt;p&gt;If ZFS were available under Linux as a native option instead of through FUSE, I would probably consider using it, provided I knew in advance that I would not want to expand or grow my array in the future. But I am pessimistic about this scenario. It is not in Oracle's interest to change the license on ZFS to allow Linux to incorporate support for it in the kernel.&lt;/p&gt;
&lt;p&gt;To build my 20 disk RAID array, I had to puzzle with my drives to keep all data while migrating to the new system. Some of the 20 disks came from my old NAS system, so I had to repeatedly grow the array and add disks, which I couldn't have done with ZFS.&lt;/p&gt;
&lt;h3&gt;Why I chose to build this setup&lt;/h3&gt;
&lt;p&gt;The array is just a single 20 disk RAID 6 volume created with a single MDADM command. The second command I issued to make my array operational was to format this new 'virtual' disk with XFS, which just takes seconds. A UPS protects the systems against power failure and I'm happy with it for 1.5 years now. Never had any problems. Never had a disk failure... A single RAID 6 array is simple and fast. XFS is old but reliable. My whole setup is just this: extremely simple. I just love simple.  &lt;/p&gt;
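&lt;p&gt;For the curious, the commands involved look roughly like this. This is a dry-run sketch with hypothetical device names; the commands are only printed here, not executed:&lt;/p&gt;

```shell
# Dry-run sketch of the workflow described above: one mdadm command to
# create the 20-disk RAID 6 array, mkfs.xfs to format it, and the
# two-step grow used later when adding a 21st disk. Echoed, not executed.
create="mdadm --create /dev/md0 --level=6 --raid-devices=20 /dev/sd[a-t]"
mkfs="mkfs.xfs /dev/md0"
add="mdadm /dev/md0 --add /dev/sdu"
grow="mdadm --grow /dev/md0 --raid-devices=21"
printf '%s\n' "$create" "$mkfs" "$add" "$grow"
```

&lt;p&gt;After growing the array, the file system itself still has to be expanded, with xfs_growfs in the case of XFS.&lt;/p&gt;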
&lt;p&gt;My array does not use LVM, so I cannot create snapshots or stuff like that. But I don't need it. I just want so much storage that I don't have to think about it. And I think most people just want some storage share with lots of space. In that case, you don't need LVM or stuff like that. Just an array with a file system on top of it. If you can grow the array and the file system, you're set for the future. Speaking about the future: please note that on Linux, XFS is the only file system that is capable of addressing more than 16 TB of data. EXT4 is still limited to 16 TB. &lt;/p&gt;
&lt;p&gt;For the future, my hope is that BTRFS will become a modern, viable alternative to ZFS.&lt;/p&gt;</content><category term="ZFS"></category></entry><entry><title>Thunderbolt, a cheap high-speed storage interconnect?</title><link href="https://louwrentius.com/thunderbolt-a-cheap-high-speed-storage-interconnect.html" rel="alternate"></link><published>2011-02-25T21:30:00+01:00</published><updated>2011-02-25T21:30:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-02-25:/thunderbolt-a-cheap-high-speed-storage-interconnect.html</id><summary type="html">&lt;p&gt;Intel and Apple released &lt;a href="http://www.macworld.com/article/158145/2011/02/thunderbolt_what_you_need_to_know.html"&gt;Thunderbolt&lt;/a&gt;, a high-speed (10 Gigabit/s) interface that seems to replace both USB and Firewire. It is mainly targeted at end-user systems, allowing peripherals to be connected to a computer with just a single cable. Thunderbolt devices, like external hard drives or displays can be daisy chained …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Intel and Apple released &lt;a href="http://www.macworld.com/article/158145/2011/02/thunderbolt_what_you_need_to_know.html"&gt;Thunderbolt&lt;/a&gt;, a high-speed (10 Gigabit/s) interface that seems to replace both USB and Firewire. It is mainly targeted at end-user systems, allowing peripherals to be connected to a computer with just a single cable. Thunderbolt devices, like external hard drives or displays, can be daisy-chained, like Firewire. In short, Thunderbolt removes the cable clutter and adds a significant speed bonus.  &lt;/p&gt;
&lt;p&gt;For NAS owners and storage enthusiasts, this is also a very interesting technology. Just like Firewire, it seems to support computer-to-computer communication. So Thunderbolt could be used as a high-speed link between your homegrown NAS device and your PC workstation. Or between two storage / server systems. &lt;/p&gt;
&lt;p&gt;&lt;img alt="Thunderbolt" src="/static/images/thunderbolt01.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;The only downside to Thunderbolt is the maximum cable length of 3 meters between devices. Thunderbolt doesn't seem to be the ideal replacement for your Gigabit network, but if most of your computer systems are close to each other, it might be very interesting.&lt;/p&gt;</content><category term="Storage"></category></entry><entry><title>Setting up a Jabber instant messaging server</title><link href="https://louwrentius.com/setting-up-a-jabber-instant-messaging-server-_http-title-site-doesnt-have-a-title-texthtml-charsetutf-8.html" rel="alternate"></link><published>2011-02-05T22:00:00+01:00</published><updated>2011-02-05T22:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-02-05:/setting-up-a-jabber-instant-messaging-server-_http-title-site-doesnt-have-a-title-texthtml-charsetutf-8.html</id><summary type="html">&lt;p&gt;I wanted to see how difficult it is to set up an instant messaging server based on open source software. Now I know that it is &lt;em&gt;very&lt;/em&gt; easy, unless you are stubborn and do things your own way. In this example, I'm setting up a small IM server that is only …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I wanted to see how difficult it is to set up an instant messaging server based on open source software. Now I know that it is &lt;em&gt;very&lt;/em&gt; easy, unless you are stubborn and do things your own way. In this example, I'm setting up a small IM server that is only for internal company use, but there is no difference if you want to expose the server to the internet.&lt;/p&gt;
&lt;p&gt;First, a bit of background information. There is an open IETF standard for instant messaging called "XMPP", which stands for "Extensible Messaging and Presence Protocol". This protocol originated as part of the open source Jabber IM server software.&lt;/p&gt;
&lt;h3&gt;Setting up ejabberd&lt;/h3&gt;
&lt;p&gt;I decided to use &lt;a href="http://www.ejabberd.im/"&gt;ejabberd&lt;/a&gt; which is part of the Debian software archive. It is written in Erlang, but I can live with that. This blog post documents how to set up the IM server with two accounts that can chat with each other. The configuration I use also enforces the use of SSL/TLS, so authentication and all messages are encrypted.&lt;/p&gt;
&lt;p&gt;Steps to get things running:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;apt-get update&lt;/li&gt;
&lt;li&gt;apt-get install ejabberd&lt;/li&gt;
&lt;li&gt;cd /etc/ejabberd&lt;/li&gt;
&lt;li&gt;edit ejabberd.cfg&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Change the following line to your needs: &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="c"&gt;%% Hostname&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;hosts&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;localhost&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;jabber.domain.local&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;]}.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Also enforce the use of encryption like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;starttls, {certfile, &amp;quot;/etc/ejabberd/ejabberd.pem&amp;quot;}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Must be changed to:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;starttls_required, {certfile, &amp;quot;/etc/ejabberd/server.pem&amp;quot;}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Generating a custom SSL certificate&lt;/h3&gt;
&lt;p&gt;Security-wise, it is &lt;em&gt;very&lt;/em&gt; wrong to use the default SSL certificate as provided by the installation package as the server certificate. Anyone with access to this key material can decrypt encrypted communication. So you must generate your own server certificate. This is also required because IM clients may verify that the domain name they connect to matches the domain name used within the certificate. If there is no match, the client will not work or will at least complain.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;openssl req -new -x509 -newkey rsa:2048 -days 365 -keyout privkey.pem \ 
-out server.pem
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This creates a self-signed certificate (server.pem) and a private key (privkey.pem), valid for a year. Feel free to make the certificate valid for a longer period; this is just an example. You will have to fill in some details; the most important one is this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Common Name (eg, YOUR name) []:jabber.domain.local
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You are forced to set a password on the private key, but we want to remove this because otherwise the ejabberd service will not start automatically. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;openssl rsa -in privkey.pem -out privkey.pem
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Just enter the password you entered earlier and you're done. We now have separate files for the public and private key, but ejabberd expects them in one single file. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;cat privkey.pem &amp;gt;&amp;gt; server.pem
rm privkey.pem
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Set proper file system permissions:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;chown ejabberd server.pem
chmod 600 server.pem
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Now we are done. Restart ejabberd to use the new settings.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;/etc/init.d/ejabberd restart
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Security caveats&lt;/h3&gt;
&lt;p&gt;Please note that the ejabberd daemon provides a small built-in web interface for administration purposes on TCP port 5280. By default it is not protected by SSL or TLS and cannot be used unless you add users to this part of the configuration file:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;{acl, admin, {user, &amp;quot;&amp;quot;, &amp;quot;localhost&amp;quot;}}.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Example:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;{acl, admin, {user, &amp;quot;admin&amp;quot;, &amp;quot;localhost&amp;quot;}}.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The user must also be registered as a normal IM user as described in the next section.&lt;/p&gt;
&lt;p&gt;Warning: it seems to me that this interface is not very secure, for example, there is no logout button. &lt;/p&gt;
&lt;p&gt;Furthermore, you might consider disabling the following section:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ejabberd_s2s_in
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This prevents your IM server from communicating with other IM servers (&lt;a href="http://www.ejabberd.im/disable-s2s"&gt;source&lt;/a&gt;). But we are not finished. When you install ejabberd, some other services are also started on the system. It is thus very important that you configure your firewall to block these ports. This small nmap port scan output shows some interesting services:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="mf"&gt;4369&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;tcp&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kr"&gt;open&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="n"&gt;epmd&lt;/span&gt;&lt;span class="err"&gt;?&lt;/span&gt;
&lt;span class="mf"&gt;5222&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;tcp&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kr"&gt;open&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="n"&gt;jabber&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="n"&gt;ejabberd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Protocol&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;1.0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="mf"&gt;5269&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;tcp&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kr"&gt;open&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="n"&gt;jabber&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="n"&gt;ejabberd&lt;/span&gt;
&lt;span class="mf"&gt;5280&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;tcp&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kr"&gt;open&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;ejabberd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;admin&lt;/span&gt;
&lt;span class="err"&gt;|&lt;/span&gt;&lt;span class="n"&gt;_http&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;methods&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;No&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Allow&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ow"&gt;or&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Public&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;header&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;OPTIONS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;400&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="mf"&gt;36784&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;tcp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kr"&gt;open&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="n"&gt;unknown&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Ports 4369, 36784 and 5280 should be blocked by your firewall and not be accessible from the internet.&lt;/p&gt;
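&lt;p&gt;As a sketch, such firewall rules could look as follows with iptables (interface names and policies depend on your setup; also note that the high port, 36784 in this scan, is chosen randomly by the Erlang runtime at each start, so a default-deny firewall policy is more robust than blocking that specific port):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;iptables -A INPUT -p tcp --dport 5222 -j ACCEPT
iptables -A INPUT -p tcp --dport 5269 -j ACCEPT
iptables -A INPUT -p tcp --dport 4369 -j DROP
iptables -A INPUT -p tcp --dport 5280 -j DROP
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;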
&lt;h3&gt;Adding users&lt;/h3&gt;
&lt;p&gt;It is now time to create some IM users. A user account always looks like an email address, for example:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;peter&lt;/span&gt;&lt;span class="nv"&gt;@jabber&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;domain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;local&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;To add accounts, use the ejabberdctl utility:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;ejabberdctl&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;register&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;peter&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;jabber&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;domain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;local&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Please note that passwords entered on the command line end up in your bash_history file, so beware. Also, users running ps aux may be able to see the command for a brief moment. So be careful.&lt;/p&gt;
&lt;p&gt;By registering two accounts, you can test your new server. &lt;/p&gt;
&lt;h3&gt;Additional resources&lt;/h3&gt;
&lt;p&gt;Nice to know: the domain names used for your accounts can differ from the domain used for the IM server.&lt;/p&gt;
&lt;p&gt;If you have a Windows Active Directory domain, you could consider authenticating your users against LDAP.&lt;/p&gt;
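&lt;p&gt;As a rough sketch, LDAP authentication is configured in ejabberd.cfg with options along these lines (the server name and base DN here are placeholders; check the ejabberd documentation for the exact options of your version):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;{auth_method, ldap}.
{ldap_servers, [&amp;quot;dc01.domain.local&amp;quot;]}.
{ldap_base, &amp;quot;ou=users,dc=domain,dc=local&amp;quot;}.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;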
&lt;p&gt;Other resources:
- &lt;a href="http://www.ejabberd.im/tuto-install-ejabberd"&gt;tutorial 1&lt;/a&gt;
- &lt;a href="http://sysmonblog.co.uk/2008/06/ot-installing-ejabberd-on-debian-ubuntu.html"&gt;tutorial 2&lt;/a&gt;&lt;/p&gt;</content><category term="Uncategorized"></category></entry><entry><title>Parallel / distributed password cracking with John the Ripper and MPI</title><link href="https://louwrentius.com/parallel-distributed-password-cracking-with-john-the-ripper-and-mpi.html" rel="alternate"></link><published>2011-02-05T20:00:00+01:00</published><updated>2011-02-05T20:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-02-05:/parallel-distributed-password-cracking-with-john-the-ripper-and-mpi.html</id><summary type="html">&lt;p&gt;&lt;em&gt;This article has been updated to reflect the changes for John version 1.7.8 as released in june 2011.&lt;/em&gt;
&lt;em&gt;The most important change is that MPI support is now integrated into the jumbo patch.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The original &lt;a href="http://www.openwall.com/john/"&gt;John the Ripper&lt;/a&gt; off-line password cracker only uses a single processor …&lt;/p&gt;</summary><content type="html">&lt;p&gt;&lt;em&gt;This article has been updated to reflect the changes for John version 1.7.8 as released in June 2011.&lt;/em&gt;
&lt;em&gt;The most important change is that MPI support is now integrated into the jumbo patch.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The original &lt;a href="http://www.openwall.com/john/"&gt;John the Ripper&lt;/a&gt; off-line password cracker only uses a single processor (core) when performing brute-force or dictionary attacks. &lt;/p&gt;
&lt;p&gt;JtR does not use multiple cores (or machines). However, there is a patch available that enables support for &lt;a href="http://en.wikipedia.org/wiki/Message_Passing_Interface"&gt;MPI&lt;/a&gt;. MPI allows you to distribute the workload of a program across multiple instances, and thus across multiple cores or even machines, but your application must support it. &lt;/p&gt;
&lt;p&gt;The fun thing with MPI is that it is very easy to create a password cracking cluster. But for now let's just focus on using all these unused CPU cores to help us with cracking passwords. &lt;/p&gt;
&lt;p&gt;I am using Ubuntu and Debian Linux as my platform, but Mac OS X also works perfectly. &lt;/p&gt;
&lt;h3&gt;install MPI support&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Note: Mac users have MPI support installed by default and don't need to install this.&lt;/em&gt; &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;apt-get install libopenmpi-dev openmpi-bin&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;download John the Ripper with extra patches&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Get the john-1.7.8-jumbo-2.tar.gz file.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;extract John &amp;amp; edit the Make file&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;tar xzf john-1.7.8-jumbo-2.tar.gz&lt;/li&gt;
&lt;li&gt;cd john-1.7.8-jumbo-2/src&lt;/li&gt;
&lt;li&gt;uncomment the following lines in the Makefile:&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;CC = mpicc -DHAVE_MPI -DJOHN_MPI_BARRIER -DJOHN_MPI_ABORT
MPIOBJ = john-mpi.o
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Compile John the Ripper with MPI support&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Run make and choose the most appropriate processor architecture. Example:&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nv"&gt;make&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;linux&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nv"&gt;x86&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nv"&gt;bit&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;i386&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;make&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;linux&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nv"&gt;x86&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nv"&gt;sse2&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nv"&gt;bit&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;i386&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;make&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;macosx&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nv"&gt;x86&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;bit&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;Mac&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;OS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;X&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Test john the Ripper&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;cd ../run&lt;/li&gt;
&lt;li&gt;./john --test&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Look at the benchmark values of the first test and remember them. Now let's see if MPI does any better:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;mpirun -np [number of processor (virtual) cores] ./john --test&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Let's assume that you have an iMac 27" with a Core i7 with 4 real cores and hyper-threading enabled. This will provide a total of 8 virtual cores. &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;mpirun -np 8 ./john --test&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you notice a significant increase in performance, you know that MPI is working properly.&lt;/p&gt;
&lt;h3&gt;Some benchmarks without and with MPI support (Traditional DES)&lt;/h3&gt;
&lt;p&gt;These are the benchmark test results when using a single core on an old Nehalem Core i7 920:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Many salts: 2579K c/s real, 2579K c/s virtual
Only one salt:  2266K c/s real, 2266K c/s virtual
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;These are the benchmark test results when using MPI and thus all 8 cores:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Many salts: 11015K c/s real, 11015K c/s virtual
Only one salt:  9834K c/s real, 9834K c/s virtual
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;And just look at the performance improvement when we overclock from 2.66 to 3.6 GHz:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Many salts: 15004K c/s real, 15004K c/s virtual
Only one salt:  13232K c/s real, 13232K c/s virtual
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;That is very significant. Now admire how the Core i7 920 @ 3.6 GHz is blown away by the Sandy Bridge based Core i7-2600 @ 3.4 GHz:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Many salts: 20007K c/s real, 20209K c/s virtual
Only one salt:  16881K c/s real, 16881K c/s virtual
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Setting up an MPI cluster&lt;/h3&gt;
&lt;p&gt;MPI clustering is based on using SSH keys. There is a single master that uses all nodes to perform the computation. The nodes are put into a text file nodes.txt like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;node01  slots=2
node02  slots=2
node03  slots=4 
node04  slots=4
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;In this example, nodes 1 and 2 are dual-core systems, while nodes 3 and 4 have quad-core processors. You must create an account on all your nodes with the same name as the account used on the master when running the master process. You must also generate a private SSH key and distribute the public part as the authorized_keys file to all nodes. This is outside the scope of this post. Please note that the SSH private key should be loaded with ssh-agent if it is protected with a passphrase, or do not configure a passphrase on the key. If you do not use a passphrase, understand that anyone with access to the key can access all nodes. &lt;/p&gt;
&lt;p&gt;You may also have to put the nodexx entries in your /etc/hosts file if the names cannot be resolved by DNS.&lt;/p&gt;
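&lt;p&gt;For example, with entries like these in /etc/hosts (the IP addresses are placeholders):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;192.168.1.11  node01
192.168.1.12  node02
192.168.1.13  node03
192.168.1.14  node04
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;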
&lt;p&gt;Now I'm assuming that you are able to SSH into all nodes without requiring a password, i.e. SSH is set up properly. &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mpirun -np 12 -hostfile nodes.txt ./john --test
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Now you should see increased performance, beyond the limit of a single host.&lt;/p&gt;
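&lt;p&gt;An actual cracking run across the cluster works just like a local run, prefixed with mpirun. For example, a dictionary attack (the wordlist and password file names are placeholders):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mpirun -np 12 -hostfile nodes.txt ./john --wordlist=password.lst passwd.txt
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;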
&lt;h3&gt;Some benchmarks&lt;/h3&gt;
&lt;p&gt;I ran a password cracking test on some data using a large dictionary. These are the performance differences when using all 8 cores of my Core i7 920 instead of just one:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;single&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;04&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;11192&lt;/span&gt;&lt;span class="n"&gt;K&lt;/span&gt;
&lt;span class="n"&gt;mpi&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;01&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;26&lt;/span&gt;&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;46568&lt;/span&gt;&lt;span class="n"&gt;K&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The performance increase is significant. &lt;/p&gt;</content><category term="Security"></category></entry><entry><title>The downside of 120 Mbit broadband internet</title><link href="https://louwrentius.com/the-downside-of-120-mbit-broadband-internet.html" rel="alternate"></link><published>2011-01-30T20:00:00+01:00</published><updated>2011-01-30T20:00:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-01-30:/the-downside-of-120-mbit-broadband-internet.html</id><summary type="html">&lt;p&gt;My Dutch ISP Ziggo provides internet access through DOCSIS cable modems. They are now capable of providging 120 Mbit downstream and 10 Mbit upstream, for an affordable price. &lt;/p&gt;
&lt;p&gt;In a way this is mind boggling. Most people have 100 Mbit home networks that are not capable of handling full capacity …&lt;/p&gt;</summary><content type="html">&lt;p&gt;My Dutch ISP Ziggo provides internet access through DOCSIS cable modems. They are now capable of providing 120 Mbit downstream and 10 Mbit upstream, for an affordable price. &lt;/p&gt;
&lt;p&gt;In a way this is mind boggling. Most people have 100 Mbit home networks that are not capable of handling full capacity. You need at least gigabit network connectivity on your router and internal network.  &lt;/p&gt;
&lt;p&gt;But there is a problem with all this bandwidth mayhem: &lt;/p&gt;
&lt;p&gt;It is useless. &lt;/p&gt;
&lt;p&gt;The only time I see the full 120 Mbit in use is when I do a speed test, or when my Mac is downloading system updates. Regular downloads (ISOs, big files from web pages), Usenet, BitTorrent: they cannot deliver content at the speed my connection is capable of.&lt;/p&gt;
&lt;p&gt;The bottleneck is no longer the connection to the home. The whole internet is now the bottleneck. The content providers are the bottleneck. They cannot seem to cope with this huge increase in client-side bandwidth capacity. They often seem to cap users at a specific download rate that is way below full capacity. Although the connectivity is relatively cheap, if you can't use it, why pay for it? So downgrading to, say, 50 Mbit until content providers are able to handle higher speeds seems the smartest thing to do. &lt;/p&gt;
&lt;p&gt;I must say that I think that content providers are the weakest link. But I cannot be sure. It may be possible that the ISP network, especially their transit links, are the limiting factor. If anyone knows more about this, I'm interested. &lt;/p&gt;</content><category term="Networking"></category></entry><entry><title>'Improved' image gallery for Blogofile</title><link href="https://louwrentius.com/improved-image-gallery-for-blogofile.html" rel="alternate"></link><published>2011-01-21T22:47:00+01:00</published><updated>2011-01-21T22:47:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-01-21:/improved-image-gallery-for-blogofile.html</id><summary type="html">&lt;p&gt;This blog is just static HTML pages that are generated using &lt;a href="http://blogofile.com"&gt;Blogofile&lt;/a&gt;. The Blogofile website sports a small image gallery that has been written to illustrate how to create your own 'controllers' or Blogofile plugins. &lt;/p&gt;
&lt;p&gt;My Python skills are horrible buy I managed to improve a bit on this example …&lt;/p&gt;</summary><content type="html">&lt;p&gt;This blog is just static HTML pages that are generated using &lt;a href="http://blogofile.com"&gt;Blogofile&lt;/a&gt;. The Blogofile website sports a small image gallery that has been written to illustrate how to create your own 'controllers' or Blogofile plugins. &lt;/p&gt;
&lt;p&gt;My Python skills are horrible, but I managed to improve a bit on this example photo gallery. If you click on the '&lt;a href="/pictures"&gt;Photos&lt;/a&gt;' menu at the top of this blog, you can see an example of the result.  &lt;/p&gt;
&lt;h3&gt;What is the improvement?&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Basically one thing: it creates a gallery by recursively traversing a directory structure containing pictures. &lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;How to install?&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Just create a folder where you will store pictures. &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add this to your _config.py&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;code&gt;photo_gallery = controllers.photo_gallery&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;photo_gallery.enabled = True&lt;/code&gt; &lt;/p&gt;
&lt;p&gt;&lt;code&gt;photo_gallery.path = "pictures"&lt;/code&gt;&lt;/p&gt;
&lt;ol start="3"&gt;
&lt;li&gt;put &lt;a href="/files/photo_gallery.py"&gt;this file&lt;/a&gt; into _controllers/photo_gallery&lt;/li&gt;
&lt;li&gt;change into this directory and create a symlink like:&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;code&gt;ln -s photo_gallery.py __init__.py&lt;/code&gt;&lt;/p&gt;
&lt;ol start="5"&gt;
&lt;li&gt;download the example templates &lt;a href="/files/photo-templates.tgz"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Additional info&lt;/h3&gt;
&lt;p&gt;This photo gallery generator ignores symlinks. However, it looks for a special file name in every directory:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;showcase.jpg
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This file can be a symlink to one of the photos. It is used as the identifying picture for a folder with pictures. Every directory should contain at least a single showcase.jpg.&lt;/p&gt;
&lt;p&gt;If you have a folder called Europe containing two sub folders England and France, you can create a showcase.jpg symlink to one of the underlying showcase files of one of the folders. This picture will then be used to 'identify' the "Europe" folder.&lt;/p&gt;
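&lt;p&gt;For example, to let a picture from the England folder identify the Europe folder (assuming the directory tree shown below):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;cd Europe
ln -s England/showcase.jpg showcase.jpg
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;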
&lt;p&gt;Example directory tree:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;./Europe
./Europe/England
./Europe/England/London
./Europe/England/London/showcase.jpg
./Europe/England/showcase.jpg
./Europe/France
./Europe/France/Nice
./Europe/France/Nice/showcase.jpg
./Europe/France/showcase.jpg
./Europe/Netherlands
./Europe/Netherlands/Enkhuizen
./Europe/Netherlands/Enkhuizen/showcase.jpg
./Europe/Netherlands/showcase.jpg
./Europe/showcase.jpg
./USA
./USA/New York
./USA/New York/showcase.jpg
./USA/showcase.jpg
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</content><category term="Uncategorized"></category></entry><entry><title>Firewire: the forgotten security risk</title><link href="https://louwrentius.com/firewire-the-forgotten-security-risk.html" rel="alternate"></link><published>2011-01-18T20:43:00+01:00</published><updated>2011-01-18T20:43:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-01-18:/firewire-the-forgotten-security-risk.html</id><summary type="html">&lt;p&gt;The battle between Firewire and USB has been won by USB, but Firewire is still
around. It is not that prevalent; cheap computers lack Firewire, but they often
have a PCMCIA slot.&lt;/p&gt;
&lt;p&gt;The thing is this: Firewire allows direct access to all RAM of your computer.
An attacker can:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;unlock …&lt;/li&gt;&lt;/ul&gt;</summary><content type="html">&lt;p&gt;The battle between Firewire and USB has been won by USB, but Firewire is still
around. It is not that prevalent; cheap computers lack Firewire, but they often
have a PCMCIA slot.&lt;/p&gt;
&lt;p&gt;The thing is this: Firewire allows direct access to all RAM of your computer.
An attacker can:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;unlock your screensaver without a valid password;&lt;/li&gt;
&lt;li&gt;read the contents of documents or files present in memory;&lt;/li&gt;
&lt;li&gt;defeat FDE (Full Disk Encryption) like TrueCrypt or BitLocker;&lt;/li&gt;
&lt;li&gt;do nasty things with your computer limited only by skill and imagination.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So if you leave your system unattended even for a short duration, consider
your system compromised. That is, if you are a celebrity or another
high-profile target; otherwise, don't worry too much: it is a risky attack,
because the attacker needs physical access.&lt;/p&gt;
&lt;p&gt;The attack works by connecting the attacker's laptop to yours with a
Firewire cable. Firewire must be enabled in the BIOS of the victim, which is 
the default in most cases. Even if Firewire is disabled, the
PCMCIA slot can still be used to access RAM: the attacker inserts a PCMCIA 
Firewire controller and attacks your laptop through it.&lt;/p&gt;
&lt;p&gt;Basically, the attacker makes the attacking laptop pretend to be an iPod, a
type of device that is allowed to access memory through DMA. This gives the
attacker read and write access to memory. The possibilities are endless;
injecting executable code to compromise the laptop is not far-fetched. &lt;/p&gt;
&lt;p&gt;Many people may already have stopped reading because this attack is very old
and widely publicised: it dates back to 2003/2004. However, if you are not
familiar with this attack and want to know more about it, &lt;a href="http://www.hermann-uwe.de/blog/physical-memory-attacks-via-firewire-dma-part-1-overview-and-mitigation"&gt;visit this site.&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;The mitigation&lt;/h3&gt;
&lt;p&gt;As far as I know, the only way to prevent this type of attack is to
disable both Firewire and PCMCIA support in the BIOS. It is smart to protect
the BIOS with a strong password, so these options cannot be re-enabled.&lt;/p&gt;
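On Linux, one additional software-level mitigation is to blacklist the Firewire kernel modules so the DMA-capable driver never loads at all. A sketch of such a modprobe configuration; the first two module names belong to the newer "juju" Firewire stack, the last two to the older stack, and which set applies depends on your kernel:

```
# /etc/modprobe.d/blacklist-firewire.conf
blacklist firewire-ohci
blacklist firewire-sbp2
blacklist ohci1394
blacklist sbp2
```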
&lt;p&gt;I read somewhere that the most recent Apple MacBook models are no longer
vulnerable, but I may well be misremembering that.&lt;/p&gt;
regarding the case.&lt;/p&gt;
&lt;p&gt;In my opinion, most computer case designers have absolutely &lt;a href="http://www.google.com/search?q=Aerocool%C2%A0Vx-e"&gt;no taste&lt;/a&gt;.
Antec and Lian Li do create nice cases, but not that spectacular. And Lian Li is
just very expensive. A fairly new …&lt;/p&gt;</summary><content type="html">&lt;p&gt;If you are planning on building your own NAS box, I have some suggestions
regarding the case.&lt;/p&gt;
&lt;p&gt;In my opinion, most computer case designers have absolutely &lt;a href="http://www.google.com/search?q=Aerocool%C2%A0Vx-e"&gt;no taste&lt;/a&gt;.
Antec and Lian Li do create nice cases, but not that spectacular. And Lian Li is
just very expensive. A fairly new company called &lt;a href="http://www.fractal-design.com/"&gt;Fractal Design&lt;/a&gt; seems to be
different. To me, their designs are a breath of fresh air. For instance, if you
want to build just a small NAS system and you think that 6 drives is enough,
take a look at this beauty:&lt;/p&gt;
&lt;p&gt;&lt;a href="http://www.fractal-design.com/?view=product&amp;amp;category=2&amp;amp;prod=42"&gt;&lt;img alt="small nas box" src="/static/images/fractal-array.jpg" /&gt;&lt;/a&gt; &lt;/p&gt;
&lt;p&gt;Just a small box with room for 6 drives. No hot swap stuff, but for personal
use, that is not that important I guess. Unfortunately, I also found this
&lt;a href="http://www.bit-tech.net/hardware/cases/2010/07/14/fractal-design-array-r2-m-itx-case-review"&gt;review&lt;/a&gt; which also agrees that the case is pretty but there are some
drawbacks. Too bad. I would still want to see more of such cases, but maybe with
a better designed interior.&lt;/p&gt;
&lt;p&gt;If you want to store more drives, Fractal Design seems to have another nice case
in store:&lt;/p&gt;
&lt;p&gt;&lt;a href="http://www.fractal-design.com/?view=product&amp;amp;category=2&amp;amp;prod=58"&gt;&lt;img alt="mini case" src="/static/images/fractal-mini.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;More like a regular computer case, but small and very sleek. It has room for 6
drives, and you can use the 5.25 inch slots to store two or three additional
drives. It is still very small. It's fairly new, so I found no product reviews.&lt;/p&gt;
&lt;p&gt;If that is not enough, just take a look at the Define R3 case:&lt;/p&gt;
&lt;p&gt;&lt;a href="http://www.fractal-design.com/?view=product&amp;amp;category=2&amp;amp;prod=52"&gt;&lt;img alt="r3" src="/static/images/fractal-r3-small.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Seems very pretty to me, has room for 8 disks by default and can host at least
10 disks if you also use the 5.25 inch slots. This &lt;a href="http://www.bit-tech.net/hardware/cases/2010/10/29/fractal-design-define-r3-review/"&gt;review&lt;/a&gt; is quite
positive about the case.&lt;/p&gt;
&lt;p&gt;A rather old case from Chieftec which I still like and is quite cheap:&lt;/p&gt;
&lt;p&gt;&lt;a href="http://www.chieftec.com/CH01.html"&gt;&lt;img alt="chieftec" src="/static/images/chieftech-midi.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This case is not in the same league as the Fractal cases, but is cheaper and
still does the trick. I used this case to build my first NAS and I don't have
much to complain about. It is no problem to fit in 10 disks. There are two things to
note:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;there are no filters on the case. You have to create something yourself.&lt;/li&gt;
&lt;li&gt;The space between the side panel and the drive connectors is limited.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If you have any suggestions for nice NAS cases, feel free to leave a comment. &lt;/p&gt;</content><category term="Hardware"></category></entry><entry><title>Migration from blogger to blogofile and disqus is complete</title><link href="https://louwrentius.com/migration-from-blogger-to-blogofile-and-disqus-is-complete.html" rel="alternate"></link><published>2011-01-04T00:58:00+01:00</published><updated>2011-01-04T00:58:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2011-01-04:/migration-from-blogger-to-blogofile-and-disqus-is-complete.html</id><summary type="html">&lt;p&gt;As stated in my previous post, I migrated all posts from &lt;a href="https://louwrentius.blogger.com"&gt;blogger.com&lt;/a&gt; to my own host running &lt;a href="http://blogofile.com"&gt;blogofile&lt;/a&gt;.
At the time of my previous post, I was not able to restore all comments made on my old blog. Thanks to some help
from &lt;a href="http://disqus.com"&gt;Disqus&lt;/a&gt; I fixed an error on …&lt;/p&gt;</summary><content type="html">&lt;p&gt;As stated in my previous post, I migrated all posts from &lt;a href="https://louwrentius.blogger.com"&gt;blogger.com&lt;/a&gt; to my own host running &lt;a href="http://blogofile.com"&gt;blogofile&lt;/a&gt;.
At the time of my previous post, I was not able to restore all comments made on my old blog. Thanks to some help
from &lt;a href="http://disqus.com"&gt;Disqus&lt;/a&gt; I fixed an error on my end, and all old comments from my previous blog appeared on the appropriate pages.&lt;/p&gt;
&lt;p&gt;This completes the transition from Blogger to blogofile and Disqus (for comments). &lt;/p&gt;
&lt;p&gt;I'm now working on a controller for blogofile that recursively generates a static HTML photo blog, relearning Python along the way.
Once it works properly, the code will be made available so other people can at least have a good laugh. &lt;/p&gt;
&lt;p&gt;This blog is now running on an old Intel based Mac Mini, also acting as a &lt;a href="http://code.google.com/p/lfs"&gt;firewall&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img alt="mini" src="/static/images/mini-server.jpg" /&gt; &lt;/p&gt;</content><category term="Uncategorized"></category></entry><entry><title>Migrated this blog from blogger.com to blogofile</title><link href="https://louwrentius.com/migrated-this-blog-from-bloggercom-to-blogofile.html" rel="alternate"></link><published>2010-12-30T19:21:00+01:00</published><updated>2010-12-30T19:21:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-12-30:/migrated-this-blog-from-bloggercom-to-blogofile.html</id><summary type="html">&lt;p&gt;Until now, I was hosting my blog on Google's blogger. I switched to using &lt;a href="http://www.blogofile.com" title="Blogofile"&gt;Blogofile&lt;/a&gt;. I wanted to have more control over my content and the layout. &lt;/p&gt;
&lt;p&gt;I had many issues with the blogger blog post editor, resulting in ugly posts with too much white space, strange fonts and fonts …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Until now, I was hosting my blog on Google's blogger. I switched to using &lt;a href="http://www.blogofile.com" title="Blogofile"&gt;Blogofile&lt;/a&gt;. I wanted to have more control over my content and the layout. &lt;/p&gt;
&lt;p&gt;I had many issues with the blogger blog post editor, resulting in ugly posts with too much white space, strange fonts and font sizes. Also, showing programming code examples with syntax highlighting is not possible with Blogger. As a nerd, I'm a control freak, and I did not have enough control over my blog on blogger. &lt;/p&gt;
&lt;p&gt;So I first started thinking about hosting my own wordpress site, but soon I concluded that this is not what I want. Wordpress has similar limitations to blogger, and I no longer want to depend on special blog software without direct access to my content. So after a tip from a friend of mine I started to look at blogofile. Blogofile generates a static HTML website, which is a real benefit if you are hosting your own site. The most important benefits are: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Security&lt;/li&gt;
&lt;li&gt;Performance&lt;/li&gt;
&lt;li&gt;Ownership&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Security&lt;/h3&gt;
&lt;p&gt;Most websites get hacked because they are basically live applications accessible from the Internet. Applications can contain bugs. If it contains security related bugs, you are toast. Software like &lt;a href="http://secunia.com/advisories/search/?search=wordpress"&gt;wordpress&lt;/a&gt; has an awful track record regarding security. Basically, if you are running a wordpress site, you must be in a state of constant fear. You also constantly have to upgrade to the latest version within a short time frame or you will be hacked. I think that wordpress is a security hazard.&lt;/p&gt;
&lt;p&gt;Blogofile generates plain text static HTML web pages. There is no dynamic content and thus no program running, except for the web server itself. There is no serious security risk.&lt;/p&gt;
&lt;h3&gt;Performance&lt;/h3&gt;
&lt;p&gt;Plain text web sites are fast. It is just data. When a visitor requests some web pages, there is no application running and putting load on the CPU; there are just plain data transfers. I bet you could host this blog without many performance problems on a recent smartphone like an Android or iOS device. &lt;/p&gt;
&lt;h3&gt;Ownership&lt;/h3&gt;
&lt;p&gt;All content is stored in a text-only format that can always easily be transformed into some other format. I use &lt;a href="http://daringfireball.net/projects/markdown/"&gt;markdown&lt;/a&gt; syntax to write new blog posts. But plain HTML can also be used. Comments can be posted through &lt;a href="http://disqus.com"&gt;Disqus&lt;/a&gt;. It seems, however, that I will lose all existing comments from my original blog. New comments are showing up without a hitch, but existing imported comments do not show up. I do want to preserve them, but how? I'm not sure yet. I'm thinking about incorporating them into my posts as an appendix.&lt;/p&gt;
&lt;h3&gt;The whole process of migrating to blogofile&lt;/h3&gt;
&lt;p&gt;All steps taken so far as to migrate away from blogger:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create a wordpress installation&lt;/li&gt;
&lt;li&gt;Export blogger blog.&lt;/li&gt;
&lt;li&gt;Import blogger blog in wordpress installation&lt;/li&gt;
&lt;li&gt;'Import' the converted wordpress blog to blogofile 'format' using wordpress2blogofile.py&lt;/li&gt;
&lt;li&gt;Hack, script and fight to change all URLs so they are correct for the new site.&lt;/li&gt;
&lt;li&gt;Hack, script and kill to get the old comments to show up, to no avail.&lt;/li&gt;
&lt;li&gt;Learn markdown&lt;/li&gt;
&lt;li&gt;Learn blogofile &lt;/li&gt;
&lt;li&gt;Create + steal a new website design and have to learn css and html all over again. &lt;/li&gt;
&lt;li&gt;Convert all posts from html to markdown.&lt;/li&gt;
&lt;li&gt;Fix all converted posts where necessary&lt;/li&gt;
&lt;li&gt;Put all external hosted content into a local folder and fix all links. &lt;/li&gt;
&lt;/ol&gt;</content><category term="Uncategorized"></category></entry><entry><title>LFS - Linux Firewall Script released</title><link href="https://louwrentius.com/lfs-linux-firewall-script-released.html" rel="alternate"></link><published>2010-12-28T01:15:00+01:00</published><updated>2010-12-28T01:15:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-12-28:/lfs-linux-firewall-script-released.html</id><summary type="html">&lt;p&gt;I started a small new Google project for a new script I wrote called &lt;a href="http://code.google.com/p/lfs/"&gt;LFS&lt;/a&gt;.
It stands for Linux Firewall Script.&lt;/p&gt;
&lt;p&gt;I run a small Linux box as an internet router that doubles as a firewall. The
firewall is configured using iptables. In my opinion, iptables is not the
easiest …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I started a small new Google project for a new script I wrote called &lt;a href="http://code.google.com/p/lfs/"&gt;LFS&lt;/a&gt;.
It stands for Linux Firewall Script.&lt;/p&gt;
&lt;p&gt;I run a small Linux box as an internet router that doubles as a firewall. The
firewall is configured using iptables. In my opinion, iptables is not the
easiest tool to use and may have a steep learning curve for people new to it.&lt;/p&gt;
&lt;p&gt;The goal of LFS is to provide an easier interface to iptables. It also adds
some features that by default are not or difficult to setup using only
iptables. The most important additional feature is the use of objects and
groups. Object groups can be used to make a single rule affect multiple hosts,
networks or services.&lt;/p&gt;
&lt;p&gt;LFS uses a single configuration file which contains the firewall rules. Rules
look like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;nat 192.168.1.0/24 88.32.44.144 eth0
port_forward 88.32.44.144 192.168.1.10 80/tcp 8080/tcp
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Or by using variables:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;nat &amp;quot;$INTERNAL_NETWORK&amp;quot; &amp;quot;$EXTERNAL_IP&amp;quot; &amp;quot;$NAT_INTERFACE&amp;quot;
port_forward &amp;quot;$EXTERNAL_IP&amp;quot;  &amp;quot;$INTERNAL_HTTP_SERVER&amp;quot; &amp;quot;80/tcp&amp;quot; &amp;quot;8080/tcp&amp;quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
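To give an idea of what these rules save you from typing, here is a rough sketch of the plain iptables commands that the nat and port_forward examples above correspond to. This is illustrative only; the actual rules LFS generates may use different chains or include additional state-tracking rules.

```shell
# Illustrative translation of the two LFS rules above into raw iptables.
# The commands are built as strings and printed, since applying them
# requires root.
NET="192.168.1.0/24"; EXT_IP="88.32.44.144"; IF="eth0"
HTTP_SERVER="192.168.1.10"

# nat: source-NAT traffic from the internal network leaving via eth0
SNAT_RULE="iptables -t nat -A POSTROUTING -s $NET -o $IF -j SNAT --to-source $EXT_IP"

# port_forward: redirect port 80 on the external IP to port 8080 internally
DNAT_RULE="iptables -t nat -A PREROUTING -d $EXT_IP -p tcp --dport 80 -j DNAT --to-destination $HTTP_SERVER:8080"

echo "$SNAT_RULE"
echo "$DNAT_RULE"
```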

&lt;p&gt;Please visit the &lt;a href="http://code.google.com/p/lfs/"&gt;project page&lt;/a&gt; for some examples.&lt;/p&gt;</content><category term="Security"></category><category term="Uncategorized"></category></entry><entry><title>Why filtering DHCP traffic is not always possible with iptables</title><link href="https://louwrentius.com/why-filtering-dhcp-traffic-is-not-always-possible-with-iptables.html" rel="alternate"></link><published>2010-12-27T14:57:00+01:00</published><updated>2010-12-27T14:57:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-12-27:/why-filtering-dhcp-traffic-is-not-always-possible-with-iptables.html</id><summary type="html">&lt;p&gt;When configuring my  new firewall using iptables, I noticed something very
peculiar. Even if all input, forward and output traffic was dropped, DHCP
traffic to and from my DHCP server was &lt;strong&gt;not &lt;/strong&gt;blocked even if there were no
rules permitting this traffic.&lt;/p&gt;
&lt;p&gt;I even flushed all rules, put a drop …&lt;/p&gt;</summary><content type="html">&lt;p&gt;When configuring my  new firewall using iptables, I noticed something very
peculiar. Even if all input, forward and output traffic was dropped, DHCP
traffic to and from my DHCP server was &lt;strong&gt;not &lt;/strong&gt;blocked even if there were no
rules permitting this traffic.&lt;/p&gt;
&lt;p&gt;I even flushed all rules, put a drop all rule on all chains and only allowed
SSH to the box. It did not matter. The DHCP server received the DHCP requests
and happily answered back.&lt;/p&gt;
&lt;p&gt;How on earth is this possible? In my opinion, a firewall should block all
traffic no matter what.&lt;/p&gt;
&lt;p&gt;But at least I found out the cause of this peculiar behaviour.  The ISC DHCP
daemon does not use the TCP/UDP/IP stack of the kernel. &lt;a href="http://www.mail-archive.com/netfilter@lists.samba.org/msg03206.html"&gt;It uses RAW
sockets&lt;/a&gt;. Raw sockets bypass the whole netfilter mechanism and thus the
firewall.&lt;/p&gt;
&lt;p&gt;So remember: applications using RAW sockets cannot be firewalled by default.
Applications need root privileges to use RAW sockets, so they thankfully
cannot be used by arbitrary unprivileged users on a system. Nevertheless,
be aware of this issue.&lt;/p&gt;
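On Linux you can at least see which packet sockets are open: every AF_PACKET socket shows up in /proc/net/packet. A small sketch (assumes Linux with /proc mounted; mapping the listed inode numbers back to a process requires root):

```shell
# List open AF_PACKET (raw link-layer) sockets. Traffic on these sockets
# is delivered outside the usual netfilter INPUT/OUTPUT processing.
if [ -r /proc/net/packet ]; then
    RAW_SOCKETS=$(cat /proc/net/packet)
else
    RAW_SOCKETS="(/proc/net/packet not available: not Linux?)"
fi
echo "$RAW_SOCKETS"
```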
&lt;p&gt;Please understand that if a serious security vulnerability is found in the ISC
DHCP daemon, you cannot protect your daemon with a local firewall on your
system. Patching or disabling would then be the only solution.&lt;/p&gt;</content><category term="Security"></category><category term="linux"></category><category term="iptables"></category><category term="raw"></category><category term="sockets"></category><category term="firewall"></category><category term="dhcpd"></category><category term="dhcp"></category></entry><entry><title>Belkin Gigabit USB 2.0 adapter works perfectly with Linux</title><link href="https://louwrentius.com/belkin-gigabit-usb-20-adapter-works-perfectly-with-linux.html" rel="alternate"></link><published>2010-12-08T20:53:00+01:00</published><updated>2010-12-08T20:53:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-12-08:/belkin-gigabit-usb-20-adapter-works-perfectly-with-linux.html</id><summary type="html">&lt;p&gt;My ISP upgraded my internet connection speed to a whopping 120 Mbit. I am
using a &lt;a href="https://louwrentius.com/blog/2009/11/how-to-run-debian-linux-on-an-intel-based-mac-mini/"&gt;mac mini&lt;/a&gt; as my internet router. As you may be aware, the mini has
only one network interface, so I added a second interface using a USB to
ethernet adapter. This adapter was limited …&lt;/p&gt;</summary><content type="html">&lt;p&gt;My ISP upgraded my internet connection speed to a whopping 120 Mbit. I am
using a &lt;a href="https://louwrentius.com/blog/2009/11/how-to-run-debian-linux-on-an-intel-based-mac-mini/"&gt;mac mini&lt;/a&gt; as my internet router. As you may be aware, the mini has
only one network interface, so I added a second interface using a USB to
ethernet adapter. This adapter was limited to 100 Mbit, so to make full use of
the 120 Mbit connection, I had to upgrade this adapter.&lt;/p&gt;
&lt;p&gt;I took the gamble and bought the &lt;a href="http://www.belkin.com/IWCatProductPage.process?Product_Id=258897"&gt;Belkin Gigabit USB 2.0&lt;/a&gt; adapter. I could
not figure out if it would work with Linux, but on the box it officially
supports Mac OS X, which is always a good sign.&lt;/p&gt;
&lt;p&gt;&lt;a href="/static/images/belkin-gigabit.jpg"&gt;&lt;img alt="" src="/static/images/belkin-gigabit.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This adapter is recognized by Debian Linux without a hitch:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;Mini&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;~&lt;/span&gt;&lt;span class="c1"&gt;# ethtool -i eth0&lt;/span&gt;
&lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;asix&lt;/span&gt;
&lt;span class="n"&gt;version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;14&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;Jun&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;2006&lt;/span&gt;
&lt;span class="n"&gt;firmware&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ASIX&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;AX88178&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;USB&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;2.0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Ethernet&lt;/span&gt;
&lt;span class="n"&gt;bus&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;usb&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;0000&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;lsusb output:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;Bus&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;005&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Device&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;004&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ID&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;050&lt;/span&gt;&lt;span class="nl"&gt;d&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;5055&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Belkin&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Components&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;F5D5055&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Gigabit&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Network&lt;/span&gt;
&lt;span class="n"&gt;Adapter&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="n"&gt;AX88xxx&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I did not test the actual performance of this adapter, but it at least goes
beyond 100 Mbit (it handles 120 Mbit at least). I expect it to top out at
roughly 300 Mbit, constrained by the practical maximum throughput of USB 2.0.&lt;/p&gt;
and turned off. However, little work seems to be getting done this way. So we
want to turn systems on and connect them to a network, or even (God forbid)
the internet.&lt;/p&gt;
&lt;p&gt;The thing is …&lt;/p&gt;</summary><content type="html">&lt;p&gt;The most secure server system is a system that is not connected to a network
and turned off. However, little work seems to be getting done this way. So we
want to turn systems on and connect them to a network, or even (God forbid)
the internet.&lt;/p&gt;
&lt;p&gt;The thing is this. A system connected to a network without any running
services is almost as secure as a system that is turned off. They also
share a common property: they are useless. A system starts to get useful when
you run services on it and make those services accessible over the network
to clients.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Services&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Security on a technical level is all about securing those services. Every
service that you enable is an opportunity for an attacker to compromise your
system. If a service is not installed or running on your system, it cannot be
used to compromise your server.&lt;/p&gt;
&lt;p&gt;If a service is enabled and accessible through the network, it is logically of
vital importance that you know:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;what does this service do?&lt;/li&gt;
&lt;li&gt;what can it be used for?&lt;/li&gt;
&lt;li&gt;what steps need to be taken to properly secure it?&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If you know what a service does, you can understand the potential security
risks. If you understand the product you are using, you can secure it
properly. Security is all about understanding. If you don't understand what
you are running, then it can't be secure.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Firewalls&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;So if you only run required services, why do you need to run a firewall? You
don't. Yes that's right. Think about it. A firewall protects services that
should not be accessible and allows access to services that should be
accessible. If you just disable those services that should not be accessible
from the outside, why use a local firewall? You don't want the Internet to
access the SNMP-service on your system, you say? But then why not bind it only
to the management interface instead of the production interface? You have a
separate management network, right?&lt;/p&gt;
&lt;p&gt;Of course, firewalls are a good thing. They are an ADDITIONAL line of defense.
They mostly protect you against yourself. If you make a mistake and, by
accident, enable some vulnerable service on a system, a properly configured
firewall will prevent access to it and save your behind. That is the purpose
of a firewall.&lt;/p&gt;
&lt;p&gt;People often wrongly see the firewall as the first line of defense. If you do,
you are wrong. The first line of defense is to secure your services.&lt;/p&gt;
&lt;p&gt;The whole point is that there are holes in your firewalls. Those holes allow
access to services. Those services may be necessary, like a web server, but
they are holes nevertheless. You are exposing services to the Internet.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Web applications (or web-based back doors?)&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We are now mostly running web-based applications on the services that we make
accessible for the network or the internet. Those applications run on
application servers. Yes, these application servers, like Apache Tomcat or IIS
with ASP.NET, need to be secured, but nowadays they are almost secure by default.&lt;/p&gt;
&lt;p&gt;All security depends on the level of security of the application you are
running on your application server. Is your application written well, with
security principles in mind? Does it protect against SQL-injection or cross-
site scripting? Are sessions predictable? Can a user access data of another
user?&lt;/p&gt;
&lt;p&gt;Firewalls don't protect against vulnerabilities in your web applications. You
need to get it right at the core level: the application itself. Just as you
harden a system, you must run &lt;a href="http://www.owasp.org/index.php/Category:OWASP_Guide_Project"&gt;secure code&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;And if you run third-party code, watch out for security news.
There have been many worms exploiting vulnerable commodity software such as
phpBB, Wordpress or similar products.&lt;/p&gt;
&lt;p&gt;This is the really hard part. Deploying secure software and keeping it secure
during the development life cycle.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Patches&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The last fundamental principle of keeping systems secure is keeping up with
security patches. Many security vulnerabilities are often only exploitable
under specific conditions and may not be that important. But the most
important thing is to be aware of vulnerabilities and available patches. Then
you can decide for yourself how to act.&lt;/p&gt;
&lt;p&gt;There is always a risk that a security patch breaks functionality. But that's
not a real problem, because you have this test environment so you can check
first, right?&lt;/p&gt;
&lt;p&gt;Keep up with security patches and non-security patches. If you first have to
install 100+ patches to be able to install the latest high-risk security
patch, something might break. So then it's choosing between staying vulnerable
or going off-line until you have fixed everything.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;So what are the most basic ingredients for secure systems?&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;only run required services&lt;/li&gt;
&lt;li&gt;harden those required services&lt;/li&gt;
&lt;li&gt;deploy a firewall as an additional defense layer&lt;/li&gt;
&lt;li&gt;deploy secure application code&lt;/li&gt;
&lt;li&gt;keep up-to-date with security patches&lt;/li&gt;
&lt;li&gt;Audit and review your systems and application code on a regular basis.&lt;/li&gt;
&lt;/ol&gt;
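The first two steps above start with knowing what is actually listening on the network. A quick sketch of such an audit on Linux (assumes ss from iproute2, or the older netstat, is installed):

```shell
# Show listening TCP/UDP sockets and their bind addresses.
# Entries bound to 127.0.0.1 are local-only; entries bound to 0.0.0.0
# or [::] are reachable from the network and deserve scrutiny.
if command -v ss 1>/dev/null 2>/dev/null; then
    LISTENING=$(ss -tuln)
elif command -v netstat 1>/dev/null 2>/dev/null; then
    LISTENING=$(netstat -tuln)
else
    LISTENING="(neither ss nor netstat found)"
fi
echo "$LISTENING"
```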
&lt;p&gt;With this small number of steps, you will be able to protect against a lot of
security threats. I don't say this is everything that is necessary. But it is
a good foundation to build on. You still have to identify risks that may apply
to your particular situation. These risks may require you to take (additional)
measures not discussed here.&lt;/p&gt;</content><category term="Security"></category><category term="Minimum"></category><category term="security"></category><category term="requirements"></category><category term="system"></category><category term="hardening"></category></entry><entry><title>'Linux: using disk labels to counter storage device name changes'</title><link href="https://louwrentius.com/linux-using-disk-labels-to-counter-storage-device-name-changes.html" rel="alternate"></link><published>2010-11-22T07:08:00+01:00</published><updated>2010-11-22T07:08:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-11-22:/linux-using-disk-labels-to-counter-storage-device-name-changes.html</id><summary type="html">&lt;p&gt;My router decided to change the device name for some USB storage devices.  So
/dev/sdc was swapped for /dev/sdd and vice versa. The result was some file
system corruption on /dev/sdc, because it was used on a remote system through
iSCSI, using a different file system from …&lt;/p&gt;</summary><content type="html">&lt;p&gt;My router decided to change the device name for some USB storage devices.  So
/dev/sdc was swapped for /dev/sdd and vice versa. The result was some file
system corruption on /dev/sdc, because it was used on a remote system through
iSCSI, using a different file system from /dev/sdd.&lt;/p&gt;
&lt;p&gt;With regular internal disks, attached through PATA, SATA or SAS, the chances
are very small that such an event will occur, but it is possible, especially if
you start adding or removing disks. With USB devices the risk is substantially
bigger.&lt;/p&gt;
&lt;p&gt;To prevent your system from mixing up drives because their device names
change, use file system labels. All the information that follows has been
&lt;a href="http://wikis.sun.com/display/BigAdmin/Using+Disk+Labels+on+Linux+File+Systems"&gt;stolen from this location&lt;/a&gt;. Since this blog is also my personal notepad,
the relevant bits are reproduced here.&lt;/p&gt;
&lt;p&gt;There are three steps involved, the third being optional:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;add a label to the file system&lt;/li&gt;
&lt;li&gt;add the label to /etc/fstab&lt;/li&gt;
&lt;li&gt;update grub boot manager (optional)&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Add a label to the file system&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Setting a label when the file system is created:&lt;/em&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mkfs.ext3 -L ROOT /dev/sda1
mkfs.xfs -L BIGRAID /dev/sde
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Set label for existing file system&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;EXT3:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;e2label /dev/sda1 PRIMARY_ROOT
e2label /dev/sda1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;XFS:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;xfs_admin -L DATA1 /dev/sdf
xfs_admin /dev/sdf
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Set label for swap partition&lt;/em&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mkswap -L SWAP0 /dev/sdb5
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Add the label to /etc/fstab&lt;/h3&gt;
&lt;p&gt;Example of contents of fstab:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;LABEL=ROOT          /         ext3    defaults        1 1
LABEL=BOOT          /boot     ext3    defaults        1 2
LABEL=SWAP          swap      swap    defaults        0 0
LABEL=HOME          /home     ext3    nosuid,auto     1 2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
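&lt;p&gt;Before rebooting with a label-based fstab, it can be worth checking which
device each label currently resolves to. A small sketch, assuming udev (which
maintains the /dev/disk/by-label symlinks):&lt;/p&gt;

```shell
#!/bin/sh
# List every labeled file system and the device its label points to.
# The directory is absent when no file system carries a label.
labels_dir=/dev/disk/by-label
found=0
if [ -d "$labels_dir" ]; then
  for link in "$labels_dir"/*; do
    [ -e "$link" ] || continue
    found=$((found + 1))
    printf '%s -> %s\n' "${link##*/}" "$(readlink -f "$link")"
  done
fi
echo "$found labeled file system(s)"
```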

&lt;h3&gt;Update the grub boot manager&lt;/h3&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;title server
root (hd0,0)
  kernel (hd0,0)/vmlinuz ro root=LABEL=SERVER_ROOT0 rhgb quiet
  initrd (hd0,0)/initrd.img
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>'Secure programming: how to implement user account management'</title><link href="https://louwrentius.com/secure-programming-how-to-implement-user-account-management.html" rel="alternate"></link><published>2010-11-18T21:40:00+01:00</published><updated>2010-11-18T21:40:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-11-18:/secure-programming-how-to-implement-user-account-management.html</id><summary type="html">&lt;p&gt;Most web applications work like this:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The application uses a single database account to perform all actions. Users
are just some records in a table. Account privileges and roles are part of
this table, or separate tables.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This implies that all security must be designed and built by the application …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Most web applications work like this:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The application uses a single database account to perform all actions. Users
are just some records in a table. Account privileges and roles are part of
this table, or separate tables.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This implies that all security must be designed and built by the application
developer. I think this is entirely wrong. There is a big risk:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;In such applications, SQL-injection will allow full control of the entire
database.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This is something that is often overlooked. And the solution is simple. The
application should not use a general account with full privileges. The
application should use the database account of the user accessing the
application. All actions performed by this user are thus limited by the
privileges of this database account. The impact of SQL-injection would be
significantly reduced.&lt;/p&gt;
&lt;p&gt;The public part of a website is still using an application account, but the
privileges of this account can be significantly reduced. To obtain elevated
privileges, a user must first authenticate against the application and thus
the database.&lt;/p&gt;
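&lt;p&gt;As a rough sketch of the idea, using PostgreSQL syntax (the role and table
names are invented for illustration): the public site gets a near-empty role,
and each authenticated user gets a personal database role carrying only the
privileges that user needs.&lt;/p&gt;

```shell
#!/bin/sh
# Hypothetical grants; on a real system these statements would be fed to psql.
sql='CREATE ROLE web_public LOGIN;
GRANT SELECT ON articles TO web_public;

CREATE ROLE alice LOGIN;
GRANT SELECT, INSERT, UPDATE ON orders TO alice;'
printf '%s\n' "$sql"
```

&lt;p&gt;With this split, SQL injection through the public site can at worst read the
articles table, and a compromised session of one user is bounded by that
user's own grants.&lt;/p&gt;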
&lt;p&gt;There is another benefit: it is not required to store
username/password combinations of privileged accounts on the application
server. The configuration file will only contain the credentials of the
unprivileged account. An attacker compromising the application server
won't gain access to the database with elevated
privileges.&lt;/p&gt;
&lt;p&gt;I understand that this solution requires a bit more work to set up at the
start, but once implemented, it reduces complexity and improves security
considerably.&lt;/p&gt;
&lt;p&gt;Of course, the security of your data is only as good as the hardening of your
database server. But that's another story.&lt;/p&gt;
performance and high availability. However, this is not entirely true. Using a
hardware RAID controller might even endanger your precious data.&lt;/p&gt;
&lt;p&gt;For enterprise environments, where performance is critical, it is more
important that the array keeps on delivering data at …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Hardware RAID controllers are considered 'the best' solution for high
performance and high availability. However, this is not entirely true. Using a
hardware RAID controller might even endanger your precious data.&lt;/p&gt;
&lt;p&gt;For enterprise environments, where performance is critical, it is more
important that the array keeps on delivering data at high speed.
Professional RAID controllers use &lt;a href="http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery"&gt;TLER&lt;/a&gt; with TLER-enabled disks to limit
the time spent on recovering bad sectors. If a disk encounters a bad sector,
there is no time to pause and try to fix it. The disk is simply dropped from
the RAID array after just a couple of seconds. At that moment, the array still
performs relatively well, but there is no redundancy. If another disk fails
(another bad sector?) the array is lost, with all its data.&lt;/p&gt;
&lt;p&gt;More and more people are building NAS boxes for centralized storage of data
for private home use. Since disks are cheap, it is possible to create lots of
storage capacity for little money. Backing up terabytes of data is, however,
not cheap: you would basically have to build a second NAS box, which is very
expensive and for most people not worth the effort.&lt;/p&gt;
&lt;p&gt;People seem to spend lots of money on expensive enterprise-level hardware RAID
cards, not understanding that the whole TLER mechanism increases the risk to
their data. In enterprise environments, budgets are relatively big, and data
is always backed up, so losing a RAID array is an acceptable risk precisely
because those backups exist. But consumers often don't have the money to
spend on backing up terabytes of data. They just go for RAID 5 or
RAID 6 and hope for the best.&lt;/p&gt;
&lt;p&gt;For consumers, if the RAID array goes, all data is lost.&lt;/p&gt;
&lt;p&gt;Consumers should therefore choose a RAID solution that will do its best to
recover from hardware failure. Performance is not so much an issue;
reliability is. Consumers do want disks to spend 'ages' on recovering bad
sectors if this means that the RAID array will survive.&lt;/p&gt;
&lt;p&gt;Linux software RAID and ZFS do not use TLER and are therefore a safer choice
for your data than regular hardware RAID controllers. You may still use such
controllers (but please test them properly), but only to provide SATA ports
for individual disks; the RAID part should be handled by Linux.&lt;/p&gt;
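&lt;p&gt;For reference, assembling such an array in software takes a single mdadm
command. The device names below are examples and the command is destructive,
so this sketch only echoes it instead of running it:&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: a four-disk software RAID 6 array built from whole disks.
cmd='mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde'
echo "$cmd"
```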
&lt;p&gt;So in my opinion, hardware RAID controllers are more expensive, require more
expensive (enterprise) disks and are less safe for your data.&lt;/p&gt;</content><category term="Storage"></category><category term="Uncategorized"></category></entry><entry><title>Linux network interface bonding / trunking or how to get beyond 1 Gb/s</title><link href="https://louwrentius.com/linux-network-interface-bonding-trunking-or-how-to-get-beyond-1-gbs.html" rel="alternate"></link><published>2010-11-11T22:40:00+01:00</published><updated>2010-11-11T22:40:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-11-11:/linux-network-interface-bonding-trunking-or-how-to-get-beyond-1-gbs.html</id><summary type="html">&lt;p&gt;This article discusses Linux bonding and how to achieve 2 Gb/s transfer speeds
with a single TCP/UDP connection.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;UPDATE July 2011&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Due to hardware problems, I was not able to achieve transfer speeds&lt;/em&gt; 
&lt;em&gt;beyond 150 MB/s. By replacing a network card with one from another&lt;/em&gt;
&lt;em&gt;vendor (HP …&lt;/em&gt;&lt;/p&gt;</summary><content type="html">&lt;p&gt;This article discusses Linux bonding and how to achieve 2 Gb/s transfer speeds
with a single TCP/UDP connection.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;UPDATE July 2011&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Due to hardware problems, I was not able to achieve transfer speeds&lt;/em&gt; 
&lt;em&gt;beyond 150 MB/s. By replacing a network card with one from another&lt;/em&gt;
&lt;em&gt;vendor (HP Broadcom) I managed to obtain 220 MB/s which is about 110 MB/s&lt;/em&gt; 
&lt;em&gt;per network interface.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;So I am now able to copy a single file with the 'cp' command over an NFS share&lt;/em&gt; 
&lt;em&gt;with 220 MB/s.&lt;/em&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;strong&gt;Update January 2014&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;See &lt;a href="https://louwrentius.com/achieving-340-mbs-network-file-transfers-using-linux-bonding.html"&gt;this new article&lt;/a&gt; on how I got 340 MB/s transfer speeds.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;I had problems with an Intel e1000e PCIe card in an Intel DH67BL. I tested
different e1000e PCIe models, but to no avail: RX was 110 MB/s, yet TX was
never faster than 80 MB/s. An HP Broadcom card gave no problems and also
provided 110 MB/s for RX traffic. lspci output:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;Broadcom Corporation NetXtreme BCM5721 Gigabit Ethernet PCI Express&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The on-board e1000e NIC performed normally; the PCIe e1000e cards, with
various chipsets, never got above 80 MB/s.&lt;/p&gt;
&lt;p&gt;A gigabit network card provides about 110 MB/s (megabytes per second) of
bandwidth. If you want to go faster, the options are:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;buy InfiniBand equipment: I have no experience with it; it may be a smart
thing to do, but it seems expensive.&lt;/li&gt;
&lt;li&gt;buy 10 Gigabit network cards: very expensive compared to the other
solutions.&lt;/li&gt;
&lt;li&gt;bond multiple network interfaces together to get 2 Gb/s, or more with
additional cards.&lt;/li&gt;
&lt;/ol&gt;
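&lt;p&gt;The 110 MB/s figure follows from simple arithmetic; a quick sketch (the
~12% protocol overhead is a rough rule of thumb):&lt;/p&gt;

```shell
#!/bin/sh
# 1 Gb/s is 1000 Mb/s; divide by 8 bits per byte for the raw byte rate,
# then subtract roughly 12% for Ethernet/IP/TCP framing overhead.
raw_mb_s=$(( 1000 / 8 ))
usable_mb_s=$(( raw_mb_s * 88 / 100 ))
echo "raw: $raw_mb_s MB/s, usable: ~$usable_mb_s MB/s"
```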
&lt;p&gt;This article discusses the third option. Teaming or bonding two network
cards into a single virtual card with twice the bandwidth provides the extra
performance you were looking for. But the 64,000
dollar question is:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;How to obtain 2 Gb/s with a &lt;strong&gt;single &lt;/strong&gt;transfer? Thus with a single TCP
connection?&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Answer:&lt;/em&gt; The trick is to use Linux network bonding.&lt;/p&gt;
&lt;p&gt;Most bonding modes only provide an aggregate capacity of 2 Gb/s, by
balancing different network connections over different interfaces. Individual
transfers will never go beyond 1 Gb/s, but it is possible to have two 1
Gb/s transfers going on at the same time.&lt;/p&gt;
&lt;p&gt;That is not what I was looking for. I want to copy a file using NFS and
get more than 120 MB/s.&lt;/p&gt;
&lt;p&gt;The only bonding mode that allows a single TCP or UDP connection to go beyond
1 Gb/s is mode 0: round-robin. This bonding mode is somewhat like RAID 0 over
two or more network interfaces.&lt;/p&gt;
&lt;p&gt;However, you cannot use round-robin with a standard switch. You need an
advanced switch that is capable of creating "trunks": a trunk is a virtual
network interface that consists of individual ports grouped
together. So you cannot use round-robin mode with an average unmanaged
switch. The only other option is to use direct cables between two hosts,
although I didn't test this.&lt;/p&gt;
&lt;h3&gt;Results&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;UPDATE July 2011&lt;/em&gt; : 
Read the update at the top.&lt;/p&gt;
&lt;p&gt;Now the results: I was able to obtain a transfer speed (read) of 155 MB/s with
a file copy using NFS. Normal transfers capped at 109 MB/s. To be honest, I
had hoped to achieve far more, like 180 MB/s. The actual transfer
speeds you will obtain depend on the hardware used; I recommend
Intel or Broadcom hardware for this purpose.&lt;/p&gt;
&lt;p&gt;Also, I was not able to obtain write speeds surpassing 1 Gb/s. Since I
wrote the data to a fast RAID array, the underlying storage subsystem
was not the bottleneck.&lt;/p&gt;
&lt;p&gt;So the bottom line is that it is possible to get more than 1 Gb/s, but the
performance gain is not as high as you might want.&lt;/p&gt;
&lt;h3&gt;Configuration&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Client:&lt;/em&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;modprobe bonding mode=0
ifconfig bond0 up
ifenslave bond0 eth0 eth1
ifconfig bond0 10.0.0.1 netmask 255.255.255.0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Server:&lt;/em&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;modprobe bonding mode=4 lacp_rate=0 xmit_hash_policy=layer3+4
ifconfig bond0 up
ifenslave bond0 eth0 eth1
ifconfig bond0 10.0.0.2 netmask 255.255.255.0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
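&lt;p&gt;The commands above do not survive a reboot. As a sketch of a persistent
setup on Debian-style systems (the exact stanza depends on your distribution
and ifenslave version, so treat this as a starting point, not a definitive
configuration):&lt;/p&gt;

```shell
#!/bin/sh
# Hypothetical /etc/network/interfaces stanza for the round-robin client.
conf='auto bond0
iface bond0 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode balance-rr
    bond-miimon 100'
printf '%s\n' "$conf"
```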

&lt;p&gt;Bonding status:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nx"&gt;cat&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;proc&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;net&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;bonding&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;bond0&lt;/span&gt;

&lt;span class="nx"&gt;Ethernet&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Channel&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Bonding&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Driver&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;v3&lt;/span&gt;&lt;span class="m m-Double"&gt;.3.0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;June&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2008&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nx"&gt;Bonding&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Mode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;IEEE&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="m m-Double"&gt;802.3&lt;/span&gt;&lt;span class="nx"&gt;ad&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Dynamic&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;link&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;aggregation&lt;/span&gt;
&lt;span class="nx"&gt;Transmit&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Hash&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Policy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;layer3&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nx"&gt;MII&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;up&lt;/span&gt;
&lt;span class="nx"&gt;MII&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Polling&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Interval&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ms&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;
&lt;span class="nx"&gt;Up&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Delay&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ms&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;
&lt;span class="nx"&gt;Down&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Delay&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ms&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;

&lt;span class="m m-Double"&gt;802.3&lt;/span&gt;&lt;span class="nx"&gt;ad&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;info&lt;/span&gt;
&lt;span class="nx"&gt;LACP&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;rate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;slow&lt;/span&gt;
&lt;span class="nx"&gt;Active&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Aggregator&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Info&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="nx"&gt;Aggregator&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;ID&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;
&lt;span class="nx"&gt;Number&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;of&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;ports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;
&lt;span class="nx"&gt;Actor&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;9&lt;/span&gt;
&lt;span class="nx"&gt;Partner&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;26&lt;/span&gt;
&lt;span class="nx"&gt;Partner&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Mac&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Address&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;de&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;ad&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;be&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;ef&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;90&lt;/span&gt;

&lt;span class="nx"&gt;Slave&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Interface&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;eth0&lt;/span&gt;
&lt;span class="nx"&gt;MII&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;up&lt;/span&gt;
&lt;span class="nx"&gt;Link&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Failure&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Count&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;
&lt;span class="nx"&gt;Permanent&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;HW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kd"&gt;addr&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;co&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;ff&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;ee&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;aa&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;
&lt;span class="nx"&gt;Aggregator&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;ID&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;

&lt;span class="nx"&gt;Slave&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Interface&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;eth1&lt;/span&gt;
&lt;span class="nx"&gt;MII&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;up&lt;/span&gt;
&lt;span class="nx"&gt;Link&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Failure&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Count&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;
&lt;span class="nx"&gt;Permanent&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;HW&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kd"&gt;addr&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;00&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;de&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;ca&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;fe&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;b1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="nx"&gt;d&lt;/span&gt;
&lt;span class="nx"&gt;Aggregator&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;ID&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</content><category term="Networking"></category><category term="linux"></category><category term="bonding"></category><category term="trunking"></category><category term="gigabit"></category><category term="2gb"></category></entry><entry><title>The iPhone, iPad and iOS are powering a revolution</title><link href="https://louwrentius.com/the-iphone-ipad-and-ios-are-powering-a-revolution.html" rel="alternate"></link><published>2010-11-06T01:12:00+01:00</published><updated>2010-11-06T01:12:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-11-06:/the-iphone-ipad-and-ios-are-powering-a-revolution.html</id><summary type="html">&lt;p&gt;Most people just don't understand computers. Are these people dumb? Some may
be dumb, but the people who make them are maybe even dumber. Because they
can't seem to figure out how to create a computer that the majority of people
understand.&lt;/p&gt;
&lt;p&gt;When the original macintosh arrived at the stage …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Most people just don't understand computers. Are these people dumb? Some may
be dumb, but the people who make them are maybe even dumber. Because they
can't seem to figure out how to create a computer that the majority of people
understand.&lt;/p&gt;
&lt;p&gt;When the original Macintosh arrived on the stage back in the eighties,
computers became a bit more human-friendly, but it was limited by the
constraints of the then-available hardware. It did away with the text-based
interface and introduced the graphical interface, using the desktop metaphor
to create this graphical environment. But this metaphor has had its day.&lt;/p&gt;
&lt;p&gt;Many people don't understand the desktop metaphor, since they don't have a
desktop and have never used one. Also, it is a metaphor: its purpose is to
translate the computer environment into something humans understand. But what
if they don't understand the metaphor? For example, many people just don't
'get' the Windows Explorer or the Mac OS X Finder. The desktop metaphor does
not seem to fit how people think.&lt;/p&gt;
&lt;p&gt;Every time you see a person enter a URL like www.youtube.com in the Google
search field, you realize that we still have a long way to go.&lt;/p&gt;
&lt;p&gt;Most people did not seem to realize back then that the important release
wasn't the iPhone itself, but iOS. The iPhone was the first
smartphone (a word most people are not familiar with) that did away with a
stylus or hardware keyboard. It uses what is closest to us: our fingers. A
totally new user interface, one that is very natural and close to us, became
available.&lt;/p&gt;
&lt;p&gt;Using touch as input required a total redesign of the entire user interface.
All other interfaces were designed around hardware keyboards and mouse devices.
Fingers are big and obstruct the view, but they allow for more direct
interaction with a device. And now all new smartphones sport a touch
interface.&lt;/p&gt;
&lt;p&gt;Rumors of an Apple tablet had existed for a long time, but it was very clear
when the iPhone was released that if Apple were to release a tablet, it would
run this new iOS operating system.&lt;/p&gt;
&lt;p&gt;When the iPad was released, it became an instant hit. As of today, there is no
device on the market that can truly be called a competitor. But why is this
so? The groundwork was done by the iPhone. Most people with an iPhone
will notice that, aside from some performance issues in the past, the device
just always worked. It was instantly available to send an email, look
something up on Wikipedia or find the nearest Starbucks. An iPhone just always
works. No booting. Very reliable. And an interface that makes you happy.&lt;/p&gt;
&lt;p&gt;Why does iOS make people happy? Because it provides a user interface that is
human. People understand it instinctively. Any person of any age or background
will be able to use an iOS device within minutes. The interface doesn't make
you feel dumb because you just don't understand how it works. It
not only works, it is easy to use, and you are not afraid of breaking anything.&lt;/p&gt;
&lt;p&gt;The iPhone and the iPad are teaching a lot of people not to fear computers.&lt;/p&gt;
&lt;p&gt;iOS does away with the old desktop metaphor, but so do Symbian and
similar interfaces. It is the combination of touch and the well-thought-out
interface that sets it apart from other mobile operating systems. Even when
the iOS platform did not yet have native applications, people still bought it,
and not only because Apple had released a shiny new toy.&lt;/p&gt;
&lt;p&gt;However, the App Store on iOS has created a very special and important
environment. People can finally install and remove applications in an
extremely simple way. They don't need to be scared that some program will
crash their computer while installing it, using it, or removing it. The
whole iOS ecosystem creates an environment in which people no longer need
help from other people. They are finally in control. They don't need to
be afraid of their computer.&lt;/p&gt;
&lt;p&gt;This trend will affect the old-school user interfaces such as Mac OS X. How it
will turn out is anybody's guess. But there is at least a small trend to
'eradicate' the Finder as much as possible. iPhoto stores your photos. iTunes
stores your music. If you want to include a photo or song within an
application, you pick the photo or song in question from a miniature iPhoto or
iTunes interface. There is no Finder anymore. The Finder is disappearing from
the workflow. And why not? If programs are written well, why bother with it?
The Finder should be abstracted away, as is the case on iOS, where you don't
have a Finder at all.&lt;/p&gt;
&lt;p&gt;Another thing is multitasking, you know, that stuff we like to do but can't.
We can only do one thing at a time. What we really want is fast task
switching, not multitasking. Sure, some programs must keep running in the
background, such as a chat program, but that is not the point. Most
people just go crazy if you show them how multitasking with different
windows works. Again, iOS shows how 'multitasking' should be implemented. It
is implemented as fast application switching, allowing applications to
register services that must continue to run, while the application itself
freezes when the user switches to another application. People tend to use one
application at a time, and especially on mobile devices, every single bit of
screen real estate counts, so applications always run full screen. This
full-screen notion will also be incorporated in the next Mac OS X release,
Lion. People switch, but do one thing at a time.&lt;/p&gt;
&lt;p&gt;Computer nerds tend to feel superior to people who don't have much skill with
computers. This feeling of superiority is totally misplaced. They should be
really humble, because up until the advent of iOS, nobody was able to create a
human-friendly computer interface. It is not a lack of understanding on the
side of computer users; it is a lack of understanding on the part of
computer nerds of how normal humans think and act.&lt;/p&gt;
&lt;p&gt;Simple, human friendly computer interfaces will liberate humanity from those
pesky computer nerds. And that will cause a bit less suffering in the world I
hope.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="ipad"></category><category term="iphone"></category><category term="user-friendly"></category><category term="human"></category><category term="ios"></category><category term="apple"></category></entry><entry><title>Apple is killing off the optical drive just like the floppy disk</title><link href="https://louwrentius.com/apple-is-killing-off-the-optical-drive-just-like-the-floppy-disk.html" rel="alternate"></link><published>2010-10-23T11:05:00+02:00</published><updated>2010-10-23T11:05:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-10-23:/apple-is-killing-off-the-optical-drive-just-like-the-floppy-disk.html</id><summary type="html">&lt;p&gt;With the release of the new MacBook Air we are one step closer to killing off
the cd-rom and the dvd. As with the previous MacBook Air, this device has no
optical drive. And that is a good thing. People do not need an optical drive.
You have the network …&lt;/p&gt;</summary><content type="html">&lt;p&gt;With the release of the new MacBook Air we are one step closer to killing off
the cd-rom and the dvd. As with the previous MacBook Air, this device has no
optical drive. And that is a good thing. People do not need an optical drive.
You have the network and you have USB disks. They are faster, more reliable
and have more capacity.&lt;/p&gt;
&lt;p&gt;I expect that in the coming years this trend will continue with the other
laptops. Just as Apple killed the floppy disk, it is killing the optical
drive, one step at a time. I'd rather have a smaller and lighter laptop or
more disk or battery capacity than an optical drive. So I hope this is a
trend that will continue and that other manufacturers will follow.&lt;/p&gt;
&lt;p&gt;To benchmark your Linux software RAID array as set up with mdadm, please use my
&lt;a href="/files/raid-tester.sh"&gt;new benchmark script&lt;/a&gt;. I used this script to create &lt;a href="https://louwrentius.com/blog/2010/05/linux-raid-level-and-chunk-size-the-benchmarks/"&gt;these results&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You may need to configure some values within the header of this file to make
it fit your environment.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;DEVICES&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;/dev …&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</summary><content type="html">&lt;p&gt;Just a small post.&lt;/p&gt;
&lt;p&gt;To benchmark your Linux software RAID array as set up with mdadm, please use my
&lt;a href="/files/raid-tester.sh"&gt;new benchmark script&lt;/a&gt;. I used this script to create &lt;a href="https://louwrentius.com/blog/2010/05/linux-raid-level-and-chunk-size-the-benchmarks/"&gt;these results&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You may need to configure some values within the header of this file to make
it fit your environment.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;DEVICES&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;/dev/sd[a-f]&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;NO_OF_DEVICES&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ARRAY&lt;/span&gt;&lt;span class="o"&gt;=/&lt;/span&gt;&lt;span class="n"&gt;dev&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;md5&lt;/span&gt;
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;CHUNKS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;4 8 16 32 64 128 256 512 1024&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;MOUNT&lt;/span&gt;&lt;span class="o"&gt;=/&lt;/span&gt;&lt;span class="n"&gt;storage&lt;/span&gt;
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;LOG&lt;/span&gt;&lt;span class="o"&gt;=/&lt;/span&gt;&lt;span class="k"&gt;var&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nb"&gt;log&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;raid&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;test&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;log&lt;/span&gt;
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;LOGDEBUG&lt;/span&gt;&lt;span class="o"&gt;=/&lt;/span&gt;&lt;span class="k"&gt;var&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nb"&gt;log&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;raid&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;test&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;debug&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;log&lt;/span&gt;
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;LEVEL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;0 5 6 10&amp;quot;&lt;/span&gt;
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;TESTFILE&lt;/span&gt;&lt;span class="o"&gt;=$&lt;/span&gt;&lt;span class="n"&gt;MOUNT&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;test&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bin&lt;/span&gt;
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;TESTFILESIZE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10000&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;IN&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;MB&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;thus&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;this&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;is&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;GB&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;TRIES&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;how&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;many&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;times&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;benchmark&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;By default, the script will format the array using XFS; feel free to format
it with another filesystem such as EXT4 or EXT3 or whatever you want to test.&lt;/p&gt;</content><category term="Storage"></category><category term="Uncategorized"></category></entry><entry><title>'Zabbix Security: client-server communication seems insecure'</title><link href="https://louwrentius.com/zabbix-security-client-server-communication-seems-insecure.html" rel="alternate"></link><published>2010-09-27T22:35:00+02:00</published><updated>2010-09-27T22:35:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-09-27:/zabbix-security-client-server-communication-seems-insecure.html</id><summary type="html">&lt;p&gt;Zabbix is a populair tool for monitoring servers, services and network
equipment. For monitoring hosts, Zabbix provides an agent that can be
installed on the hosts that must be monitored.&lt;/p&gt;
&lt;p&gt;Based on the supplied documentation and &lt;a href="http://www.zabbix.com/forum/archive/index.php/t-625.html"&gt;some remarks&lt;/a&gt; on the internets,
the 'security' of Zabbix agents seems to rely on …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Zabbix is a popular tool for monitoring servers, services and network
equipment. For monitoring hosts, Zabbix provides an agent that can be
installed on the hosts that must be monitored.&lt;/p&gt;
&lt;p&gt;Based on the supplied documentation and &lt;a href="http://www.zabbix.com/forum/archive/index.php/t-625.html"&gt;some remarks&lt;/a&gt; on the internets,
the 'security' of Zabbix agents seems to rely on an IP-filter. It only accepts
traffic from a specific IP-address. However, the protocol that is used between
the Zabbix server and agents is unencrypted and does not seem to employ any
additional authentication.&lt;/p&gt;
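&lt;p&gt;For illustration, the IP-filter amounts to little more than a line like this in zabbix_agentd.conf (a sketch; the address is a placeholder):&lt;/p&gt;

```shell
# zabbix_agentd.conf (illustrative excerpt): the agent only answers
# hosts whose source address matches this list; there is no further
# authentication and the protocol itself is plain text.
Server=192.0.2.10

# If remote commands are enabled, anyone who passes the IP check can
# execute arbitrary commands on the host.
EnableRemoteCommands=1
```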
&lt;p&gt;With a man-in-the-middle attack, pretending to be the Zabbix server, you would
be able to compromise all servers running the Zabbix agent. If remote commands
are enabled on these hosts, the damage that could be done may be something you
don't want to think about. Or maybe you do. For such an attack to be possible,
an attacker does need access to a system within the same network (VLAN) as the
server, but nonetheless, it is just not secure.&lt;/p&gt;
&lt;p&gt;Personally I don't think that Zabbix is suitable for high-security
environments, due to the lack of encryption of sensitive data and the weak
authentication mechanism.&lt;/p&gt;
&lt;p&gt;Zabbix should employ at least SSL as a means for encrypted transport and use a
password or shared secret for authentication. Even better would be the use of
client-side certificates, as implemented by the systems management tool
Puppet.&lt;/p&gt;
&lt;p&gt;[update]&lt;/p&gt;
&lt;p&gt;Please note that Nagios agents also seem to work this way, but I have no
experience with Nagios so I can't say for sure.&lt;/p&gt;
&lt;p&gt;And Nagios is widely deployed in the enterprise...&lt;/p&gt;</content><category term="Uncategorized"></category><category term="zabbix"></category><category term="security"></category></entry><entry><title>Debian Lenny and Dell R410 network card not supported</title><link href="https://louwrentius.com/debian-lenny-and-dell-r410-network-card-not-supported.html" rel="alternate"></link><published>2010-08-20T19:53:00+02:00</published><updated>2010-08-20T19:53:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-08-20:/debian-lenny-and-dell-r410-network-card-not-supported.html</id><summary type="html">&lt;p&gt;For those who are running Debian Lenny and want to order the new Dell R410
server, beware!&lt;/p&gt;
&lt;p&gt;There is no safe solution to get Debian Lenny working with the on-board
Broadcom network cards. A fairly recent kernel is required. Basically, you
will have to install back-ported kernels, more recent modules …&lt;/p&gt;</summary><content type="html">&lt;p&gt;For those who are running Debian Lenny and want to order the new Dell R410
server, beware!&lt;/p&gt;
&lt;p&gt;There is no safe solution to get Debian Lenny working with the on-board
Broadcom network cards. A fairly recent kernel is required. Basically, you
will have to install back-ported kernels, more recent modules and thus must
violate the reasons why you were running Debian Lenny in the first place.&lt;/p&gt;
&lt;p&gt;There is one solution, although it may not be an option: run VMware on the
hardware and run Debian Lenny in a virtual machine. I think that in most
cases this will be sufficient, and it has all the benefits of
virtualisation plus the stability of Debian Lenny.&lt;/p&gt;
&lt;p&gt;It is unfortunate that Broadcom or Dell do not support Debian.&lt;/p&gt;
&lt;p&gt;If you do have an easy and quite safe solution to get the Broadcom network
cards working with Debian Lenny, please drop a note.&lt;/p&gt;
&lt;p&gt;Beware though: the H200 and H700 RAID controllers are also &lt;a href="https://louwrentius.blogspot.com/2010/07/debian-lenny-and-new-dell-s300-h200-and.html"&gt;not supported by
Debian Lenny&lt;/a&gt;.&lt;/p&gt;</content><category term="Hardware"></category><category term="Uncategorized"></category></entry><entry><title>Solaris is an obsolete platform</title><link href="https://louwrentius.com/solaris-is-an-obsolete-platform.html" rel="alternate"></link><published>2010-08-14T22:55:00+02:00</published><updated>2010-08-14T22:55:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-08-14:/solaris-is-an-obsolete-platform.html</id><summary type="html">&lt;p&gt;Assuming that the rumor is true and OpenSolaris will be slain by Oracle, we
must conclude that the Solaris operating system is obsolete. Solaris can be
considered legacy. Sun was a hardware shop and to sell their hardware, they
needed a great operating system.&lt;/p&gt;
&lt;p&gt;Sun had a great operating system …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Assuming that the rumor is true and OpenSolaris will be slain by Oracle, we
must conclude that the Solaris operating system is obsolete. Solaris can be
considered legacy. Sun was a hardware shop and to sell their hardware, they
needed a great operating system.&lt;/p&gt;
&lt;p&gt;Sun had a great operating system. And the Solaris platform was popular for a
long time. And I think that was for the right reasons, at that time. If you or
your company is still running on a Solaris platform, it may be time to rethink
this strategy.&lt;/p&gt;
&lt;p&gt;I do not understand why Oracle bought Sun. Oracle sells software. Sun sold
hardware. Sun had some great products, like Java, so I can see some reasons.
In the past, Solaris and Oracle had a tight relationship. But the only thing
Oracle may be doing right now is pursuing a vendor lock-in strategy, where you
are totally dependent on both hardware and software from Oracle.&lt;/p&gt;
&lt;p&gt;But &lt;a href="http://www.crn.com/news/channel-programs/226500112/sun-oracle-vars-differ-over-future-of-sun-hardware.htm"&gt;people don't seem to buy this&lt;/a&gt;, literally. People do want to continue
to run Solaris, because that's the platform they've invested in. But they don't
want to pay for those exotic Solaris SPARC systems, often way more expensive
than commodity x86 hardware.&lt;/p&gt;
&lt;p&gt;Oracle invested billions in Sun assets. How are they going to make money off
it? Squeeze out existing Sun Solaris customers who are depending on their
platform?&lt;/p&gt;
&lt;p&gt;If you are setting up a new business or if you think you can pull this off:
stay away from this legacy platform. Migrate away from Solaris. Use an open
platform that does not lock you in.&lt;/p&gt;
&lt;p&gt;And &lt;a href="http://utcc.utoronto.ca/~cks/space/blog/solaris/OracleSunFuture"&gt;this is also a very interesting&lt;/a&gt; read.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>The future of ZFS now that OpenSolaris is dead</title><link href="https://louwrentius.com/the-future-of-zfs-now-that-opensolaris-is-dead.html" rel="alternate"></link><published>2010-08-14T10:51:00+02:00</published><updated>2010-08-14T10:51:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-08-14:/the-future-of-zfs-now-that-opensolaris-is-dead.html</id><summary type="html">&lt;p&gt;With the probable loss of OpenSolaris, there may be another, maybe more
devastating loss.&lt;/p&gt;
&lt;p&gt;The very popular and very advanced Zettabyte File System (ZFS).&lt;/p&gt;
&lt;p&gt;The only open source platform that actively supports ZFS is FreeBSD. And they
just 'copied' the code from OpenSolaris. Are they able to maintain and further …&lt;/p&gt;</summary><content type="html">&lt;p&gt;With the probable loss of OpenSolaris, there may be another, maybe more
devastating loss.&lt;/p&gt;
&lt;p&gt;The very popular and very advanced Zettabyte File System (ZFS).&lt;/p&gt;
&lt;p&gt;The only open source platform that actively supports ZFS is FreeBSD. And they
just 'copied' the code from OpenSolaris. Are they able to maintain and further
develop ZFS on their own? I don't think the community can handle a task like
that. Development on ZFS will be severely hampered and will not continue at
the pace it did.&lt;/p&gt;
&lt;p&gt;It is also clear that Oracle doesn't give a shit about open source or open
operating systems. That is ok with me, everyone is entitled to their own
opinion. But keep this in mind when you decide to use any Oracle product
whatsoever.&lt;/p&gt;
&lt;p&gt;It's not that I'm suggesting that you should not buy Oracle stuff. I have no
grudge against Oracle in any way, it is just an objective observation, just be
aware of this issue.&lt;/p&gt;
&lt;p&gt;From the perspective of Oracle: what is their benefit regarding OpenSolaris? I
understand their decision, but it's sad nevertheless. And I'm really scared for
the future of ZFS.&lt;/p&gt;
10 drives increases the risk of a drive failure tenfold. So often RAID 5 is
used to keep your data up and running if a single disk fails.&lt;/p&gt;
&lt;p&gt;But disks are so cheap and storage arrays …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Storage is cheap. Lots of storage with 10+ hard drives is still cheap. Running
10 drives increases the risk of a drive failure tenfold. So often RAID 5 is
used to keep your data up and running if a single disk fails.&lt;/p&gt;
&lt;p&gt;But disks are so cheap and storage arrays are getting so vast that RAID 5 does
not cut it anymore. With larger arrays, the risk of a second drive failure
while your array is in a degraded state (a drive already failed and the
array is rebuilding or waiting for a replacement) is serious.&lt;/p&gt;
&lt;p&gt;RAID 6 uses two parity disks, so you lose two disks of capacity, but the
rewards in terms of availability are very large, especially for larger
arrays.&lt;/p&gt;
&lt;p&gt;I found &lt;a href="http://blog.kj.stillabower.net/?p=93"&gt;a blog posting&lt;/a&gt; that shows the results of a big simulation of
the reliability of various RAID setups. One picture from this post is important
and is shown below. It shows the risk of the entire RAID array
failing within 3 years.&lt;/p&gt;
&lt;p&gt;&lt;a href="http://blog.kj.stillabower.net/wp-content/uploads/2009/08/3fail.png"&gt;&lt;img alt="" src="http://blog.kj.stillabower.net/wp-content/uploads/2009/08/3fail.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;From this picture, the difference between RAID 5 and RAID 6 regarding
reliability (availability) is astounding. There is a strong relation with the
size of the array (number of drives) and the increased risk that more than one
drive fails, thus destroying the array. Notice the strong contrast with RAID
6.&lt;/p&gt;
&lt;p&gt;Even with a small RAID 5 array of 6 disks, there is already a 1 : 10 chance
that the array will fail within 3 years. Even with 60+ drives, a RAID 6 array
never comes close to a risk like that.&lt;/p&gt;
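&lt;p&gt;To get a feel for these numbers, here is a toy binomial model (my own sketch, not the simulation from the linked post): it assumes an invented 3% annual failure rate, independent failures and no drive replacement during the period, so it overstates the absolute risk, but the RAID 5 versus RAID 6 gap shows up clearly:&lt;/p&gt;

```python
from math import comb

def p_at_least(n, k, p):
    # Probability that at least k of n independent drives fail,
    # each failing with probability p over the period (toy model:
    # failed drives are not replaced during the period).
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

afr = 0.03                      # assumed 3% annual failure rate per drive
p3y = 1 - (1 - afr) ** 3        # per-drive failure probability over 3 years

raid5 = p_at_least(6, 2, p3y)   # 6-disk RAID 5 is lost on the 2nd failure
raid6 = p_at_least(6, 3, p3y)   # 6-disk RAID 6 is lost on the 3rd failure
```

&lt;p&gt;Under these assumptions the 6-disk RAID 5 figure lands in the same 1-in-10 ballpark as the graph above, while the RAID 6 figure is many times smaller.&lt;/p&gt;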
&lt;p&gt;Creating larger RAID 5 arrays beyond 8 to 10 disks means there is a 1 : 8 to
1 : 5 chance that you will have to recreate the array and restore the contents
from backup (which you have of course).&lt;/p&gt;
&lt;p&gt;I have a 20 disk RAID 6 running at home. Even with 20 disks, the risk that the
entire array fails due to failure of more than 2 disks is very small. It is
more likely that I lose my data due to failure of a RAID controller,
motherboard or PSU than dying drives.&lt;/p&gt;
&lt;p&gt;There are more graphs that are worth viewing, so &lt;a href="http://blog.kj.stillabower.net/?p=93"&gt;take a look at this excellent
blog post&lt;/a&gt;.&lt;/p&gt;
scheduled for March 2010, but Oracle did not release anything. It is dead
silent around OpenSolaris.&lt;/p&gt;
&lt;p&gt;On 12 July 2010, the OpenSolaris Governing Board &lt;a href="http://www.h-online.com/open/news/item/OpenSolaris-governing-board-threatens-dissolution-1037134.html"&gt;sent out an ultimatum&lt;/a&gt;
to Oracle: "please communicate with us or we …&lt;/p&gt;</summary><content type="html">&lt;p&gt;The last release of OpenSolaris dates back to July 2009. The next release was
scheduled for March 2010, but Oracle did not release anything. It is dead
silent around OpenSolaris.&lt;/p&gt;
&lt;p&gt;On 12 July 2010, the OpenSolaris Governing Board &lt;a href="http://www.h-online.com/open/news/item/OpenSolaris-governing-board-threatens-dissolution-1037134.html"&gt;sent out an ultimatum&lt;/a&gt;
to Oracle: "please communicate with us or we will resign and hand over the
little power we have back to Oracle".&lt;/p&gt;
&lt;p&gt;These are all signs that OpenSolaris has no real future. And think about it:
what is Oracle's interest in OpenSolaris? Maintaining an operating system and
developing new features is extremely expensive. What is their benefit? I can't
see any. Sun was a hardware manufacturer, and promoting OpenSolaris made sure
that people got experience with the operating system that powered their
hardware.&lt;/p&gt;
&lt;p&gt;Unless your environment already consists of Solaris-based systems, as of 2010,
there is no reason to use Solaris any more. OpenSolaris was, until recently, the
only operating system that supported ZFS natively, so it was the only choice if
you really wanted to use ZFS. Now FreeBSD has native support for ZFS too.&lt;/p&gt;
&lt;p&gt;For the people who are fanatical about ZFS this is great, because it would be
a shame if ZFS could only be used in combination with a dead operating system
that supports less hardware than Mac OS X.&lt;/p&gt;
&lt;p&gt;But I see no future in OpenSolaris and you should think twice about running
it, either as a production platform or at home. Use a platform that has a
large user-base and a big community.&lt;/p&gt;
&lt;p&gt;I think that if you are running OpenSolaris, you must start thinking about
migrating to FreeBSD or Linux, however painful that may be.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>Compiling Handbrake CLI on Debian Lenny</title><link href="https://louwrentius.com/compiling-handbrake-cli-on-debian-lenny.html" rel="alternate"></link><published>2010-08-03T19:31:00+02:00</published><updated>2010-08-03T19:31:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-08-03:/compiling-handbrake-cli-on-debian-lenny.html</id><summary type="html">&lt;p&gt;In this post I will show you how to compile Handbrake for Debian Lenny. Please
note that although the Handbrake GUI version does compile on Lenny, it crashes
with a segmentation fault like this:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Gtk: gtk_widget_size_allocate(): attempt to allocate widget with width -5
and height 17&lt;/p&gt;
&lt;p&gt;(ghb:1053): GStreamer-CRITICAL **: gst_element_set_state …&lt;/p&gt;&lt;/blockquote&gt;</summary><content type="html">&lt;p&gt;In this post I will show you how to compile Handbrake for Debian Lenny. Please
note that although the Handbrake GUI version does compile on Lenny, it crashes
with a segmentation fault like this:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Gtk: gtk_widget_size_allocate(): attempt to allocate widget with width -5
and height 17&lt;/p&gt;
&lt;p&gt;(ghb:1053): GStreamer-CRITICAL **: gst_element_set_state: assertion
`GST_IS_ELEMENT (element)'   failed&lt;/p&gt;
&lt;p&gt;(ghb:1053): GStreamer-CRITICAL **: gst_element_set_state: assertion
`GST_IS_ELEMENT (element)'  failed&lt;/p&gt;
&lt;p&gt;(ghb:1053): GLib-GObject-CRITICAL **: g_object_get: assertion `G_IS_OBJECT
(object)' failed&lt;/p&gt;
&lt;p&gt;Segmentation fault&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;So this post only describes how to compile the command-line version of
Handbrake: HandBrakeCLI.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Issue the following apt-get command to install all required libraries and
software:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;apt-get install subversion yasm build-essential autoconf libtool zlib1g-dev libbz2-dev intltool libglib2.0-dev libpthread-stubs0-dev
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Download the source code at
http://sourceforge.net/projects/handbrake/files/&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Extract the source code and cd into the new handbrake directory.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Compile handbrake like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;./configure --disable-gtk --launch --force --launch-jobs=2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The --launch-jobs parameter determines how many parallel threads are used for
compiling Handbrake, based on the number of CPU cores of your system. If you
have a quad-core CPU you should set this value to 4.&lt;/p&gt;
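&lt;p&gt;Rather than hard-coding that value, you can derive it from the machine itself (a sketch; Linux-specific, counting cores via /proc/cpuinfo):&lt;/p&gt;

```shell
# One compile job per CPU core; /proc/cpuinfo lists one 'processor'
# entry per logical core on Linux.
JOBS=$(grep -c ^processor /proc/cpuinfo)
echo "./configure --disable-gtk --launch --force --launch-jobs=${JOBS}"
```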
&lt;p&gt;The resulting binary is called HandBrakeCLI and can be found in the ./build
directory. Issue a 'make install' to install this binary onto your system.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>Debian Lenny and Dell S300 H200 and H700 RAID controllers</title><link href="https://louwrentius.com/debian-lenny-and-dell-s300-h200-and-h700-raid-controllers.html" rel="alternate"></link><published>2010-07-10T21:50:00+02:00</published><updated>2010-07-10T21:50:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-07-10:/debian-lenny-and-dell-s300-h200-and-h700-raid-controllers.html</id><summary type="html">&lt;p&gt;update november 2010: it is reported in the comments that the next release
will support these controllers. However, which controllers are supported is
not clear. Also, we may have to wait for quite some time before squeeze will
be released.&lt;/p&gt;
&lt;p&gt;Just a quick note: it seems that the new Dell …&lt;/p&gt;</summary><content type="html">&lt;p&gt;update november 2010: it is reported in the comments that the next release
will support these controllers. However, which controllers are supported is
not clear. Also, we may have to wait for quite some time before squeeze will
be released.&lt;/p&gt;
&lt;p&gt;Just a quick note: it seems that the new Dell RAID controllers are not
supported by the current stable version of Debian: Lenny. The S300, H200 and
H700 controllers as supplied with the R310, R410 and 'higher' systems, are
thus not a great choice if you want to keep things 'stock'. You may have to
run backported kernels and installer images and I couldn't figure out if these
controllers actually work with these latest images.&lt;/p&gt;
&lt;p&gt;The Perc 6/i controller is supported by Debian. It seems that it can only be
supplied with the R410 and higher systems. If you have any additional
information, like actual experience with these controllers, please report
this.&lt;/p&gt;
&lt;p&gt;I installed VMware ESX 4.1 and installed Debian Lenny on top of that without
problems.&lt;/p&gt;
&lt;p&gt;Please note that the network cards of the R410 are still &lt;a href="https://louwrentius.blogspot.com/2010/08/debian-lenny-and-dell-r410-network-card.html"&gt;not supported&lt;/a&gt;
and will cause a headache!&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>Lustre and the risk of Serious Data Loss</title><link href="https://louwrentius.com/lustre-and-the-risk-of-serious-data-loss.html" rel="alternate"></link><published>2010-07-03T22:53:00+02:00</published><updated>2010-07-03T22:53:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-07-03:/lustre-and-the-risk-of-serious-data-loss.html</id><summary type="html">&lt;p&gt;Personally I have a weakness for big-ass storage. Say 'petabyte' and I'm
interested. So I was thinking about how you would setup a large, scalable
storage infrastructure. How should such a thing work?&lt;/p&gt;
&lt;p&gt;Very simple: you should be able just to add hosts with some bad-ass huge RAID
arrays attached …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Personally I have a weakness for big-ass storage. Say 'petabyte' and I'm
interested. So I was thinking about how you would setup a large, scalable
storage infrastructure. How should such a thing work?&lt;/p&gt;
&lt;p&gt;Very simple: you should just be able to add hosts with some bad-ass huge RAID
arrays attached to them. Maybe even not that huge, say 8 TB RAID 6 arrays or
maybe bigger. You use these systems as building blocks to create a single and
very large storage space. And then there is one additional requirement: as the
number of these building blocks increases, you must be able to lose some and
not lose data or availability. You should be able to lose one or two of those
storage building blocks and still continue operations without losing data
and/or availability. Like RAID 5 or 6, but over server systems instead of
hard drives.&lt;/p&gt;
&lt;p&gt;The hard part is in connecting all this separate storage to one virtual
environment. A solution to this problem is Lustre.&lt;/p&gt;
&lt;p&gt;&lt;a href="http://wiki.lustre.org/index.php/Main_Page"&gt;Lustre&lt;/a&gt; is a network clustering filesystem. What does that mean? You can
use Lustre to create a scalable storage platform. A single filesystem that can
grow to multiple Petabytes. Lustre is deployed within production environments
at large-scale sites involving some of the fastest and largest computer
clusters. Lustre is thus something to take seriously.&lt;/p&gt;
&lt;p&gt;Lustre stores all metadata about files on a separate MetaData Server (MDS). All
actual file data is stored on Object Storage Targets (OSTs). These are just
machines with one or more big RAID arrays (or simple disks) attached to them.
The OSTs are not directly accessible by clients, but through an Object Storage
Server (OSS). The data stored within a file can be striped over multiple OSTs
for performance reasons.  A sort of network RAID 0.&lt;/p&gt;
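&lt;p&gt;As a toy illustration of that 'network RAID 0' idea (my own sketch, not Lustre's actual allocation logic): with an assumed 1 MiB stripe unit over four OSTs, consecutive chunks of a file map round-robin onto the OSTs:&lt;/p&gt;

```python
STRIPE_SIZE = 1024 * 1024   # assumed 1 MiB stripe unit
OST_COUNT = 4               # file striped over four OSTs

def ost_for_offset(offset):
    # Round-robin placement: consecutive stripe units go to
    # consecutive OSTs, like RAID 0 over the network.
    return (offset // STRIPE_SIZE) % OST_COUNT
```

&lt;p&gt;The first four stripe units land on OSTs 0 through 3; the fifth wraps around to OST 0 again. This is also why losing one OST takes a slice out of every large striped file, not just a subset of whole files.&lt;/p&gt;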
&lt;p&gt;Lustre does not only allow scaling up to Petabytes of storage, but also allows
parallel file transfer performance in excess of 100 GB/s. How you like them
apples? That is just wicked sick.&lt;/p&gt;
&lt;p&gt;Just take a look at this diagram about how Lustre operates:&lt;/p&gt;
&lt;p&gt;&lt;a href="/static/images/lustre-schema.jpg"&gt;&lt;img alt="lustre schema" src="/static/images/lustre-schema-small.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I'm not going into the details about Lustre. I want to discuss a shortcoming
that may pose a serious risk of data loss: &lt;strong&gt;&lt;em&gt;if you lose a single OST with
any attached storage, you will lose all data stored on that OST&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Lustre cannot cope with the loss of a single OST! Even if you buy fully
redundant hardware, with double RAID controllers, ECC memory, double PSU, etc,
even then, if the motherboard gets fried, you will lose data. Surely not
everything, but let's say 'just' 8 TB maybe?&lt;/p&gt;
&lt;p&gt;I guess the risk is assumed to be low, given the wide-scale deployment of
Lustre by people who actually use it and have far more experience
and knowledge of this field than I do. So maybe the risks I'm pointing out
are very small. But I have seen server systems fail exactly this
badly, and I don't think the risk, especially at this scale, is that
small.&lt;/p&gt;
&lt;p&gt;I am certainly &lt;a href="http://www.mail-archive.com/lustre-discuss@clusterfs.com/msg01048.html"&gt;not the first&lt;/a&gt; to &lt;a href="http://comments.gmane.org/gmane.comp.file-systems.lustre.user/9501"&gt;point out this risk&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The way for Lustre to become truly awesome would be to implement some kind of
network-based RAID 6 striping, so that you could lose one or even two OSTs
without any impact on availability, except maybe on performance. But it
doesn't (yet).&lt;/p&gt;
&lt;p&gt;This implies that you have to make your OSTs super-reliable, which is very
expensive and does not scale, or run a very high-capacity backup
solution from which some data could be restored. Either way, you would have
downtime.&lt;/p&gt;
&lt;p&gt;So my question to you is: is there a scalable filesystem like Lustre
that is actually capable of withstanding the failure of a single storage
building block? If you have something to point out, please do.&lt;/p&gt;
&lt;p&gt;BTW: please note that the loss of an OSS can be overcome because another OSS
can take over the OSTs of a failed OSS.&lt;/p&gt;</content><category term="Storage"></category><category term="lustre"></category><category term="ost"></category><category term="failure"></category><category term="data"></category><category term="loss"></category></entry><entry><title>Secure caching DNS server on Linux with DJBDNS</title><link href="https://louwrentius.com/secure-caching-dns-server-on-linux-with-djbdns.html" rel="alternate"></link><published>2010-06-12T16:37:00+02:00</published><updated>2010-06-12T16:37:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-06-12:/secure-caching-dns-server-on-linux-with-djbdns.html</id><summary type="html">&lt;p&gt;The most commonly used DNS server software is ISC BIND, the "Berkeley Internet
Name Daemon". However, this software has a bad security track record and is in
my opinion a pain to configure.&lt;/p&gt;
&lt;p&gt;Mr. D.J. Bernstein developed "djbdns", which comes with a guarantee: if anyone
finds a security vulnerability …&lt;/p&gt;</summary><content type="html">&lt;p&gt;The most commonly used DNS server software is ISC BIND, the "Berkeley Internet
Name Daemon". However, this software has a bad security track record and is in
my opinion a pain to configure.&lt;/p&gt;
&lt;p&gt;Mr. D.J. Bernstein developed "djbdns", which comes with a guarantee: if anyone
finds a security vulnerability within djbdns, you will get &lt;a href="http://cr.yp.to/djbdns/guarantee.html"&gt;one thousand
dollars&lt;/a&gt;. This prize has been claimed &lt;a href="http://article.gmane.org/gmane.network.djbdns/13864"&gt;once&lt;/a&gt;, but djbdns still has a far
better track record than BIND.&lt;/p&gt;
&lt;p&gt;Attaching your own name to your DNS implementation and tying a prize to
finding a vulnerability in it does show some confidence. But
there is more to it. D.J. Bernstein pointed out some important
security risks regarding DNS and made djbdns immune to them, even before
this became &lt;a href="http://en.wikipedia.org/wiki/DNS_cache_poisoning"&gt;a serious world-wide security issue&lt;/a&gt;. However, djbdns is to this
day vulnerable to a variant of this type of attack, and the dbndns package is
as of 2010 &lt;a href="http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=516394"&gt;still not patched&lt;/a&gt;. Although the risk is small, you must be
aware of it. I still think that djbdns poses less of a security risk,
especially regarding buffer overflows, but it is up to you to decide which
risk you want to take.&lt;/p&gt;
&lt;p&gt;The nice thing about djbdns is that it consists of several separate programs
that each perform a dedicated task. This is in stark contrast with BIND, which
is a single program that performs all DNS functionality. One can argue that
djbdns is far simpler and &lt;a href="http://cr.yp.to/djbdns/blurb/easeofuse.html"&gt;easier to use&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;So this post is about setting up djbdns on a Debian Linux host as a forwarding
server, also known as a 'DNS cache'. This is often used to speed up DNS queries:
clients no longer have to connect to the DNS server of your ISP but can use your
local DNS server. This server also caches the results of queries, reducing
the number of DNS queries sent out to your ISP's DNS server or the
Internet.&lt;/p&gt;
&lt;p&gt;Debian Lenny has a patched version of djbdns in its repository; the applied
patch adds IPv6 support. This is how you install it:&lt;/p&gt;
&lt;p&gt;apt-get install dbndns&lt;/p&gt;
&lt;p&gt;The dbndns package is actually a fork of the original djbdns software. The
program we need to configure is called 'dnscache', which does only one thing:
perform recursive DNS queries. That is exactly what we want.&lt;/p&gt;
&lt;p&gt;To keep things secure, the djbdns software must not be run with superuser
(root) privileges, so two accounts must be made: one for the service, and one
for logging.&lt;/p&gt;
&lt;p&gt;groupadd dnscache&lt;/p&gt;
&lt;p&gt;useradd -g dnscache dnscache&lt;/p&gt;
&lt;p&gt;useradd -g dnscache dnscachelog&lt;/p&gt;
&lt;p&gt;The next step is to configure the dnscache software like this:&lt;/p&gt;
&lt;p&gt;dnscache-conf dnscache dnscachelog /etc/dnscache 192.168.0.10&lt;/p&gt;
&lt;p&gt;The first two options tell dnscache which system user accounts to use for this
service. The /etc/dnscache directory stores the dnscache configuration. The
last option specifies which IP address to listen on. If you don't specify an
IP address, localhost (127.0.0.1) is used. If you want to run a forwarding DNS
server for your local network, you need to make dnscache listen on the IP
address on your local network, as in the example.&lt;/p&gt;
&lt;p&gt;Djbdns relies on daemontools and in order to be started by daemontools we need
to perform one last step:&lt;/p&gt;
&lt;p&gt;ln -s /etc/dnscache /etc/service/&lt;/p&gt;
&lt;p&gt;Within a couple of seconds, the dnscache software will be started by the
daemontools software. You can check it out like this:&lt;/p&gt;
&lt;p&gt;svstat /etc/service/dnscache&lt;/p&gt;
&lt;p&gt;A positive result will look like this:&lt;/p&gt;
&lt;p&gt;/etc/service/dnscache: up (pid 6560) 159 seconds&lt;/p&gt;
&lt;p&gt;However, the cache cannot be used just yet. Dnscache is governed by text-based
configuration files in the /etc/dnscache directory. For example, the
./env/IP file contains the IP address, configured previously, on which
the service listens.&lt;/p&gt;
&lt;p&gt;By default, only localhost will be able to access the dnscache. To allow
access to all clients on the local network you have to create a file with the
name of the network in ./root/ip/. If your network is 192.168.0.0/24 (thus 254
hosts), create a file named 192.168.0:&lt;/p&gt;
&lt;p&gt;Mini:/etc/dnscache/root/ip# pwd&lt;/p&gt;
&lt;p&gt;/etc/dnscache/root/ip&lt;/p&gt;
&lt;p&gt;Mini:/etc/dnscache/root/ip# ls&lt;/p&gt;
&lt;p&gt;192.168.0&lt;/p&gt;
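&lt;p&gt;With the network file in place, you can point a client at the cache and verify that it answers. This is a config sketch, not part of the article's setup: it assumes the 192.168.0.10 address used above and the dig utility (from the dnsutils package), which is not part of djbdns itself:&lt;/p&gt;

```shell
# On a client in the 192.168.0.0/24 network, use the cache as resolver:
echo "nameserver 192.168.0.10" > /etc/resolv.conf

# Send a test query straight at the cache; repeat it and the
# "Query time" reported by dig should drop once the answer is cached.
dig @192.168.0.10 www.debian.org | grep "Query time"
```

&lt;p&gt;Any resolver test tool will do; dig is just the most common one.&lt;/p&gt;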
&lt;p&gt;Clients will now be able to use the dnscache. You are running a simple
forwarding DNS server, and it probably took you under ten minutes to configure.
Although djbdns is &lt;a href="http://security-tracker.debian.org/tracker/CVE-2008-4392"&gt;not very well maintained&lt;/a&gt; in Debian Lenny, there is
currently no really good alternative to BIND. PowerDNS is &lt;a href="http://lwn.net/Articles/368833/"&gt;not very
secure&lt;/a&gt; (buffer overflows), while djbdns / dbndns has in more than 10 years
never been affected by this type of vulnerability.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="linux"></category><category term="djbdns"></category><category term="caching"></category><category term="forwarding"></category><category term="dns"></category><category term="server"></category><category term="bernstein"></category></entry><entry><title>'Linux RAID level and chunk size: the benchmarks'</title><link href="https://louwrentius.com/linux-raid-level-and-chunk-size-the-benchmarks.html" rel="alternate"></link><published>2010-05-23T19:11:00+02:00</published><updated>2010-05-23T19:11:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-05-23:/linux-raid-level-and-chunk-size-the-benchmarks.html</id><summary type="html">&lt;p&gt;Introduction&lt;/p&gt;
&lt;p&gt;When configuring a Linux RAID array, the chunk size needs to get chosen. But
what is the chunk size?&lt;/p&gt;
&lt;p&gt;When you write data to a RAID array that implements striping (level 0, 5, 6,
10 and so on), the chunk of data sent to the array is broken down …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Introduction&lt;/p&gt;
&lt;p&gt;When configuring a Linux RAID array, a chunk size must be chosen. But
what is the chunk size?&lt;/p&gt;
&lt;p&gt;When you write data to a RAID array that implements striping (levels 0, 5, 6,
10 and so on), the block of data sent to the array is broken down into
pieces, each piece written to a single drive in the array. This is how striping
improves performance: the data is written to the drives in parallel.&lt;/p&gt;
&lt;p&gt;The chunk size determines how large such a piece will be for a single drive.
For example: with a chunk size of 64 KB, a 256 KB file will use four
chunks. Assuming you have set up a four-drive RAID 0 array, the four chunks
are each written to a separate drive, exactly what we want.&lt;/p&gt;
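&lt;p&gt;The mapping of chunks to drives is simple round-robin arithmetic; this quick shell sketch reproduces the example above (64 KB chunks, a 256 KB file, a four-drive RAID 0 array):&lt;/p&gt;

```shell
CHUNK_KB=64    # chunk size
DRIVES=4       # drives in the RAID 0 array
FILE_KB=256    # file size

chunks=$((FILE_KB / CHUNK_KB))   # 256 / 64 = 4 chunks
for i in $(seq 0 $((chunks - 1))); do
    drive=$((i % DRIVES))        # round-robin over the drives
    echo "chunk $i -> drive $drive"
done
# chunk 0 -> drive 0 ... chunk 3 -> drive 3: all four drives kept busy
```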
&lt;p&gt;This also makes clear that choosing the wrong chunk size may hurt performance.
If the chunk size were 256 KB, the whole file would be written to a
single drive, so the RAID striping wouldn't provide any benefit, unless
many such files were written to the array, in which case the different
drives would handle different files.&lt;/p&gt;
&lt;p&gt;In this article, I will provide benchmarks that focus on sequential read
and write performance. They are of little relevance if the array must sustain
a random I/O workload and needs high random IOPS.&lt;/p&gt;
&lt;h3&gt;Test setup&lt;/h3&gt;
&lt;p&gt;All benchmarks are performed with a consumer grade system consisting of these
parts:&lt;/p&gt;
&lt;p&gt;Processor: AMD Athlon X2 BE-2300, running at 1.9 GHz.&lt;/p&gt;
&lt;p&gt;RAM: 2 GB&lt;/p&gt;
&lt;p&gt;Disks: SAMSUNG HD501LJ (500GB, 7200 RPM)&lt;/p&gt;
&lt;p&gt;SATA controller: Highpoint RocketRaid 2320 (non-raid mode)&lt;/p&gt;
&lt;p&gt;Tests are performed with an array of 4 and an array of 6 drives.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;All drives are attached to the Highpoint controller. The controller is not
used for RAID, only to supply sufficient SATA ports. Linux software RAID with
mdadm is used.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A single drive provides a read speed of 85 MB/s and a write speed of 88
MB/s&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The RAID levels 0, 5, 6 and 10 are tested.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Chunk sizes starting from 4K to 1024K are tested.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;XFS is used as the test file system.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Data is read from/written to a 10 GB file.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The theoretical maximum throughput of a 4-drive array is 340 MB/s. A 6-drive
array should be able to sustain 510 MB/s.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;About the data:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;All tests have been performed by a Bash shell script that accumulated all
data, there was no human intervention when acquiring data.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;All values are based on the average of five runs. After each run, the RAID
array is destroyed, re-created and formatted.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;For every RAID level + chunk size, five tests are performed and averaged.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Data transfer speed is measured using the 'dd' utility with the option
bs=1M.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
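&lt;p&gt;The dd measurement itself can be scripted. The sketch below shows the timing loop, scaled down to a 10 MB file in the current directory for illustration; the real benchmark wrote a 10 GB file to the freshly formatted XFS array, and date +%s%N is GNU-specific:&lt;/p&gt;

```shell
SIZE_MB=10               # the article used a 10 GB file (10240 MB)
TESTFILE=testfile.bin

start=$(date +%s%N)
# Write SIZE_MB megabytes in 1 MB blocks; fsync before dd exits
dd if=/dev/zero of="$TESTFILE" bs=1M count="$SIZE_MB" conv=fsync 2>/dev/null
end=$(date +%s%N)

elapsed_ms=$(( (end - start) / 1000000 ))
[ "$elapsed_ms" -gt 0 ] || elapsed_ms=1   # guard against division by zero
echo "wrote $SIZE_MB MB in ${elapsed_ms} ms: $(( SIZE_MB * 1000 / elapsed_ms )) MB/s"
```

&lt;p&gt;For read tests, drop the page cache first (echo 3 &gt; /proc/sys/vm/drop_caches as root), otherwise you measure RAM speed instead of the array. Remove testfile.bin afterwards.&lt;/p&gt;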
&lt;h3&gt;Test results&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;Results of the tests performed with four drives:&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="/static/images/4drives.png"&gt;&lt;img alt="" src="/static/images/4drives.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Test results with six drives:&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="/static/images/6drives.png"&gt;&lt;img alt="" src="/static/images/6drives.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Analysis and conclusion&lt;/h3&gt;
&lt;p&gt;Based on the test results, several observations can be made. The first one is
that RAID levels with parity, such as RAID 5 and 6, seem to favor a smaller
chunk size of 64 KB.&lt;/p&gt;
&lt;p&gt;The RAID levels that only perform striping, such as RAID 0 and 10, prefer a
larger chunk size, with an optimum of 256 KB or even 512 KB.&lt;/p&gt;
&lt;p&gt;It is also noteworthy that RAID 5 and RAID 6 performance don't differ that
much.&lt;/p&gt;
&lt;p&gt;Furthermore, the theoretical transfer rates that should be achievable based on
the performance of a single drive are not met. The cause is unknown to me,
but overhead and the relatively weak CPU may play a part in this, and the
XFS file system may also be a factor. Overall, software RAID does not seem to
scale well on this system. Since my big storage monster (as seen on the left)
performs way better, I suspect a hardware limitation, possibly because the
M2A-VM consumer-grade motherboard can't go any faster.&lt;/p&gt;</content><category term="Storage"></category><category term="linux"></category><category term="mdadm"></category><category term="raid"></category><category term="chunk"></category><category term="size"></category><category term="benchmark"></category><category term="performance"></category></entry><entry><title>'Tool of the month: iftop - advanced bandwidth monitoring'</title><link href="https://louwrentius.com/tool-of-the-month-iftop-advanced-bandwidth-monitoring.html" rel="alternate"></link><published>2010-04-20T20:56:00+02:00</published><updated>2010-04-20T20:56:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-04-20:/tool-of-the-month-iftop-advanced-bandwidth-monitoring.html</id><summary type="html">&lt;p&gt;The utility &lt;a href="http://www.ex-parrot.com/pdw/iftop/"&gt;iftop&lt;/a&gt; allows you to monitor bandwidth usage. It is in some
sense similar to tools like iptraf, dstat and bwm-ng. Iftop is more special
than those. Because iftop lets you monitor the speed of individual TCP / UDP
connections. Basically, you will be able to determine how much traffic …&lt;/p&gt;</summary><content type="html">&lt;p&gt;The utility &lt;a href="http://www.ex-parrot.com/pdw/iftop/"&gt;iftop&lt;/a&gt; allows you to monitor bandwidth usage. It is in some
sense similar to tools like iptraf, dstat and bwm-ng, but iftop goes a step
further: it lets you monitor the speed of individual TCP / UDP
connections, so you can determine how much traffic is
flowing between two hosts.&lt;/p&gt;
&lt;p&gt;&lt;a href="http://www.ex-parrot.com/pdw/iftop/"&gt;It is definitely worth a try.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="iftop" src="https://louwrentius.com/static/images/iftop_normal.png" /&gt;&lt;/p&gt;</content><category term="Networking"></category><category term="Uncategorized"></category></entry><entry><title>WFS - WAN Failover Script now available</title><link href="https://louwrentius.com/wfs-wan-failover-script-now-available.html" rel="alternate"></link><published>2010-04-10T18:05:00+02:00</published><updated>2010-04-10T18:05:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-04-10:/wfs-wan-failover-script-now-available.html</id><summary type="html">&lt;p&gt;Since I could not find a WAN failover script for Linux to my likening, I wrote
one myself. If you have any use for it: I put it on a Google code project.&lt;/p&gt;
&lt;p&gt;WFS tests the availability of your primary WAN connection and switches to your
secondary / backup connection when …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Since I could not find a WAN failover script for Linux to my liking, I wrote
one myself. If you have any use for it, I put it on a Google Code project.&lt;/p&gt;
&lt;p&gt;WFS tests the availability of your primary WAN connection and switches to your
secondary / backup connection when a failure is detected. When in failover
mode, WFS continues to monitor the availability of the primary WAN connection
and once it becomes available again, it switches back.&lt;/p&gt;
&lt;p&gt;For more information and a download, please take a look at the project:&lt;/p&gt;
&lt;p&gt;&lt;a href="http://code.google.com/p/wanfailoverscript/"&gt;http://code.google.com/p/wanfailoverscript/&lt;/a&gt;&lt;/p&gt;</content><category term="Networking"></category><category term="Uncategorized"></category></entry><entry><title>HP Procurve "auto DoS" feature causing network problems</title><link href="https://louwrentius.com/hp-procurve-auto-dos-feature-causing-network-problems.html" rel="alternate"></link><published>2010-04-07T18:51:00+02:00</published><updated>2010-04-07T18:51:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-04-07:/hp-procurve-auto-dos-feature-causing-network-problems.html</id><summary type="html">&lt;p&gt;A feature on more recent HP Procurve models (18xx series, such as 1810G etc.)
is called "Auto DoS". You can find it in the section "Security" and then
"Advanced security".&lt;/p&gt;
&lt;p&gt;If you enable the Auto DoS feature, traffic is blocked based on one of these
conditions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;the source port (TCP …&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;</summary><content type="html">&lt;p&gt;A feature on more recent HP Procurve models (18xx series, such as 1810G etc.)
is called "Auto DoS". You can find it in the section "Security" and then
"Advanced security".&lt;/p&gt;
&lt;p&gt;If you enable the Auto DoS feature, traffic is blocked based on one of these
conditions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;the source port (TCP / UDP) is identical to the destination port (NTP,
SYSLOG, etc)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;the source port (TCP / UDP) is 'privileged', thus in the range 1 - 1023.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This will cause all kinds of problems, but first this: "Why on earth is a
Layer 2 device filtering on Layer 3 / 4 information?" This is just insane.&lt;/p&gt;
&lt;p&gt;NTP no longer works. Syslog traffic will not arrive. VPN traffic may not
arrive.&lt;/p&gt;
&lt;p&gt;This issue cost me a lot of time to solve. I first blamed our firewall, but
the traffic actually arrived on the tagged trunk port of the affected switch;
it was somehow just not forwarded to the switch port of the destination
device.&lt;/p&gt;
&lt;p&gt;Affected products:&lt;/p&gt;
&lt;p&gt;HP ProCurve 1810G - J9449A ( 8 ports ) and J9450A ( 24 ports )&lt;/p&gt;</content><category term="Networking"></category><category term="Uncategorized"></category></entry><entry><title>RAID array size and rebuild speed</title><link href="https://louwrentius.com/raid-array-size-and-rebuild-speed.html" rel="alternate"></link><published>2010-04-03T23:02:00+02:00</published><updated>2010-04-03T23:02:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-04-03:/raid-array-size-and-rebuild-speed.html</id><summary type="html">&lt;p&gt;When a disk fails of a RAID 5 array, you are no longer protected against
(another) disk failure and thus data loss. During this rebuild time, you are
vulnerable. The longer it takes to rebuild your array, the longer you are
vulnerable. Especially during a disk-intensive period, because the array …&lt;/p&gt;</summary><content type="html">&lt;p&gt;When a disk of a RAID 5 array fails, you are no longer protected against
(another) disk failure and thus against data loss. During the rebuild you are
vulnerable, and the longer the rebuild takes, the longer you are exposed.
This is especially true during disk-intensive periods, because the array must
be reconstructed.&lt;/p&gt;
&lt;p&gt;When one disk of a RAID 6 array fails, you are still protected against data
loss, because the array can survive a second disk failure. So RAID 6 is almost
always the better choice, especially with large disks (1+ TB), because the
rebuild time largely depends on the size of a single disk, not on the size of
the entire RAID array.&lt;/p&gt;
&lt;p&gt;However, there is one catch. The size of the RAID array matters once it
becomes big: 10 or more drives. Whether you use hardware- or software-based
RAID, the processor must read the contents of all drives simultaneously
and use that information to rebuild the replaced drive. With a large
RAID array such as my 20-disk storage array, the check and rebuild
of the array becomes CPU-bound.&lt;/p&gt;
&lt;p&gt;This is because the CPU must process 1.1 GB/s (as in gigabytes!) of data and
use that data stream to rebuild the single replaced drive. Using 1 TB drives, it
checks or rebuilds the array at about 50 MB/s, which is less than half of what
the drives are capable of (100+ MB/s). Top shows that the CPU is indeed almost
saturated (95%). A check or rebuild of my storage server currently
takes about 5 hours, but it could be considerably shorter if the CPU were
not saturated.&lt;/p&gt;
&lt;p&gt;My array is not for professional use and fast rebuild times are not that much
of an issue. But if you're more serious about your setup, it may be advisable to
create several smaller RAID volumes and glue them together using LVM or a
similar solution.&lt;/p&gt;
event occurs.&lt;/p&gt;
&lt;p&gt;The inotifywait comand line utility can be used in shell scripts to monitor
directories for new files. It can also be used to monitor files for changes.
Inotifywait must be installed and is often …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Inotify is a mechanism in the Linux kernel that reports when a file system
event occurs.&lt;/p&gt;
&lt;p&gt;The inotifywait command-line utility can be used in shell scripts to monitor
directories for new files. It can also be used to monitor files for changes.
Inotifywait must be installed separately, as it is often not part of the base
installation of your Linux distro. If you need to monitor a directory or perform
a similar task, it is worth the effort (apt-get install inotify-tools).&lt;/p&gt;
&lt;p&gt;Here is how you use it:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;inotifywait -m -r -e close_write /tmp/ | while read LINE; do echo $LINE |
awk '{ print $1 $3 }'; done&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Let's dissect this example one part at a time. The most interesting part is
this:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;inotifywait -m -r -e close_write /tmp/&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;What happens here? First, inotifywait monitors the /tmp directory. The
monitoring mode is specified with the -m option, otherwise inotifywait would
exit after the first event. The -r option specifies recursion, beware of large
directory trees. The -e option is the most important part. You only want to be
notified of new files if they are complete. So only after a close_write event
should your script be notified of an event. A 'create' event for example,
should not cause your script to perform any action, because the file would not
be ready yet.&lt;/p&gt;
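&lt;p&gt;The awk part of the one-liner only glues the watched directory and the file name together. Feeding it a hand-written sample event line (inotifywait prints the watched directory, the event list, and the file name) shows what it produces:&lt;/p&gt;

```shell
# Simulated inotifywait output line: DIR EVENTS FILENAME
echo "/tmp/ CLOSE_WRITE,CLOSE random.bin" | awk '{ print $1 $3 }'
# prints: /tmp/random.bin
```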
&lt;p&gt;The remaining part of the example is just to get output like this:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;/tmp/test1234/blablaf&lt;/p&gt;
&lt;p&gt;/tmp/test123&lt;/p&gt;
&lt;p&gt;/tmp/random.bin&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This output can be used as an argument to other scripts or functions,
in order to perform some kind of action on the file.&lt;/p&gt;
&lt;p&gt;This mechanism is specific to Linux, so it is not an OS-independent solution.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Linux"></category><category term="monitor"></category><category term="files"></category><category term="directories"></category><category term="asynchronous"></category></entry><entry><title>'Syslog: the hidden security risk'</title><link href="https://louwrentius.com/syslog-the-hidden-security-risk.html" rel="alternate"></link><published>2010-03-18T08:23:00+01:00</published><updated>2010-03-18T08:23:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-03-18:/syslog-the-hidden-security-risk.html</id><summary type="html">&lt;p&gt;People sometimes forget that there are also a number of UDP-based services
that may pose a threat to the security of your systems. SNMP is a well-known
service, notorious for being configured with a default password (or community
string).&lt;/p&gt;
&lt;p&gt;But there is another service that is often not seen as …&lt;/p&gt;</summary><content type="html">&lt;p&gt;People sometimes forget that there are also a number of UDP-based services
that may pose a threat to the security of your systems. SNMP is a well-known
service, notorious for being configured with a default password (or community
string).&lt;/p&gt;
&lt;p&gt;But there is another service that is often not seen as a risk: the
syslog service. Syslog is used on virtually all UNIX-like platforms to log
system messages to one or more log files on disk. The syslog service
often listens on the network, on UDP port 514. Please note that syslog does
not perform any authentication of data that is sent to it.&lt;/p&gt;
&lt;p&gt;So what does this mean?&lt;/p&gt;
&lt;p&gt;An attacker can:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a denial-of-service condition (DoS) by sending large amounts of
data to the syslog service, filling up disk space.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Hide an attack: once the disk is full, logs can no longer be saved, so any
attack that would leave a trail within the logs goes unnoticed.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Cause chaos by sending large amounts of specially crafted messages, if logs
are monitored by intrusion detection systems or other systems that create
alerts.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;How to attack? Just use netcat:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;nc -u [IP-address] 514&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Once you are connected, anything you type will be logged in a log file.&lt;/p&gt;
&lt;p&gt;How to mitigate this issue?&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Firewall access to UDP-port 514&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Make sure that the syslog service does not listen on the network if not
required, only on localhost.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;</content><category term="Security"></category><category term="Uncategorized"></category></entry><entry><title>Ubuntu and full disk encryption (FDE)</title><link href="https://louwrentius.com/ubuntu-and-full-disk-encryption-fde.html" rel="alternate"></link><published>2010-02-22T22:50:00+01:00</published><updated>2010-02-22T22:50:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-02-22:/ubuntu-and-full-disk-encryption-fde.html</id><summary type="html">&lt;p&gt;Ubuntu is based on Debian Linux. As part of a regular Debian installation, you
can choose to create an encrypted disk volume based on LUKS. This is different
from the option within the Ubuntu installation to encrypt home directories. To
be able to install Ubuntu and use full disk encryption …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Ubuntu is based on Debian Linux. As part of a regular Debian installation, you
can choose to create an encrypted disk volume based on LUKS. This is different
from the option within the Ubuntu installation to encrypt home directories. To
be able to install Ubuntu and use full disk encryption, you need to download
the &lt;a href="http://www.ubuntu.com/getubuntu/downloadmirrors#alternate"&gt;alternate install CD / DVD&lt;/a&gt;. Only this version of Ubuntu supports LUKS
as an installation option.&lt;/p&gt;
&lt;p&gt;You will have two options:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;use the default choice, creating a swap partition, boot partition and the
encrypted root file system on top of LVM;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;create separate encrypted partitions yourself manually.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Personally, I don't care for separate partitions and use the provided automatic
option. If you do care, &lt;a href="http://learninginlinux.com/2008/04/23/installing-ubuntu-804-with-full-disk-encryption/"&gt;please read this blog for more info&lt;/a&gt;.&lt;/p&gt;</content><category term="Security"></category><category term="Uncategorized"></category></entry><entry><title>Scanning many hosts in parallel with Nmap using PPSS</title><link href="https://louwrentius.com/scanning-many-hosts-in-parallel-with-nmap-using-ppss.html" rel="alternate"></link><published>2010-02-18T21:41:00+01:00</published><updated>2010-02-18T21:41:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-02-18:/scanning-many-hosts-in-parallel-with-nmap-using-ppss.html</id><summary type="html">&lt;p&gt;Scanning a large number of hosts using Nmap often takes a lot of time. During
this time, no output is written to a file or disk. Only when Nmap is finished,
is all output written to the output file. Often, I want to start processing
results of hosts that have …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Scanning a large number of hosts using Nmap often takes a lot of time. During
this time, no output is written to a file or disk. Only when Nmap is finished,
is all output written to the output file. I often want to start processing
results of hosts that have already been scanned. The usual trick is to split
the input file containing all the hosts and start multiple Nmap instances by
hand, one per input file, which is rather cumbersome. What I really want is the
result of a scan of a particular host to become available as soon as that host
is finished. This is where PPSS comes in. PPSS can start Nmap scans, processing
a list of hosts contained in an input file. PPSS will only run a predefined
maximum number of scans in parallel, so as not to overwhelm the scanner, the
network or the target hosts. This is an example of how PPSS can be used to
obtain results immediately:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;./ppss -f hosts.txt -c 'nmap -n -v -sS -A -p- -oN "$ITEM" "$ITEM"' -p 4&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Where hosts.txt contains IP-addresses, networks or domain names like:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;192.168.0.1&lt;/p&gt;
&lt;p&gt;192.168.0.2&lt;/p&gt;
&lt;p&gt;192.168.0.3&lt;/p&gt;
&lt;p&gt;192.168.1.1-254&lt;/p&gt;
&lt;p&gt;www.google.nl&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The '$ITEM' part is the fun bit. In this example, each Nmap instance scans a single host. The output is written to a file called "$ITEM", where $ITEM is of course substituted with the IP-address or domain name read from hosts.txt. The second "$ITEM" is the argument that tells Nmap which host to scan. The -p 4 option tells PPSS to keep 4 Nmap scans running simultaneously at all times.&lt;/p&gt;
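&lt;p&gt;As a sketch of that workflow, finished per-host output files can be processed as soon as they appear. The file name and report line below are fabricated stand-ins, not real Nmap output:&lt;/p&gt;

```shell
# Hypothetical post-processing loop for PPSS-style per-host Nmap output.
# Each scanned item produces one file named after the host ("$ITEM");
# here we fabricate one such file to stand in for a finished scan.
printf 'Discovered open port 80/tcp on 192.168.0.1\n' > 192.168.0.1

# List every host file that reports at least one open port.
for f in 192.168.*; do
    grep -l 'open port' "$f"
done
```

&lt;p&gt;In a real run, a loop like this could be started alongside PPSS and re-run periodically, picking up results without waiting for the whole scan to finish.&lt;/p&gt;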
&lt;p&gt;You will end up with a large number of output files, one per host. As soon as
a scan is finished on one host, you can start processing the results, instead
of waiting for that big scan to finish.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>Corsair CM PSU-750HX seems ok</title><link href="https://louwrentius.com/corsair-cm-psu-750hx-seems-ok.html" rel="alternate"></link><published>2010-02-10T23:59:00+01:00</published><updated>2010-02-10T23:59:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-02-10:/corsair-cm-psu-750hx-seems-ok.html</id><summary type="html">&lt;p&gt;I had to replace my Coolermaster PSU and after some searching on the interweb,
I chose the Corsair CMPSU-750HX. One of the reasons is that Corsair states
that this PSU can withstand 50 degree Celcius in continuous operation on full
load.&lt;/p&gt;
&lt;p&gt;The package is very, clean, with all the cables …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I had to replace my Coolermaster PSU and after some searching on the interweb,
I chose the Corsair CMPSU-750HX. One of the reasons is that Corsair states
that this PSU can withstand 50 degrees Celsius in continuous operation at full load.&lt;/p&gt;
&lt;p&gt;The package is very clean, with all the cables in a neat pouch and the PSU itself also in a neat bag. You pay quite some money, but it at least suggests quality. The modular design with detachable cables is excellent.&lt;/p&gt;
&lt;p&gt;Personally, I think that PSUs with multiple 12v rails are not that useful. This particular PSU has a single 12v rail that can be loaded up to the full capacity of the PSU. I think this makes the design simpler, and if you are running heavy systems that draw short spikes (when spinning up all drives at once) this is ideal.&lt;/p&gt;
&lt;p&gt;I hope this one lasts longer.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>Smoking coolermaster Silent Pro M 600W TN6M50</title><link href="https://louwrentius.com/smoking-coolermaster-silent-pro-m-600w-tn6m50.html" rel="alternate"></link><published>2010-02-10T00:46:00+01:00</published><updated>2010-02-10T00:46:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-02-10:/smoking-coolermaster-silent-pro-m-600w-tn6m50.html</id><summary type="html">&lt;p&gt;So I was just messing around on my work station, when suddenly I smelled the
smell any person familiar with electronics fears. The smell of some electrical
component burning.&lt;/p&gt;
&lt;p&gt;The whole upper floor smelled like something was smoldering. So I shut down
both my storage servers. After half an hour …&lt;/p&gt;</summary><content type="html">&lt;p&gt;So I was just messing around on my work station, when suddenly I smelled the
smell any person familiar with electronics fears. The smell of some electrical
component burning.&lt;/p&gt;
&lt;p&gt;The whole upper floor smelled like something was smoldering. So I shut down
both my storage servers. After half an hour I still hadn't found the source of
the smell, but it definitely was my 18 TB NAS.&lt;/p&gt;
&lt;p&gt;So I switched it back on to see if I could determine the cause. The smell became stronger, and in the beam of my flashlight I saw small strands of grey smoke slowly twirling out of the Coolermaster Silent Pro M 600W power supply.&lt;/p&gt;
&lt;p&gt;That was the moment I decided to turn the system off immediately. I know this
kind of thing can happen. But it is still very bad for such a component to
crap out on me like that.&lt;/p&gt;
&lt;p&gt;Needless to say, I don't want a replacement of this model in my server. I will now try my luck with the Corsair CMPSU-750HX 750 Watt PSU.&lt;/p&gt;
&lt;p&gt;Based on a single experience it cannot be concluded that this Coolermaster PSU
is a bad one that should be avoided. But the only moving part of a PSU is the
(big) fan. What must be wrong with it that it decided to start smoking?&lt;/p&gt;
&lt;p&gt;For reasons of availability, a redundant power supply should be used, but that
is just too expensive for my taste ($500+).&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>Is the iPhone OS a threat to our freedom?</title><link href="https://louwrentius.com/is-the-iphone-os-a-threat-to-our-freedom.html" rel="alternate"></link><published>2010-01-31T19:55:00+01:00</published><updated>2010-01-31T19:55:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-01-31:/is-the-iphone-os-a-threat-to-our-freedom.html</id><summary type="html">&lt;p&gt;There is one fundamental flaw with both the iPhone and the iPad. As a user,
you do not have full control over your device. You can only install or run
software that is approved by Apple.&lt;/p&gt;
&lt;p&gt;This is something that is unprecedented. All major platforms, Windows, Linux,
Mac OS X …&lt;/p&gt;</summary><content type="html">&lt;p&gt;There is one fundamental flaw with both the iPhone and the iPad. As a user,
you do not have full control over your device. You can only install or run
software that is approved by Apple.&lt;/p&gt;
&lt;p&gt;This is unprecedented. All major platforms (Windows, Linux, Mac OS X) give the user full control over which programs can be run.&lt;/p&gt;
&lt;p&gt;Is this a bad thing? Are you trading your freedom for some luxurious gadget
such as the iPhone or the iPad?&lt;/p&gt;
&lt;p&gt;I say no. People are overreacting.&lt;/p&gt;
&lt;p&gt;You would trade in your freedom and become 'owned' by Apple, if an iPhone or
iPad is your sole computing device.&lt;/p&gt;
&lt;p&gt;However, almost everybody also has a computer at home that is still under their full control. Since the iPhone and iPad are just auxiliary devices (although quite powerful), you are still free.&lt;/p&gt;
&lt;p&gt;Most importantly, I think it is about the content, not the device. Do you retain control over your own content, such as music, files and e-mail? I believe so. On this level, which is the most crucial one, Apple does not restrict
people on iPhones or iPads in any way. And that is what matters most I think.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="iphone"></category><category term="ipad"></category><category term="freedom"></category></entry><entry><title>Why the iPad will be a breakthrough in human computing</title><link href="https://louwrentius.com/why-the-ipad-will-be-a-breakthrough-in-human-computing.html" rel="alternate"></link><published>2010-01-30T10:50:00+01:00</published><updated>2010-01-30T10:50:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-01-30:/why-the-ipad-will-be-a-breakthrough-in-human-computing.html</id><summary type="html">&lt;p&gt;The fact is that computers aren't made for humans. Computers are just made,
and humans have to adjust to them.&lt;/p&gt;
&lt;p&gt;The problem is that most people that are not into technology just don't
understand how computers work. Should I single click of double click? Click
left or right?&lt;/p&gt;
&lt;p&gt;The only …&lt;/p&gt;</summary><content type="html">&lt;p&gt;The fact is that computers aren't made for humans. Computers are just made,
and humans have to adjust to them.&lt;/p&gt;
&lt;p&gt;The problem is that most people who are not into technology just don't understand how computers work. Should I single-click or double-click? Click left or right?&lt;/p&gt;
&lt;p&gt;The only people who can work with some ease on computers are people with a more than average interest in technology. Ordinary people often have to fight great battles to get the job done. The term 'ordinary' does no justice to these people, because technologists seem to forget that &lt;a href="http://speirs.org/blog/2010/1/29/future-shock.html"&gt;they are the people that actually do something useful&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The iPad is the first personal computer that is actually made for people who do not have a primary interest in computers. The entire computer is abstracted away to focus on the things you really want to do. People no longer need the help of technologists to do the things they want.&lt;/p&gt;
&lt;p&gt;The iPhone was the first of this new kind of computer, but due to its size it is still a bit of a gimmick. Still, it was a revolution within the world of smartphones: the first device that was actually usable for non-technologists. And the iPad will be the first 'regular' computer that
(hopefully) won't be such a nuisance as current home computers are.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>The iPad will be the death of Flash</title><link href="https://louwrentius.com/the-ipad-will-be-the-death-of-flash.html" rel="alternate"></link><published>2010-01-30T10:16:00+01:00</published><updated>2010-01-30T10:16:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-01-30:/the-ipad-will-be-the-death-of-flash.html</id><summary type="html">&lt;p&gt;So Apple finally released their tablet computer: the iPad. One of the most
debated drawbacks is that it lacks support for Adobe Flash. The iPhone does
not support Flash either, and since the iPad is based on the iPhone OS, this
should not come as a surprise.&lt;/p&gt;
&lt;p&gt;Now many people …&lt;/p&gt;</summary><content type="html">&lt;p&gt;So Apple finally released their tablet computer: the iPad. One of the most
debated drawbacks is that it lacks support for Adobe Flash. The iPhone does
not support Flash either, and since the iPad is based on the iPhone OS, this
should not come as a surprise.&lt;/p&gt;
&lt;p&gt;Now many people see this as a serious problem. &lt;a href="http://www.kungfugrippe.com/post/360209059/the-flash-blog-the-ipad-provides-the-ultimate"&gt;There is a slew of websites that cannot be viewed on an iPad for this reason&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;However, the thing is this: Flash sucks. It is a proprietary technology under
the control of Adobe and the only reason it is currently popular is that it
provides a kind of standard for video playback across all browsers and
operating systems. But it still sucks. It crashes. It is shit.&lt;/p&gt;
&lt;p&gt;The iPhone is immensely popular, despite the lack of Flash. The iPhone user
base is huge. By using Flash in your website, you are excluding a large
population of potential visitors. So this is what will happen: websites will
start dumping Flash. The iPhone and the iPad will together kill Flash.&lt;/p&gt;
&lt;p&gt;And that is a good thing for everybody, especially for proponents of open
standards and open formats. Oh irony, that it will take a closed proprietary
platform to do so.&lt;/p&gt;
</content><category term="Uncategorized"></category><category term="ipad"></category><category term="iphone"></category><category term="kill"></category><category term="flash"></category></entry><entry><title>Mounting a file system or partition from a disk image</title><link href="https://louwrentius.com/mounting-a-file-system-or-partition-from-a-disk-image.html" rel="alternate"></link><published>2010-01-23T10:13:00+01:00</published><updated>2010-01-23T10:13:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-01-23:/mounting-a-file-system-or-partition-from-a-disk-image.html</id><summary type="html">&lt;p&gt;You cannot just make a disk copy with dd and then just mount it as a regular
disk. You must know where the partition starts on the disk. So first, you need
to get the partition table with sfdisk:&lt;/p&gt;
&lt;p&gt;sfdisk -l -uS image_file.dd&lt;/p&gt;
&lt;p&gt;The output is something like:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Disk …&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</summary><content type="html">&lt;p&gt;You cannot just make a disk copy with dd and then just mount it as a regular
disk. You must know where the partition starts within the image. So first, you need to get the partition table with sfdisk:&lt;/p&gt;
&lt;p&gt;sfdisk -l -uS image_file.dd&lt;/p&gt;
&lt;p&gt;The output is something like:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Disk /mnt/image/image_file.dd: 9729 cylinders, 255 heads, 63 sectors/track
Warning: The partition table looks like it was made
for C/H/S=*/240/63 (instead of 9729/255/63).

For this listing I&amp;#39;ll assume that geometry.

Units = sectors of 512 bytes, counting from 0

Device Boot Start End #sectors Id System

/mnt/image/simon-besmet.img1 63 8497439 8497377 b W95 FAT32
/mnt/image/simon-besmet.img2 * 8497440 156280319 147782880 7 HPFS/NTFS
/mnt/image/simon-besmet.img3 0 - 0 0 Empty
/mnt/image/simon-besmet.img4 0 - 0 0 Empty
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Next, we need to calculate the actual starting point of the partition in bytes. The start column is expressed in sectors of 512 bytes each. So in this case, the starting point of the NTFS partition is 8497440 x 512 = 4350689280 bytes.&lt;/p&gt;
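&lt;p&gt;The same arithmetic, as a small shell sketch (the sector number is taken from the sfdisk listing above; 512-byte sectors are assumed):&lt;/p&gt;

```shell
# Convert a partition's starting sector (from `sfdisk -l -uS`) into the
# byte offset that mount's offset= option expects.
start_sector=8497440          # start of the NTFS partition, in sectors
sector_size=512               # bytes per sector (assumed)
offset=$(( start_sector * sector_size ))
echo "$offset"                # prints 4350689280
```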
&lt;p&gt;To mount the image, enter the following command, using the calculated offset:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;mount -o loop,offset=4350689280 /mnt/image/disk_image.dd /mnt/disk
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Source:&lt;/p&gt;
&lt;p&gt;http://lists.samba.org/archive/linux/2005-April/013444.html&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>'Linux: show graphical layout of disk temperatures'</title><link href="https://louwrentius.com/linux-show-graphical-layout-of-disk-temperatures.html" rel="alternate"></link><published>2010-01-03T03:12:00+01:00</published><updated>2010-01-03T03:12:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2010-01-03:/linux-show-graphical-layout-of-disk-temperatures.html</id><summary type="html">&lt;p&gt;graphic, representation&lt;/p&gt;
&lt;p&gt;To get a visual representation of hard drive temperatures, I wrote a small
script. The output of this script looks like this:&lt;/p&gt;
&lt;p&gt;&lt;a href="http://members.multiweb.nl/nan1/img/drivetempcolour.jpg"&gt;&lt;img alt="" src="http://members.multiweb.nl/nan1/img/drivetempcolour.jpg" /&gt;&lt;/a&gt;This output is tailored to the exact disk lay-out of my storage
server. However, it is also usable for other servers. You have to edit the …&lt;/p&gt;</summary><content type="html">
&lt;p&gt;To get a visual representation of hard drive temperatures, I wrote a small
script. The output of this script looks like this:&lt;/p&gt;
&lt;p&gt;&lt;a href="http://members.multiweb.nl/nan1/img/drivetempcolour.jpg"&gt;&lt;img alt="" src="http://members.multiweb.nl/nan1/img/drivetempcolour.jpg" /&gt;&lt;/a&gt;This output is tailored to the exact disk lay-out of my storage
server. However, it is also usable for other servers. You have to edit the
lay-out depending on the system.&lt;/p&gt;
&lt;p&gt;This script assumes that you have smartmontools installed on your system. It
uses smartctl to obtain the drive temperatures.&lt;/p&gt;
&lt;p&gt;The script can be found &lt;a href="http://members.multiweb.nl/nan1/got/show-hdd-temp.sh"&gt;here&lt;/a&gt; or &lt;a href="http://members.multiweb.nl/nan1/got/show-hdd-temp.tgz"&gt;here in tgz format&lt;/a&gt;.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="linux"></category><category term="hard"></category><category term="disk"></category><category term="drive"></category><category term="temperature"></category><category term="smartmontools"></category><category term="script"></category><category term="colour"></category><category term="layout"></category></entry><entry><title>Recovering a lost partition using gpart</title><link href="https://louwrentius.com/recovering-a-lost-partition-using-gpart.html" rel="alternate"></link><published>2009-12-22T19:12:00+01:00</published><updated>2009-12-22T19:12:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-12-22:/recovering-a-lost-partition-using-gpart.html</id><summary type="html">&lt;p&gt;Even today people do not understand how important it is for the safety of your
data to make backups. I was asked to perform some data recovery on a hard
drive of an old computer, which still contained important documents and
photo's.&lt;/p&gt;
&lt;p&gt;The first thing I did was to make …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Even today, people do not understand how important backups are for the safety of their data. I was asked to perform some data recovery on the hard drive of an old computer, which still contained important documents and photos.&lt;/p&gt;
&lt;p&gt;The first thing I did was to make a disk image with ddrescue. I always work
with the image and not with the original drive, to prevent any risk of
accidentally messing things up for good.&lt;/p&gt;
&lt;p&gt;Example:&lt;/p&gt;
&lt;p&gt;ddrescue -r 2 -v /dev/sdf /storage/image/diskofperson.dd&lt;/p&gt;
&lt;p&gt;Next, I tried using parted on this image, but got this error:&lt;/p&gt;
&lt;p&gt;Welcome to GNU Parted! Type 'help' to view a list of commands.&lt;/p&gt;
&lt;p&gt;(parted) p&lt;/p&gt;
&lt;p&gt;Error: /storage/image/diskofperson.dd: unrecognised disk label&lt;/p&gt;
&lt;p&gt;(parted) quit&lt;/p&gt;
&lt;p&gt;Also fdisk -l didn't work:&lt;/p&gt;
&lt;p&gt;Disk /storage/image/diskofperson.dd doesn't contain a valid partition table&lt;/p&gt;
&lt;p&gt;It seemed that the partition table was gone. I used the utility testdisk to
recover this partition, to no avail. Why this tool didn't work is beyond me.&lt;/p&gt;
&lt;p&gt;I found a very old utility called 'gpart' that simply searches a disk for existing partitions. All I wanted to know was the starting offset of the relevant partition.&lt;/p&gt;
&lt;p&gt;So I ran:&lt;/p&gt;
&lt;p&gt;gpart -g /storage/image/diskofperson.dd&lt;/p&gt;
&lt;p&gt;And I got nothing useful, although a partition was found:&lt;/p&gt;
&lt;p&gt;Begin scan...&lt;/p&gt;
&lt;p&gt;Possible partition(DOS FAT), size(57255mb), offset(0mb)&lt;/p&gt;
&lt;p&gt;End scan.&lt;/p&gt;
&lt;p&gt;So I ran the command again with more verbosity:&lt;/p&gt;
&lt;p&gt;gpart -v -g /storage/image/diskofperson.dd&lt;/p&gt;
&lt;p&gt;...&lt;/p&gt;
&lt;p&gt;Begin scan...&lt;/p&gt;
&lt;p&gt;Possible partition(DOS FAT), size(57255mb), offset(0mb)&lt;/p&gt;
&lt;p&gt;type: 011(0x0B)(DOS or Windows 95 with 32 bit FAT)&lt;/p&gt;
&lt;p&gt;size: 57255mb #s(117258372) s(63-117258434)&lt;/p&gt;
&lt;p&gt;chs: (1023/255/0)-(1023/255/0)d (0/0/0)-(0/0/0)r&lt;/p&gt;
&lt;p&gt;hex: 00 FF C0 FF 0B FF C0 FF 3F 00 00 00 84 38 FD 06&lt;/p&gt;
&lt;p&gt;End scan.&lt;/p&gt;
&lt;p&gt;...&lt;/p&gt;
&lt;p&gt;This time I got something useful. The s(63-117258434) part shows the starting sector, which is 63. A sector is 512 bytes, so the exact starting offset of the partition is 63 x 512 = 32256 bytes.&lt;/p&gt;
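&lt;p&gt;For what it's worth, the starting sector can also be pulled out of a saved gpart report with a bit of shell; the report line below is copied from the verbose output above:&lt;/p&gt;

```shell
# Extract the starting sector from gpart's "s(start-end)" field and
# convert it to the byte offset that mount's offset= option expects.
line='size: 57255mb #s(117258372) s(63-117258434)'
start=$(printf '%s\n' "$line" | sed 's/.*s(\([0-9]*\)-.*/\1/')
echo $(( start * 512 ))       # prints 32256
```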
&lt;p&gt;So to mount this partition, just issue:&lt;/p&gt;
&lt;p&gt;mount -o loop,ro,offset=32256 /storage/image/diskofperson.dd /mnt/recovery&lt;/p&gt;
&lt;p&gt;And voilà, access to the filesystem has been obtained.&lt;/p&gt;
&lt;p&gt;/storage/image/diskofperson.dd on /mnt/recovery type vfat
(ro,loop=/dev/loop0,offset=32256)&lt;/p&gt;</content><category term="Uncategorized"></category><category term="data"></category><category term="recovery"></category><category term="gpart"></category><category term="linux"></category><category term="partition"></category><category term="search"></category></entry><entry><title>Wake On Lan not working with realtek r8168 card</title><link href="https://louwrentius.com/wake-on-lan-not-working-with-realtek-r8168-card.html" rel="alternate"></link><published>2009-12-20T15:18:00+01:00</published><updated>2009-12-20T15:18:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-12-20:/wake-on-lan-not-working-with-realtek-r8168-card.html</id><summary type="html">&lt;p&gt;After messing around with different kernels on Debian Lenny, I noticed that my
system did no longer respond to WOL packets. The system is using an on-board
Realtek R8168 chip which supports wake on lan.&lt;/p&gt;
&lt;p&gt;After searching and reading i found the problem. The system is using an r8168
chip …&lt;/p&gt;</summary><content type="html">&lt;p&gt;After messing around with different kernels on Debian Lenny, I noticed that my
system no longer responded to WOL packets. The system uses an on-board Realtek R8168 chip, which supports Wake-on-LAN.&lt;/p&gt;
&lt;p&gt;After searching and reading, I found the problem. The system has an r8168 chip, but the driver loaded by Linux is the r8169 driver. If the r8168 driver is loaded, as it should be, then WOL works again.&lt;/p&gt;
&lt;p&gt;See also &lt;a href="https://bugs.launchpad.net/linux/+bug/160413"&gt;this information&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The trick is to blacklist the r8169 module. To do so, edit the /etc/modprobe.d/blacklist file and add a line reading 'blacklist r8169'.&lt;/p&gt;
&lt;p&gt;To make this permanent, issue the following commands as root or with sudo:&lt;/p&gt;
&lt;p&gt;depmod -a&lt;/p&gt;
&lt;p&gt;mkinitramfs -o /boot/initrd.img-$(uname -r) $(uname -r)&lt;/p&gt;
&lt;p&gt;After a reboot, wake on lan should be working again.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="debian"></category><category term="wake"></category><category term="on"></category><category term="lan"></category><category term="wol"></category><category term="realtek"></category><category term="r8168"></category><category term="r8169"></category></entry><entry><title>Shunit2, unit testing for shell scripts</title><link href="https://louwrentius.com/shunit2-unit-testing-for-shell-scripts.html" rel="alternate"></link><published>2009-12-17T16:47:00+01:00</published><updated>2009-12-17T16:47:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-12-17:/shunit2-unit-testing-for-shell-scripts.html</id><summary type="html">&lt;p&gt;This may be of interest to people who are as stupid as I am and write
elaborate shell scripts instead of using a proper scripting language such as
Python or Ruby. No I am deliberately not mentioning Perl here.&lt;/p&gt;
&lt;p&gt;Anyway, testing is always an issue. With PPSS, I encountered many …&lt;/p&gt;</summary><content type="html">&lt;p&gt;This may be of interest to people who are as stupid as I am and write
elaborate shell scripts instead of using a proper scripting language such as
Python or Ruby. No, I am deliberately not mentioning Perl here.&lt;/p&gt;
&lt;p&gt;Anyway, testing is always an issue. With PPSS, I found many times that a change in one place broke something in another. Manually testing whether a change screwed things up is just tedious and a pain.&lt;/p&gt;
&lt;p&gt;As any person who is semi-serious about programming should have done already, I started to write some unit tests. For bash scripts, there is a unit test framework called &lt;a href="http://code.google.com/p/shunit2/"&gt;shunit2&lt;/a&gt;. It takes some effort to write tests, but once you have written some, watching tests that passed before fail after a code change is priceless, because you often would not have found the issue until much later.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>Google DNS - what to think of it?</title><link href="https://louwrentius.com/google-dns-what-to-think-of-it.html" rel="alternate"></link><published>2009-12-03T18:42:00+01:00</published><updated>2009-12-03T18:42:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-12-03:/google-dns-what-to-think-of-it.html</id><summary type="html">&lt;p&gt;Google now provide an open DNS service. At first I was scared that they use
their service to get information about users.&lt;/p&gt;
&lt;p&gt;However, their &lt;a href="http://code.google.com/intl/nl-NL/speed/public-dns/privacy.html"&gt;privacy statement&lt;/a&gt; tells us that no personal identifiable
information is stored for longer than 48 hours. The permanent logs do not
contain such information. The most …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Google now provide an open DNS service. At first I was scared that they use
their service to get information about users.&lt;/p&gt;
&lt;p&gt;However, their &lt;a href="http://code.google.com/intl/nl-NL/speed/public-dns/privacy.html"&gt;privacy statement&lt;/a&gt; tells us that no personally identifiable information is stored for longer than 48 hours. The permanent logs do not contain such information. The most notable information that is stored permanently is the requested domain. The other thing is that they record the location of the request at the city level, and even that info is only stored for two weeks. After that, only a small random sample of it is kept permanently.&lt;/p&gt;
&lt;p&gt;The main question now is: who do you trust more, Google or your ISP?&lt;/p&gt;
&lt;p&gt;To be honest, I think I trust Google more than my ISP. I don't think my ISP
adheres to the same privacy policy.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>Convert dos file to linux format with vim</title><link href="https://louwrentius.com/convert-dos-file-to-linux-format-wit-vim.html" rel="alternate"></link><published>2009-12-03T17:12:00+01:00</published><updated>2009-12-03T17:12:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-12-03:/convert-dos-file-to-linux-format-wit-vim.html</id><summary type="html">&lt;p&gt;If you have some file that has been saved on a (Win)do(w)s host you may notice
that all lines end with ^M.&lt;/p&gt;
&lt;p&gt;To fix this, you can use the tool dos2unix. If however this tool is not at
your disposal, you may have a problem. If vim …&lt;/p&gt;</summary><content type="html">&lt;p&gt;If you have some file that has been saved on a (Win)do(w)s host you may notice
that all lines end with ^M.&lt;/p&gt;
&lt;p&gt;To fix this, you can use the tool dos2unix. If, however, this tool is not at your disposal, you may have a problem. If vim is available, you do not:&lt;/p&gt;
&lt;p&gt;The trick is to delete all occurrences of ^M with this line:&lt;/p&gt;
&lt;p&gt;:%s/ctrl-v ctrl-m//g&lt;/p&gt;
&lt;p&gt;This will look like:&lt;/p&gt;
&lt;p&gt;:%s/^M//g&lt;/p&gt;
&lt;p&gt;Just save the file and you're done.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>Which file system for a large storage array under Linux?</title><link href="https://louwrentius.com/which-file-system-for-a-large-storage-array-under-linux.html" rel="alternate"></link><published>2009-11-30T13:03:00+01:00</published><updated>2009-11-30T13:03:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-11-30:/which-file-system-for-a-large-storage-array-under-linux.html</id><summary type="html">&lt;p&gt;There are many file systems available under Linux, however only a few of them
can be used for a large storage array.&lt;/p&gt;
&lt;p&gt;I am assuming that you want to create a single file system. I don't care if
you use LVM or other layers beneath, this is about which file …&lt;/p&gt;</summary><content type="html">&lt;p&gt;There are many file systems available under Linux, however only a few of them
can be used for a large storage array.&lt;/p&gt;
&lt;p&gt;I am assuming that you want to create a single file system. I don't care if
you use LVM or other layers beneath, this is about which file system to use.&lt;/p&gt;
&lt;p&gt;I will discuss each file system in short.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;EXT3&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Old and trusted, but slow; it wastes tremendous amounts of free space and has a 16 TB limit. Not recommended: at 16 TB, you will lose significant amounts of free space.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;JFS&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Developed by IBM, it seems an OK file system and it supports 16+ TB file systems. However, it has been abandoned by IBM and does not seem to have a future. The reason I did not choose JFS is that it does not seem to be in widespread use, and I do not want to take any risk in this regard: I might encounter some rare bug, because I would be one of the few Linux JFS users creating 16+ TB file systems.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;XFS&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A file system that works. It supports 16+ TB file systems and is widely used. It has some drawbacks, such as data loss (files are zeroed) when something goes wrong (power loss). However, nobody who is interested in creating a large file system is going to run the box without a UPS.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Reiserfs&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The guy who designed and wrote it is in jail for killing his wife. Not much future here, I suppose. Also, it does not support file systems bigger than 16 TB.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Reiser4&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A newer version of Reiserfs, which is still not available as part of a stable Linux kernel.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;EXT4&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Ext3 upgraded to current-day needs, although with a drawback: on paper it supports 16+ TB file systems, but in practice it currently does not. I found out the hard way and had to reformat my array with another file system.&lt;/p&gt;
&lt;p&gt;My choice:&lt;/p&gt;
&lt;p&gt;Honestly, there is currently not much choice under Linux other than XFS. In the past, I had some quirks with file names containing strange characters causing trouble. However, I never lost any data whatsoever.&lt;/p&gt;
&lt;p&gt;Currently, what I'm really hoping for is ZFS for Linux or a stable version of
btrfs. These are truly modern day file systems with support for snapshots,
etc. But this is still a dream. Until then, I will stick with XFS.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>'Tip of the day: Scrolling in GNU screen'</title><link href="https://louwrentius.com/tip-of-the-day-scrolling-in-gnu-screen.html" rel="alternate"></link><published>2009-11-30T11:44:00+01:00</published><updated>2009-11-30T11:44:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-11-30:/tip-of-the-day-scrolling-in-gnu-screen.html</id><summary type="html">&lt;p&gt;Just the answer:&lt;/p&gt;
&lt;p&gt;By default, on Debian, the scroll back buffer is about 1K lines. This can be
changed in the .screenrc file in your home directory. The following example
sets the scroll back buffer to 5K lines.&lt;/p&gt;
&lt;p&gt;defscrollback 5000&lt;/p&gt;
&lt;p&gt;Set the scroll back buffer on the fly with:&lt;/p&gt;
&lt;p&gt;First …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Just the answer:&lt;/p&gt;
&lt;p&gt;By default, on Debian, the scroll back buffer is about 1K lines. This can be
changed in the .screenrc file in your home directory. The following example
sets the scroll back buffer to 5K lines.&lt;/p&gt;
&lt;p&gt;defscrollback 5000&lt;/p&gt;
&lt;p&gt;Set the scroll back buffer on the fly with:&lt;/p&gt;
&lt;p&gt;First enter the command line mode:&lt;/p&gt;
&lt;p&gt;ctrl + a, :&lt;/p&gt;
&lt;p&gt;Then enter:&lt;/p&gt;
&lt;p&gt;scrollback 2000&lt;/p&gt;
&lt;p&gt;To set the scroll back buffer to 2000 lines.&lt;/p&gt;
&lt;p&gt;To actually scroll back (the real reason you may be reading this post):&lt;/p&gt;
&lt;p&gt;First enter copy mode:&lt;/p&gt;
&lt;p&gt;ctrl + a, [&lt;/p&gt;
&lt;p&gt;Use standard VI controls to navigate through the lines (h j k l).&lt;/p&gt;
&lt;p&gt;Use ctrl+b to scroll a full page up.&lt;/p&gt;
&lt;p&gt;Use ctrl+f to scroll a full page down.&lt;/p&gt;
&lt;p&gt;For more details, please visit &lt;a href="http://www.samsarin.com/blog/2007/03/11/gnu-screen-working-with-the-scrollback-buffer/"&gt;this link&lt;/a&gt;.&lt;/p&gt;
</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>'Gzip with parallel compression support: pigz'</title><link href="https://louwrentius.com/gzip-with-parallel-compression-support-pigz.html" rel="alternate"></link><published>2009-11-22T18:29:00+01:00</published><updated>2009-11-22T18:29:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-11-22:/gzip-with-parallel-compression-support-pigz.html</id><summary type="html">&lt;p&gt;The speed at which files are compressed with gzip is currently almost always
determined by the speed of the CPU. However, standard Unix gzip is
single-threaded and only uses a single CPU (core).&lt;/p&gt;
&lt;p&gt;Fortunately, the maintainer of the zlib library has released &lt;a href="http://www.zlib.net/pigz/"&gt;'pigz' or 'pig-zee'&lt;/a&gt;, which adds just that …&lt;/p&gt;</summary><content type="html">&lt;p&gt;The speed at which files are compressed with gzip is currently almost always
determined by the speed of the CPU. However, standard Unix gzip is
single-threaded and only uses a single CPU (core).&lt;/p&gt;
&lt;p&gt;Fortunately, the maintainer of the zlib library has released &lt;a href="http://www.zlib.net/pigz/"&gt;'pigz' or 'pig-zee'&lt;/a&gt;, which adds just that: support for parallel compression. This dramatically
improves the speed at which a file can be gzipped.&lt;/p&gt;
&lt;p&gt;In this example, a 3 GB compressible file is gzipped:&lt;/p&gt;
&lt;p&gt;Gzip:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@Core7i:~# time gzip pigz.bin

real    1m58.994s
user    1m56.480s
sys     0m1.820s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Pigz:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;root@Core7i:~# time pigz pigz.bin

real    0m31.524s
user    2m54.890s
sys     0m2.900s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;This simple and a bit unscientific example shows a roughly 4x speedup.
Since the Core i7 has four real cores, this shows that pigz scales nicely.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>How to determine which process causes IO ?</title><link href="https://louwrentius.com/how-to-determine-which-process-causes-io.html" rel="alternate"></link><published>2009-11-22T15:19:00+01:00</published><updated>2009-11-22T15:19:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-11-22:/how-to-determine-which-process-causes-io.html</id><summary type="html">&lt;p&gt;There is a nifty little program called 'iotop'. Iotop is part of Debian or
Ubuntu and can be installed with a simple apt-get.&lt;/p&gt;
&lt;p&gt;Once you have determined with 'top' that the system is waiting on IO-access,
it is nice to know which process is responsible for this IO. Therefore, you …&lt;/p&gt;</summary><content type="html">&lt;p&gt;There is a nifty little program called 'iotop'. Iotop is part of Debian or
Ubuntu and can be installed with a simple apt-get.&lt;/p&gt;
&lt;p&gt;Once you have determined with 'top' that the system is waiting on IO-access,
it is nice to know which process is responsible for this IO. Therefore, you
want a list of processes, just like top provides, but instead of CPU or RAM
usage, it shows the IO a process is generating. This is exactly what 'iotop'
provides.&lt;/p&gt;
&lt;p&gt;This is ideal when troubleshooting performance problems caused by heavy IO.&lt;/p&gt;
&lt;p&gt;&lt;a href="http://guichaz.free.fr/iotop/"&gt;http://guichaz.free.fr/iotop/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt="" src="http://guichaz.free.fr/iotop/iotop_small.png" /&gt;&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>Making cowpatty recognize a four-way handshake</title><link href="https://louwrentius.com/making-cowpatty-recognize-a-four-way-handshake.html" rel="alternate"></link><published>2009-11-22T12:52:00+01:00</published><updated>2009-11-22T12:52:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-11-22:/making-cowpatty-recognize-a-four-way-handshake.html</id><summary type="html">&lt;p&gt;I was unable to get cowpatty working with a packet capture that actually&lt;/p&gt;
&lt;p&gt;contains a four-way handshake of a WPA session.&lt;/p&gt;
&lt;p&gt;I got it working like this:&lt;/p&gt;
&lt;p&gt;First, download cowpatty &lt;a href="http://www.willhackforsushi.com/?page_id=50"&gt;4.6 right here&lt;/a&gt;, within the source directory of
cowpatty.&lt;/p&gt;
&lt;p&gt;Extract cowpatty and &lt;a href="http://proton.cygnusx-1.org/~edgan/cowpatty/cowpatty-4.6-fixup14.patch"&gt;apply this patch&lt;/a&gt; using &lt;a href="http://forum.aircrack-ng.org/index.php?topic=5867.0"&gt;these instructions …&lt;/a&gt;&lt;/p&gt;</summary><content type="html">&lt;p&gt;I was unable to get cowpatty working with a packet capture that actually contains a four-way handshake of a WPA session.&lt;/p&gt;
&lt;p&gt;I got it working like this:&lt;/p&gt;
&lt;p&gt;First, download cowpatty &lt;a href="http://www.willhackforsushi.com/?page_id=50"&gt;4.6 right here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Extract cowpatty and, from within its source directory, &lt;a href="http://proton.cygnusx-1.org/~edgan/cowpatty/cowpatty-4.6-fixup14.patch"&gt;apply this patch&lt;/a&gt; using &lt;a href="http://forum.aircrack-ng.org/index.php?topic=5867.0"&gt;these instructions&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Then build cowpatty just with 'make' and 'make install'.&lt;/p&gt;
&lt;p&gt;I created a test setup with a known password. However, this patched version did not find the passphrase using a dictionary file.&lt;/p&gt;
&lt;p&gt;I then used genpmk to create a precomputed hash database like this:&lt;/p&gt;
&lt;p&gt;genpmk -f ./aircrack-ng-1.0/test/password.lst -d hashes -s default&lt;/p&gt;
&lt;p&gt;Please note that I added the correct passphrase to this password list to make sure that cowpatty works.&lt;/p&gt;
&lt;p&gt;Finally, using the -d option on the hash file, cowpatty managed to crack the PSK.&lt;/p&gt;
</content><category term="Uncategorized"></category><category term="wpa"></category><category term="cowpatty"></category><category term="four-way"></category><category term="handshake"></category><category term="crack"></category><category term="aircrack"></category></entry><entry><title>Monitor power usage with your UPS</title><link href="https://louwrentius.com/monitor-power-usage-with-your-ups.html" rel="alternate"></link><published>2009-11-18T21:09:00+01:00</published><updated>2009-11-18T21:09:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-11-18:/monitor-power-usage-with-your-ups.html</id><summary type="html">&lt;p&gt;If a system is connected to a UPS (Uninterruptible Power Supply), it is
possible to determine how much power it consumes. For this purpose, I wrote a
small script:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt; Host:~# ./ups.sh

 UPS model: Back-UPS RS 1200 LCD
 APC model: Back-UPS RS 1200 LC
 Capacity: 720 Watt
 Load: 18 Percent …&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</summary><content type="html">&lt;p&gt;If a system is connected to a UPS (Uninterruptible Power Supply), it is
possible to determine how much power it consumes. For this purpose, I wrote a
small script:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt; Host:~# ./ups.sh

 UPS model: Back-UPS RS 1200 LCD
 APC model: Back-UPS RS 1200 LC
 Capacity: 720 Watt
 Load: 18 Percent
 Usage: 129 Watt
 Time left: 33 Minutes
 Status: ONLINE

 Host:~# ./ups.sh

 UPS model: Back-UPS RS 1200 LCD
 APC model: Back-UPS RS 1200 LC
 Capacity: 720 Watt
 Load: 19 Percent
 Usage: 136 Watt
 Time left: 22 Minutes
 Status: ONBATT
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This script assumes that:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;You are running Unix&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You run apcupsd&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
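&lt;p&gt;The 'Usage' value in the output above is simply the rated capacity multiplied by the load percentage that apcupsd reports. A minimal sketch of that arithmetic (the 'watts' helper is my own name, not taken from ups.sh):&lt;/p&gt;

```shell
# watts CAPACITY_WATTS LOAD_PERCENT: integer watts drawn.
# Hypothetical helper; the real ups.sh from this post is not shown here.
watts() {
    echo $(( $1 * $2 / 100 ))
}

watts 720 18   # 18% load on a 720 W UPS -> 129, matching "Usage: 129 Watt"
watts 720 19   # 19% load -> 136, matching "Usage: 136 Watt"
```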
&lt;p&gt;The script can be downloaded &lt;a href="/files/ups.sh"&gt;here&lt;/a&gt;:&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Linux"></category><category term="apcupsd"></category><category term="ups"></category><category term="script"></category><category term="power"></category><category term="usage"></category></entry><entry><title>24 TB based on Norco RPC-4020 and Linux Software RAID</title><link href="https://louwrentius.com/24-tb-based-on-norco-rpc-4020-and-linux-software-raid.html" rel="alternate"></link><published>2009-11-14T13:07:00+01:00</published><updated>2009-11-14T13:07:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-11-14:/24-tb-based-on-norco-rpc-4020-and-linux-software-raid.html</id><content type="html">&lt;p&gt;Just a quick link:&lt;/p&gt;
&lt;p&gt;Some person built basically the same setup, including an identical controller,
providing 28 TB of storage: &lt;a href="http://www.hardforum.com/showthread.php?t=1393939&amp;amp;page=25"&gt;Take a look here&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The main difference is that this person uses 1.5 TB disks, thus achieving more
storage.&lt;/p&gt;
&lt;p&gt;No hands!&lt;/p&gt;
&lt;p&gt;Just watching a system PXE-boot and install itself into a fully operational
system is fun. However, PXE boot and preseeding alone are not enough to
accomplish this.&lt;/p&gt;
&lt;p&gt;By itself, preseeding only installs a basic operating system. Next, you need
to configure all sorts of things and services …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Look Ma!&lt;/p&gt;
&lt;p&gt;No hands!&lt;/p&gt;
&lt;p&gt;Just watching a system PXE-boot and install itself into a fully operational
system is fun. However, PXE boot and preseeding alone are not enough to
accomplish this.&lt;/p&gt;
&lt;p&gt;By itself, preseeding only installs a basic operating system. Next, you need
to configure all sorts of things and services, install additional software,
create accounts, etc.&lt;/p&gt;
&lt;p&gt;So there is a whole post-install process that you also need to automate.
Preseeding by itself, as I understand it, does not seem to be able to do that
for you. But you can execute commands at the end of the installation.&lt;/p&gt;
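&lt;p&gt;That hook is the debian-installer 'late_command'. A sketch of how such a hook could chain into a post-install script (the server URL and script path below are made-up placeholders, not from my setup):&lt;/p&gt;

```
# Preseed fragment: d-i runs this at the very end of the installation.
# The URL and file name are placeholders.
d-i preseed/late_command string \
    in-target wget -O /root/post-install.sh http://example.com/post-install.sh; \
    in-target sh /root/post-install.sh
```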
&lt;p&gt;First, I'd like to point out that the alternative to Debian preseeding is, in
my opinion, &lt;a href="http://www.informatik.uni-koeln.de/fai/"&gt;FAI or Fully Automated Installation&lt;/a&gt;. I want to make clear
that this is a product with a long history that may better suit your needs. I
do not use FAI because I find it way too complicated. It seems very powerful to
me, but I don't understand it.&lt;/p&gt;
&lt;p&gt;I understand preseeding and shell scripts. Commands that I would
otherwise execute manually are now performed by some shell script. And I think
every administrator understands shell scripts and will be able to automate
installation with almost no effort by just executing a bunch of shell scripts
that install various additional components.&lt;/p&gt;
&lt;p&gt;I wrote a small installation 'framework' that can be used in conjunction with
preseeding to truly fully automate your installation, without the requirement
of learning complex stuff like FAI.&lt;/p&gt;
&lt;p&gt;Unfortunately, it is not available yet on-line, but as soon as I find the
time, a new project will be started and this small installation framework will
be available with some example preseed configuration.&lt;/p&gt;
&lt;p&gt;In day-to-day operations I use this script to install servers and laptops.
Once you see it you will like it.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>Linux Mac Mini - temperature monitoring with lm-sensors</title><link href="https://louwrentius.com/linux-mac-mini-temperature-monitoring-with-lm-sensors.html" rel="alternate"></link><published>2009-11-09T01:21:00+01:00</published><updated>2009-11-09T01:21:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-11-09:/linux-mac-mini-temperature-monitoring-with-lm-sensors.html</id><summary type="html">&lt;p&gt;This post is about getting temperature monitoring to work with a Mac Mini
running Linux.&lt;/p&gt;
&lt;p&gt;Using Debian Lenny, out of the box, lm-sensors is not working. No sensors can
be found. This is how temperature monitoring and fan speed monitoring can be
made to work:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;modprobe applesmc&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;If you run …&lt;/p&gt;</summary><content type="html">&lt;p&gt;This post is about getting temperature monitoring to work with a Mac Mini
running Linux.&lt;/p&gt;
&lt;p&gt;Using Debian Lenny, out of the box, lm-sensors is not working. No sensors can
be found. This is how temperature monitoring and fan speed monitoring can be
made to work:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;modprobe applesmc&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;If you run "sensors-detect" after this, and do a:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;modprobe coretemp&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Then "sensors" will give you ouput like this:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Mini:/sys/devices/platform# sensors&lt;/p&gt;
&lt;p&gt;applesmc-isa-0300&lt;/p&gt;
&lt;p&gt;Adapter: ISA adapter&lt;/p&gt;
&lt;p&gt;Master : 2151 RPM (min = 1500 RPM)&lt;/p&gt;
&lt;p&gt;temp1: +83.2°C&lt;/p&gt;
&lt;p&gt;temp2: +68.0°C&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Some software to control the fan speed:&lt;/p&gt;
&lt;p&gt;&lt;a href="http://stargate.solsys.org/mod.php?mod=faq&amp;amp;op=extlist&amp;amp;topicid=27&amp;amp;expan"&gt;http://stargate.solsys.org/mod.php?mod=faq&amp;amp;op=extlist&amp;amp;topicid=27&amp;amp;expand=yes#1
18&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;d=yes#118&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>Linux on Mac Mini - boot after power failure</title><link href="https://louwrentius.com/linux-on-mac-mini-boot-after-power-failure.html" rel="alternate"></link><published>2009-11-08T20:57:00+01:00</published><updated>2009-11-08T20:57:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-11-08:/linux-on-mac-mini-boot-after-power-failure.html</id><summary type="html">&lt;p&gt;When using a Mac Mini as a server or router, it is very nice if the machine
automatically boots if a power failure has occurred.&lt;/p&gt;
&lt;p&gt;User chirhoxi on the ubuntu forum found out how this can be achieved:&lt;/p&gt;
&lt;p&gt;&lt;a href="http://ubuntuforums.org/showthread.php?t=1209576"&gt;http://ubuntuforums.org/showthread.php?t=1209576&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Basically you need one of …&lt;/p&gt;</summary><content type="html">&lt;p&gt;When using a Mac Mini as a server or router, it is very nice if the machine
automatically boots if a power failure has occurred.&lt;/p&gt;
&lt;p&gt;User chirhoxi on the Ubuntu forums found out how this can be achieved:&lt;/p&gt;
&lt;p&gt;&lt;a href="http://ubuntuforums.org/showthread.php?t=1209576"&gt;http://ubuntuforums.org/showthread.php?t=1209576&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Basically you need one of these commands:&lt;/p&gt;
&lt;p&gt;Original Mac Mini:&lt;/p&gt;
&lt;p&gt;setpci -s 00:03.0 0xa4.b=0&lt;/p&gt;
&lt;p&gt;Newer Mac Mini:&lt;/p&gt;
&lt;p&gt;setpci -s 00:03.0 0x7b.b=19&lt;/p&gt;
&lt;p&gt;These commands might not work with the latest Mac Minis but the thread
discusses how to determine for yourself what the appropriate command is.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="mac"></category><category term="mini"></category><category term="boot"></category><category term="after"></category><category term="power"></category><category term="failure"></category><category term="loss"></category><category term="reboot"></category></entry><entry><title>How to run Debian Linux on an Intel based Mac Mini</title><link href="https://louwrentius.com/how-to-run-debian-linux-on-an-intel-based-mac-mini.html" rel="alternate"></link><published>2009-11-01T23:22:00+01:00</published><updated>2009-11-01T23:22:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-11-01:/how-to-run-debian-linux-on-an-intel-based-mac-mini.html</id><summary type="html">&lt;p&gt;The Mac Mini is just a gorgeous device. It is beautiful, small, silent,
powerful yet energy efficient. When idle, it uses around 20 watts. I'm using
one of the first Intel-based Minis with an Intel Core Duo chip, running at 1.6
GHz.&lt;/p&gt;
&lt;p&gt;I want to use this mini as …&lt;/p&gt;</summary><content type="html">&lt;p&gt;The Mac Mini is just a gorgeous device. It is beautiful, small, silent,
powerful yet energy efficient. When idle, it uses around 20 watts. I'm using
one of the first Intel-based Minis with an Intel Core Duo chip, running at 1.6
GHz.&lt;/p&gt;
&lt;p&gt;I want to use this mini as an expensive router and download host. I could have
used something embedded, such as one of those router boxes that costs about 70
euros, but no, I want to do some more with my router, such as downloading,
etc. It is the only device in my house that is allowed to run 24/7 so it has
to be a bit more powerful if I want more than just routing. I know that this
mini was like 600 euros or something back in the days, and that is quite some
money to spend on something that is now only a router. However, when I was
still running Mac OS X on it, I didn't do much more with it than I will now;
if anything, it will actually do more.&lt;/p&gt;
&lt;p&gt;I am assuming that you want to run Linux exclusively on the Mac and that Mac
OS X will be wiped off.&lt;/p&gt;
&lt;p&gt;To get this puppy running Debian Linux (Lenny), you need to first boot the Mac
with the (Snow) Leopard OS X boot CD and start up Disk Utility.&lt;/p&gt;
&lt;p&gt;You need to create at least two partitions: one for the root file system and
one for swap. The most important step is to select 'options' under the
partition layout screen, and select Master Boot Record partitioning instead of
the other 2 options. Do NOT use GUID or Apple Partition Map.&lt;/p&gt;
&lt;p&gt;&lt;img alt="" src="https://louwrentius.files.com/2009/11/mbr.png?w=300" /&gt;&lt;/p&gt;
&lt;p&gt;Now, boot your regular Debian Linux boot CD, I use the regular network
installation CD. When you get to the partitioning screen, do NOT auto-
partition the hard disk. Just reconfigure the existing partitions you just
made using Disk Utility. So the large partition will be configured as "/" and
made bootable. The small partition must be configured as swap.&lt;/p&gt;
&lt;p&gt;After the installation finishes, just install GRUB in the MBR and reboot. If
all went alright, you will see a non-blinking folder on a gray background for
a couple of seconds, after which Linux will boot. If you get a blinking gray
folder with a question mark, something went wrong.&lt;/p&gt;
&lt;p&gt;It seems that if configured properly, after the EFI boot mechanism fails to
find a system folder on some Mac partition, the legacy BIOS emulation seems to
kick in, and start to search for something to boot.&lt;/p&gt;
&lt;p&gt;The Mini has only one network card, so another one is necessary to run it as a
router. I bought some no-brand USB 2.0 to 100 Mbit NIC (Bus 005 Device 003: ID
9710:7830 MosChip Semiconductor MCS7830 Ethernet) which seems to run smoothly.&lt;/p&gt;
&lt;p&gt;Another option is to replace an internal card with a dual &lt;a href="http://www.globalamericaninc.com/p1507790/1507790_-__Mini-PCI_Express_Dual_Gigabit_LAN_Module/product_info.html"&gt;gigabit card&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I guess you will need to mod the Mini but it will allow true gigabit speeds on
all interfaces.&lt;/p&gt;
</content><category term="Uncategorized"></category><category term="Debian"></category><category term="Linux"></category><category term="Lenny"></category><category term="Mac"></category><category term="Mini"></category><category term="EFI"></category><category term="Disk"></category><category term="utility"></category><category term="Master"></category><category term="Boot"></category><category term="Record"></category></entry><entry><title>The security risk of vendor-supplied default SSL certificates</title><link href="https://louwrentius.com/the-security-risk-of-vendor-supplied-default-ssl-certificates.html" rel="alternate"></link><published>2009-10-30T20:09:00+01:00</published><updated>2009-10-30T20:09:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-10-30:/the-security-risk-of-vendor-supplied-default-ssl-certificates.html</id><summary type="html">&lt;p&gt;Often, software comes supplied with some default SSL certificate, for testing
purposes, such as those 'snake oil' certificates (they are called snake oil
certificates for a reason). In practice, I often encounter usage of such
certificates. People seem to think that as long as SSL is used,
authentication and thus …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Often, software comes supplied with some default SSL certificate, for testing
purposes, such as those 'snake oil' certificates (they are called snake oil
certificates for a reason). In practice, I often encounter usage of such
certificates. People seem to think that as long as SSL is used,
authentication and thus credentials are safe, but nothing could be further
from the truth.&lt;/p&gt;
&lt;p&gt;If you encounter a service that uses a default vendor-supplied SSL
certificate, decryption of communication is trivial. Just obtain a copy of
this vendor software and grab the private key. This private key can be loaded
into Wireshark to decrypt any captured SSL traffic that has been encrypted
with this certificate. Please read &lt;a href="http://wiki.wireshark.org/SSL"&gt;this link&lt;/a&gt; about decrypting SSL with
Wireshark.&lt;/p&gt;
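&lt;p&gt;For reference, the linked wiki page registers such a key under Wireshark's SSL protocol preferences as an 'RSA keys list' entry. A hypothetical example (the host, port and key path are made up):&lt;/p&gt;

```
# Format: ip,port,protocol,path-to-the-vendor-private-key
192.168.1.10,443,http,/home/user/vendor-default-key.pem
```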
&lt;p&gt;So it is important to always replace default SSL certificates with a freshly
generated one, no matter whether it is self-signed or not.&lt;/p&gt;</content><category term="Security"></category><category term="default"></category><category term="ssl"></category><category term="key"></category><category term="vendor-supplied"></category><category term="decrypt"></category><category term="wireshark"></category></entry><entry><title>Blu Ray is dead</title><link href="https://louwrentius.com/blu-ray-is-dead.html" rel="alternate"></link><published>2009-10-05T19:12:00+02:00</published><updated>2009-10-05T19:12:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-10-05:/blu-ray-is-dead.html</id><summary type="html">&lt;p&gt;HD-DVD may be dead, but Blu Ray is just as dead. The whole concept of optical
media is dead. Honestly, who is still burning CD's or DVD's nowadays? (If you
are, why for Christ sake? I can't think of a single good reason) And at 10
euros ($100) for a …&lt;/p&gt;</summary><content type="html">&lt;p&gt;HD-DVD may be dead, but Blu Ray is just as dead. The whole concept of optical
media is dead. Honestly, who is still burning CD's or DVD's nowadays? (If you
are, why for Christ sake? I can't think of a single good reason) And at 10
euros ($100) for a single Blu Ray disk, you must be totally bonkers to buy one
of them recorders.&lt;/p&gt;
&lt;p&gt;I mean, let's face it, CD's were really cool in an age where 650 MB was way
more than the 40 or 80 MB hard drive in your computer. It made a difference.
That was already less so with a DVD, with a capacity of 'only' 4 GB. However,
in the early years of the DVD, you could backup your entire hard drive on two,
maybe three disks, since a hard drive averaged around 4 to 10 GB at that time.&lt;/p&gt;
&lt;p&gt;Then finally came Blu Ray and HD-DVD. A whopping 25 GB on a disk. No shit!
You mean, like I need no less than 40 Blu Ray disks (400 euros) and an eon of
burning disks to backup one of my 1 TB hard drives that cost me like 70 euros?&lt;/p&gt;
&lt;p&gt;What must they have been thinking when they developed Blu Ray? As a backup
medium it is useless, but it was of course intended as a carrier for movies, I
know. But why should I go outside, through the cold and the rain to go to some
shitty video store that only has last year's blockbusters? Why bother with 40
Mbit downstream and the Internet at your disposal? Downloading a movie will
take me as much time as going to the video store and selecting something so
dreadful even the DVD player will refuse to play it. What else is there to
choose from? The Internet provides us with the most rare and obscure but most
beautiful movies you ever saw. And the more mainstream movies can be
obtained in full HD 1080p.&lt;/p&gt;
&lt;p&gt;Current Internet connections are of such quality, that physical media such as
Blu Ray disks are becoming irrelevant. A normal DVD is downloaded within 20
minutes at 4 MB/s. And when Internet connections will reach 100 Mbit or 12
MB/s, even a HD 1080p movie will be downloaded within the hour.&lt;/p&gt;
&lt;p&gt;Storage is not a problem. If you can store 40 HD movies on a single 1 TB disk,
then you will pay 1.75 euros for each movie. Beats any Blu Ray disk in price
and time. If you even care about them that much, buy or build a NAS and store
them on some redundant storage.&lt;/p&gt;
&lt;p&gt;In time people will do with DVDs what most people already do with CDs:
rip them to some format your computer understands (to transfer it to your MP3
player) and get rid of that CD that will become scratched and useless even if
you don't touch it. No, you don't want to transcode your HD movies to some
iPod or something, but you may want to stream them to your media player in the
living room. A jukebox full of films, just as you have a jukebox full of
music.&lt;/p&gt;
&lt;p&gt;Give the CD, DVD and Blu Ray a little push, and let them fall into the grave.
It is an outdated technology for a problem that existed 10 years ago. The
Internet has made it irrelevant. Be done with it.&lt;/p&gt;</content><category term="Hardware"></category><category term="blu"></category><category term="ray"></category><category term="dead"></category></entry><entry><title>PPSS version 2.30 now operates asynchronous</title><link href="https://louwrentius.com/ppss-version-230-now-operates-asynchronous.html" rel="alternate"></link><published>2009-09-26T19:32:00+02:00</published><updated>2009-09-26T19:32:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-09-26:/ppss-version-230-now-operates-asynchronous.html</id><summary type="html">&lt;p&gt;If you background a bash or shell process, how do you determine if it has
finished? Since inter-process communication is not possible using shell
scripts, people often refer to while loops or other polling mechanisms to
determine if some process has stopped.&lt;/p&gt;
&lt;p&gt;However, the one player that knows best …&lt;/p&gt;</summary><content type="html">&lt;p&gt;If you background a bash or shell process, how do you determine if it has
finished? Since inter-process communication is not possible using shell
scripts, people often refer to while loops or other polling mechanisms to
determine if some process has stopped.&lt;/p&gt;
&lt;p&gt;However, the one player that knows best if a process has finished is the
process itself. So if only this process could tell the parent or other process
about this...&lt;/p&gt;
&lt;p&gt;The solution is using a FIFO or 'pipe'. A listener process reads the pipe and
executes a command for every message received through this pipe. This was
already built into PPSS. However, PPSS had this dirty while loop that polls
every x seconds to determine if there are still running workers. If not, PPSS
finishes itself.&lt;/p&gt;
&lt;p&gt;However, while loops and polling mechanisms are evil, dirty and bad. The
nicest solution is to make PPSS fully asynchronous. To achieve this, every job
must tell PPSS that it has finished. PPSS already has this listening process
that listens to the pipe for commands. If this channel is used by workers to
communicate that a worker is finished, PPSS will know when all workers are
finished.&lt;/p&gt;
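&lt;p&gt;A minimal, self-contained sketch of this pattern (illustrative only, not actual PPSS code; all names are my own): each background worker reports on a named pipe, and the listener blocks on reads instead of polling:&lt;/p&gt;

```shell
#!/bin/bash
# Workers signal completion through a FIFO; the listener never polls.
WORKERS=3
FIFO=$(mktemp -u)
mkfifo "$FIFO"
exec 3<> "$FIFO"       # read-write: reads block instead of hitting EOF

worker() {
    sleep 0.1          # stand-in for a real job
    echo "done $1" >&3 # the worker itself reports that it is finished
}

for i in $(seq 1 "$WORKERS"); do
    worker "$i" &
done

finished=0
while [ "$finished" -lt "$WORKERS" ]; do
    read -r line <&3   # blocks until some worker reports in
    finished=$((finished + 1))
done
echo "all $WORKERS workers finished"

exec 3<&-
rm -f "$FIFO"
```

Compared with a sleep-and-check loop, the listener here wakes up exactly when a worker finishes, which is what makes the asynchronous approach more responsive.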
&lt;p&gt;This makes PPSS a lot faster and responsive.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="bash"></category><category term="inter"></category><category term="process"></category><category term="communication"></category><category term="asynchronous"></category><category term="FIFO"></category></entry><entry><title>Laptop or netbook as router?</title><link href="https://louwrentius.com/laptop-or-netbook-as-router.html" rel="alternate"></link><published>2009-09-20T11:19:00+02:00</published><updated>2009-09-20T11:19:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-09-20:/laptop-or-netbook-as-router.html</id><summary type="html">&lt;p&gt;If you want a router for distribution of internet to your computers at home,
there are several options.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;buy some embedded device from Linksys, DrayTek, Asus, 3Com, ZyXEL or
Netgear&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This type of hardware is cheap, economical, and gets you up and running in a
few minutes. The downside is …&lt;/p&gt;</summary><content type="html">&lt;p&gt;If you want a router for distribution of internet to your computers at home,
there are several options.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;buy some embedded device from Linksys, DrayTek, Asus, 3Com, ZyXEL or
Netgear&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This type of hardware is cheap, economical, and gets you up and running in a
few minutes. The downside is that you can't do much else with these things.
Yes, there are many custom firmwares, which allow you more freedom, but the
hardware is often the limiting factor.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;convert a regular PC into a router&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you want more than just routing, building your own router using a(n) (old)
PC is the preferred course of action. The downside is that a PC often uses
more 'juice' than an embedded device.&lt;/p&gt;
&lt;p&gt;However, those new Atom-based PCs may be a very nice option. Just add a
second network card, through a USB port or a low-profile PCI card,
and you have something far more flexible than an embedded router.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;convert a laptop into a router&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It sounds a bit strange and silly at first to use a laptop as a router, but it
makes sense when you think of it.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;It is economical in terms of power usage&lt;/li&gt;
&lt;li&gt;It has a built-in UPS called a 'battery'&lt;/li&gt;
&lt;li&gt;It has a built-in screen and keyboard&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;All these things are an advantage compared to option 2.&lt;/p&gt;
&lt;p&gt;Nowadays you can have a netbook for only 300 euros. It is more expensive than
an embedded device, but almost as economical, and it provides much more
performance and flexibility.&lt;/p&gt;
&lt;p&gt;I've been running an old laptop as a router for 6 months without problems.
Unfortunately, the disk died of old age, but that can happen to any
computer. I'm now running an old Intel Mac mini as a router.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Buy whatever you fucking want.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Turn the fucking thing off when you're not using it.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Long:&lt;/p&gt;
&lt;p&gt;People are spending a lot of time building an energy efficient home computer,
that can act as an HTPC, NAS, or whatever. It must consume as little power as
possible, because it …&lt;/p&gt;</summary><content type="html">&lt;p&gt;In short:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Buy whatever you fucking want.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Turn the fucking thing off when you're not using it.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Long:&lt;/p&gt;
&lt;p&gt;People are spending a lot of time building an energy efficient home computer
that can act as an HTPC, NAS, or whatever. It must consume as little power as
possible, because it will be on 24/7.&lt;/p&gt;
&lt;p&gt;Why the fuck do you want to leave a system on 24/7 at home?&lt;/p&gt;
&lt;p&gt;Unless you're unemployed you will be at work most of the time. And during that
time, this machine is doing absolutely nothing.&lt;/p&gt;
&lt;p&gt;There may be only one system that stays on 24/7 and that is your router,
either some embedded router thingy or something based on a low-power PC, but
that's about it. Turn everything else off.&lt;/p&gt;
&lt;p&gt;A machine that idles at 200 Watt but is only turned on when necessary will
be more energy efficient than your specially built 40 Watt NAS or whatever it
is that runs 24/7. Nothing can beat a system that is turned off.&lt;/p&gt;
&lt;p&gt;If you need a system, just use wake-on-LAN (WOL) to turn the damn thing on.
It will be ready in about 2 or 3 minutes and then you can do whatever you
want.&lt;/p&gt;
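&lt;p&gt;For the curious: a WOL "magic packet" is nothing more than 6 bytes of 0xFF followed by the target's MAC address repeated 16 times. A minimal sketch in shell (the MAC address below is just a placeholder):&lt;/p&gt;

```shell
# Build the hex payload of a Wake-on-LAN magic packet:
# 6 bytes of 0xff followed by the MAC address repeated 16 times.
mac="00:11:22:33:44:55"               # placeholder MAC address
hex=$(printf '%s' "$mac" | tr -d ':')
packet="ffffffffffff"
for i in $(seq 1 16); do
    packet="$packet$hex"
done
# 102 bytes of payload = 204 hex characters
printf '%s' "$packet" | wc -c
```

&lt;p&gt;In practice you would simply run a tool like &lt;code&gt;wakeonlan&lt;/code&gt; or &lt;code&gt;etherwake&lt;/code&gt; from another machine on the LAN, which builds and broadcasts this packet for you.&lt;/p&gt;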
&lt;p&gt;Fine if you leave a system on that is downloading some stuff during the night
or day, but after the download has finished, turn the damn thing off.&lt;/p&gt;
&lt;p&gt;If you're honest with yourself and really think about it, there is no need to
keep a system on 24/7. I know you may come up with excuses, but remember
that you can still use your router for that.&lt;/p&gt;
&lt;p&gt;By the way, if you're searching for a really energy efficient computer, buy a
Mac Mini. Although quite expensive if you ask me, they're doing 30 Watt in idle
and almost nothing when sleeping. And if a Mac is asleep, it can be woken with
WOL and is up in seconds. They make an excellent download server.&lt;/p&gt;
&lt;p&gt;[EDIT]&lt;/p&gt;
&lt;p&gt;There is also the option of S3 or S4 under Linux: suspend to RAM or disk.
However, your mileage may vary. If it works for you, your system will be down
and up in seconds. My experience is that it very much depends on your
hardware whether this will work. My Highpoint cards do not seem to like it: my
array does not wake up and the screen of the system stays blank.&lt;/p&gt;
&lt;p&gt;If it works, it is a faster solution than just turning the system off and on
with WOL. My experience is that it seems not as robust.&lt;/p&gt;
&lt;p&gt;If you build a NAS with many drives, it may be of interest to you which
drives get hot and where they are located in the chassis. My Norco 4020 case
has twenty drives in RAID 6, plus two operating system drives in RAID 1. I
wrote a script that …&lt;/p&gt;&lt;/blockquote&gt;</summary><content type="html">&lt;blockquote&gt;
&lt;p&gt;If you build a NAS with many drives, it may be of interest to you which
drives get hot and where they are located in the chassis. My Norco 4020 case
has twenty drives in RAID 6, plus two operating system drives in RAID 1. I
wrote a script that shows me the temperature of each drive, positioned in such
a way that it represents the actual physical location of the drive in the
chassis:&lt;/p&gt;
&lt;/blockquote&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt; -- 30 -- | -- 27 --
 | 26 | 27 | 28 | 25 |
 | 30 | 32 | 30 | 29 |
 | 31 | 33 | 33 | 29 |
 | 33 | 32 | 35 | 30 |
 | 32 | 34 | 35 | 31 |
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;In this example, you can determine that the top drives seem to stay the
coolest. The center and lower drives get hotter.&lt;/p&gt;
&lt;p&gt;If you want to use this script, you will have to change it for your own
specific setup. You can get it &lt;a href="/files/show-hdd-temp.tgz"&gt;here&lt;/a&gt;.&lt;/p&gt;
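&lt;p&gt;The grid itself is just formatting. As an illustration (not the actual script), a flat list of temperature readings can be laid out in rows of four with standard tools:&lt;/p&gt;

```shell
# Lay out a flat list of drive temperatures in rows of four,
# mimicking the physical 4-drives-per-row layout of the chassis.
temps="26 27 28 25 30 32 30 29 31 33 33 29 33 32 35 30 32 34 35 31"
printf '%s\n' $temps | paste -d' ' - - - - | sed 's/ / | /g; s/^/| /; s/$/ |/'
```

&lt;p&gt;In the real script the readings would come from smartmontools or hddtemp instead of a fixed list.&lt;/p&gt;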
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;This visual representation would also be nice to identify which drive has
failed and where it is located in the chassis.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/blockquote&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>'Linux: obtain motherboard model / type and vendor'</title><link href="https://louwrentius.com/linux-obtain-motherboard-model-type-and-vendor.html" rel="alternate"></link><published>2009-08-30T21:16:00+02:00</published><updated>2009-08-30T21:16:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-08-30:/linux-obtain-motherboard-model-type-and-vendor.html</id><summary type="html">&lt;p&gt;If you want to know what motherboard is installed in a system, use the tool
dmidecode:&lt;/p&gt;
&lt;p&gt;dmidecode | grep -E "Manufacturer|Product" | head -n 4 | tail -n 2&lt;/p&gt;
&lt;p&gt;The result might be something like:&lt;/p&gt;
&lt;p&gt;Manufacturer: ASUSTeK Computer INC.&lt;/p&gt;
&lt;p&gt;Product Name: P5Q-EM DO&lt;/p&gt;
&lt;p&gt;Manufacturer: ASUSTeK Computer INC.&lt;/p&gt;
&lt;p&gt;Product Name: P6T DELUXE&lt;/p&gt;
&lt;p&gt;Manufacturer …&lt;/p&gt;</summary><content type="html">&lt;p&gt;If you want to know what motherboard is installed in a system, use the tool
dmidecode:&lt;/p&gt;
&lt;p&gt;dmidecode | grep -E "Manufacturer|Product" | head -n 4 | tail -n 2&lt;/p&gt;
&lt;p&gt;The result might be something like:&lt;/p&gt;
&lt;p&gt;Manufacturer: ASUSTeK Computer INC.&lt;/p&gt;
&lt;p&gt;Product Name: P5Q-EM DO&lt;/p&gt;
&lt;p&gt;Manufacturer: ASUSTeK Computer INC.&lt;/p&gt;
&lt;p&gt;Product Name: P6T DELUXE&lt;/p&gt;
&lt;p&gt;Manufacturer: ASUSTeK Computer INC.&lt;/p&gt;
&lt;p&gt;Product Name: M2A-VM&lt;/p&gt;
&lt;p&gt;Manufacturer: Apple Computer, Inc.&lt;/p&gt;
&lt;p&gt;Product Name: Mac-F4208EC8&lt;/p&gt;
&lt;p&gt;Manufacturer: Compaq&lt;/p&gt;
&lt;p&gt;Product Name: 0688h&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Linuxmanufacturermodelmotherboardtypevendor"></category></entry><entry><title>'lm-sensors: hardware monitoring with the w83627ehf module'</title><link href="https://louwrentius.com/lm-sensors-hardware-monitoring-with-the-w83627ehf-module.html" rel="alternate"></link><published>2009-08-30T16:26:00+02:00</published><updated>2009-08-30T16:26:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-08-30:/lm-sensors-hardware-monitoring-with-the-w83627ehf-module.html</id><summary type="html">&lt;p&gt;I have two systems that use the w83627ehf driver for hardware monitoring.
However, if this driver is installed with a regular modprobe like:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;modprobe w83627ehf
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The result will be:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;FATAL&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Error&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;inserting&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;w83627ehf&lt;/span&gt;
&lt;span class="o"&gt;(/&lt;/span&gt;&lt;span class="n"&gt;lib&lt;/span&gt;&lt;span class="sr"&gt;/modules/2.6.28-1-amd64/kernel/drivers/hwmon/&lt;/span&gt;&lt;span class="n"&gt;w83627ehf&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;ko&lt;/span&gt;&lt;span class="o"&gt;):&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;No&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;such&lt;/span&gt;
&lt;span class="n"&gt;device&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I had this issue …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I have two systems that use the w83627ehf driver for hardware monitoring.
However, if this driver is installed with a regular modprobe like:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;modprobe w83627ehf
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The result will be:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;FATAL&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Error&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;inserting&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;w83627ehf&lt;/span&gt;
&lt;span class="o"&gt;(/&lt;/span&gt;&lt;span class="n"&gt;lib&lt;/span&gt;&lt;span class="sr"&gt;/modules/2.6.28-1-amd64/kernel/drivers/hwmon/&lt;/span&gt;&lt;span class="n"&gt;w83627ehf&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;ko&lt;/span&gt;&lt;span class="o"&gt;):&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;No&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;such&lt;/span&gt;
&lt;span class="n"&gt;device&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I had this issue with the following mainboards:&lt;/p&gt;
&lt;p&gt;Asus P5Q-EM DO (Core 2 Duo)
Asus P6T Deluxe (Core i7)&lt;/p&gt;
&lt;p&gt;The solution is simple, as I found after googling for some time:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;modprobe w83627ehf force_id=0x8860
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
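&lt;p&gt;To make the forced ID stick across reboots, the option can be put in a modprobe configuration file, for example (path and filename follow the usual modprobe.d convention):&lt;/p&gt;

```
# /etc/modprobe.d/w83627ehf.conf
options w83627ehf force_id=0x8860
```

&lt;p&gt;With the module itself listed in /etc/modules, it will then load with the right ID at boot.&lt;/p&gt;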

&lt;p&gt;The result, as reported by the &lt;code&gt;sensors&lt;/code&gt; command:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;w83627ehf-isa-0290
Adapter: ISA adapter
VCore: +0.88 V (min = +0.00 V, max = +1.74 V)
in1: +11.67 V (min = +0.00 V, max = +5.49 V) ALARM
AVCC: +3.34 V (min = +0.40 V, max = +0.03 V) ALARM
3VCC: +3.31 V (min = +1.15 V, max = +0.58 V) ALARM
in4: +1.65 V (min = +0.54 V, max = +0.22 V) ALARM
in5: +2.04 V (min = +0.71 V, max = +0.11 V) ALARM
in6: +3.79 V (min = +0.10 V, max = +2.30 V) ALARM
VSB: +3.38 V (min = +0.45 V, max = +2.43 V) ALARM
VBAT: +3.30 V (min = +0.67 V, max = +0.77 V) ALARM
in9: +0.00 V (min = +1.10 V, max = +0.57 V) ALARM
Case Fan: 0 RPM (min = 104 RPM, div = 128) ALARM
CPU Fan: 0 RPM (min = 42187 RPM, div = 32) ALARM
Aux Fan: 0 RPM (min = 162 RPM, div = 128) ALARM
fan4: 0 RPM (min = 42187 RPM, div = 32) ALARM
fan5: 0 RPM (min = 42187 RPM, div = 32) ALARM
Sys Temp: +40.0°C (high = +4.0°C, hyst = +0.0°C) ALARM sensor = thermistor
CPU Temp: +42.0°C (high = +80.0°C, hyst = +75.0°C) sensor = diode
AUX Temp: +30.5°C (high = +80.0°C, hyst = +75.0°C) sensor = thermistor
cpu0_vid: +0.000 V
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>Using ncat to provide SSL-support to non-ssl capable software</title><link href="https://louwrentius.com/using-ncat-to-provide-ssl-support-to-non-ssl-capable-software.html" rel="alternate"></link><published>2009-08-21T15:49:00+02:00</published><updated>2009-08-21T15:49:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-08-21:/using-ncat-to-provide-ssl-support-to-non-ssl-capable-software.html</id><summary type="html">&lt;p&gt;Sometimes, people are using software that does not support encrypted
connections using SSL. To provide SSL-support to such a client, ncat can be
used. Ncat is part of nmap, the famous port-scanner.&lt;/p&gt;
&lt;p&gt;The main principle is that the non-ssl capable software does not connect to
the SSL-based service, but to …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Sometimes, people are using software that does not support encrypted
connections using SSL. To provide SSL-support to such a client, ncat can be
used. Ncat is part of nmap, the famous port-scanner.&lt;/p&gt;
&lt;p&gt;The main principle is that the non-SSL capable software does not connect to
the SSL-based service directly, but to the local host. Ncat listens on
localhost and sets up an SSL connection with the SSL-based service on
behalf of the non-SSL capable software.&lt;/p&gt;
&lt;p&gt;This simple command allows an application to connect to port 80 and perform
regular HTTP requests, while in fact they are encapsulated within an SSL
connection:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ncat -l 80 -c "ncat (ip-address) (port) --ssl"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The -l option specifies the local port on which the SSL-tunnel will be
listening. The ip-address and port refer to the SSL-based service.&lt;/p&gt;
&lt;p&gt;So if the client connects to 127.0.0.1 on port 80 it will actually connect
through the SSL-tunnel to the external service.&lt;/p&gt;
&lt;p&gt;Often stunnel is used for this job, but on Debian Etch this software craps out
with an error like:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;SSL routines:SSL3_GET_RECORD:bad decompression
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;But ncat is an excellent alternative.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>Howto get the hard disk size under Linux?</title><link href="https://louwrentius.com/howto-get-the-hard-disk-size-under-linux.html" rel="alternate"></link><published>2009-08-18T21:19:00+02:00</published><updated>2009-08-18T21:19:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-08-18:/howto-get-the-hard-disk-size-under-linux.html</id><summary type="html">&lt;p&gt;A: There is no single tool for this job, but it seems that Fdisk is just fine:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;server:~# fdisk -l 2&amp;gt; /dev/null | grep Disk | grep -v identifier&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Disk /dev/sda: 500.0 GB, 500028145664 bytes&lt;/p&gt;
&lt;p&gt;Disk /dev/sdb: 500.0 GB, 500028145664 bytes&lt;/p&gt;
&lt;p&gt;Disk /dev/sdc: 1000.1 GB …&lt;/p&gt;</summary><content type="html">&lt;p&gt;A: There is no single tool for this job, but it seems that Fdisk is just fine:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;server:~# fdisk -l 2&amp;gt; /dev/null | grep Disk | grep -v identifier
Disk /dev/sda: 500.0 GB, 500028145664 bytes
Disk /dev/sdb: 500.0 GB, 500028145664 bytes
Disk /dev/sdc: 1000.1 GB, 1000123400192 bytes
Disk /dev/sdd: 500.0 GB, 500028145664 bytes
Disk /dev/sde: 500.0 GB, 500028145664 bytes
Disk /dev/hda: 80.0 GB, 80026361856 bytes
Disk /dev/md5: 1500.0 GB, 1500084240384 bytes
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The nice thing about using fdisk is that it automatically lists the size of
all block devices.&lt;/p&gt;
&lt;p&gt;Here is a list of my NAS, mentioned earlier.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Beast:~# fdisk -l 2&amp;gt; /dev/null | grep Disk | grep -v identifier
Disk /dev/sda: 60.0 GB, 60011642880 bytes
Disk /dev/sdb: 60.0 GB, 60011642880 bytes
Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
Disk /dev/md0: 57.9 GB, 57996345344 bytes
Disk /dev/md1: 2015 MB, 2015100928 bytes
Disk /dev/sdg: 1000.1 GB, 1000123400192 bytes
Disk /dev/sdh: 1000.1 GB, 1000123400192 bytes
Disk /dev/sdi: 1000.1 GB, 1000123400192 bytes
Disk /dev/sdj: 1000.1 GB, 1000123400192 bytes
Disk /dev/sdk: 1000.1 GB, 1000123400192 bytes
Disk /dev/sdl: 1000.1 GB, 1000123400192 bytes
Disk /dev/sdm: 1000.1 GB, 1000123400192 bytes
Disk /dev/sdn: 1000.1 GB, 1000123400192 bytes
Disk /dev/sdo: 1000.1 GB, 1000123400192 bytes
Disk /dev/sdp: 1000.1 GB, 1000123400192 bytes
Disk /dev/sdq: 1000.1 GB, 1000123400192 bytes
Disk /dev/sdr: 1000.1 GB, 1000123400192 bytes
Disk /dev/sds: 1000.1 GB, 1000123400192 bytes
Disk /dev/sdt: 1000.1 GB, 1000123400192 bytes
Disk /dev/sdu: 1000.1 GB, 1000123400192 bytes
Disk /dev/sdv: 1000.1 GB, 1000123400192 bytes
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Disk /dev/md5: 18002.2 GB, 18002220023808 bytes&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>20 disk 18 TB RAID 6 storage based on Debian Linux</title><link href="https://louwrentius.com/20-disk-18-tb-raid-6-storage-based-on-debian-linux.html" rel="alternate"></link><published>2009-07-21T21:08:00+02:00</published><updated>2009-07-21T21:08:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-07-21:/20-disk-18-tb-raid-6-storage-based-on-debian-linux.html</id><summary type="html">&lt;h3&gt;This system is no longer operational and has been decomissioned (2017)&lt;/h3&gt;
&lt;p&gt;This is my NAS storage server based on Debian Linux. It uses software RAID and 20 one
terabyte hard drives. It provides a total usable storage capacity of 18 terabytes in a single RAID 6 array.&lt;/p&gt;
&lt;p&gt;One of the …&lt;/p&gt;</summary><content type="html">&lt;h3&gt;This system is no longer operational and has been decommissioned (2017)&lt;/h3&gt;
&lt;p&gt;This is my NAS storage server based on Debian Linux. It uses software RAID and 20 one
terabyte hard drives. It provides a total usable storage capacity of 18 terabytes in a single RAID 6 array.&lt;/p&gt;
&lt;p&gt;One of the remarkable side effects of using 20 drives within a single array is the read performance of over one gigabyte per second. &lt;/p&gt;
&lt;p&gt;&lt;a href="/static/images/norco05.jpg"&gt;&lt;img alt="norco nas" src="/static/images/norco05.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;table border="0" cellpadding="0" cellspacing="1" &gt;
&lt;tr&gt;&lt;td&gt;Case:&lt;/td&gt;&lt;td &gt;Norco RPC-4020&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Processor:&lt;/td&gt;&lt;td &gt;Core 2 duo E7400 @ 2.8GHz&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;RAM:&lt;/td&gt;&lt;td &gt;4 GB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Motherboard:&lt;/td&gt;&lt;td &gt; Asus P5Q-EM DO&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;LAN:&lt;/td&gt;&lt;td &gt;Intel Gigabit&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;PSU:&lt;/td&gt;&lt;td &gt;&lt;s&gt;Coolermaster 600 Watt&lt;/s&gt; Corsair CMPSU-750HX 750 Watt (Coolermaster died)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Controller: &lt;/td&gt;&lt;td &gt;HighPoint RocketRAID 2340 (16) and on-board controller (6).&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Disks:&lt;/td&gt;&lt;td &gt;20 x Samsung Spinpoint F1 (1 TB) and 2 x FUJITSU MHY2060BH (60 GB)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Arrays:&lt;/td&gt;&lt;td &gt;Boot: 2x 60 GB RAID 1 and storage: 20 x 1 TB RAID 6&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;RAID setup:&lt;/td&gt;&lt;td &gt;Linux software RAID using MDADM.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Read performance:&lt;/td&gt;&lt;td &gt;1.1 GB/s (yes this is correct, not a typo)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Write performance:&lt;/td&gt;&lt;td &gt;&lt;s&gt;350&lt;/s&gt; 450 MB/s. (suddenly faster after Debian update)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;OS:&lt;/td&gt;&lt;td &gt;Linux Debian Squeeze 64-bit&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Filesystem:&lt;/td&gt;&lt;td &gt;XFS (can handle &amp;gt; 16 TB partitions.)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Rebuild time:&lt;/td&gt;&lt;td &gt;about 5 hours.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;UPS:&lt;/td&gt;&lt;td &gt;Back-UPS RS 1200 LCD using Apcupsd&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Idle power usage:&lt;/td&gt;&lt;td &gt;about &amp;nbsp;140 Watt&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;

&lt;p&gt;&lt;a href="/static/images/norco04.jpg"&gt;&lt;img alt="norco nas" src="/static/images/norco04.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="/static/images/lactower.jpg"&gt;&lt;img alt="setup" src="/static/images/lactower.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;object height="340" width="560"&gt;&lt;param name="movie" value="http://www.youtube.com/v/EGLaFiXHegw&amp;amp;hl=nl_NL&amp;amp;fs=1&amp;amp;"&gt;
&lt;param name="allowFullScreen" value="true"&gt;
&lt;param name="allowscriptaccess" value="always"&gt;
&lt;embed src="http://www.youtube.com/v/EGLaFiXHegw&amp;amp;hl=nl_NL&amp;amp;fs=1&amp;amp;" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="560" height="340"&gt;&lt;/embed&gt;&lt;/object&gt;</content><category term="Storage"></category><category term="norco"></category><category term="RPC-4020"></category><category term="Linux"></category><category term="RAID"></category><category term="6"></category><category term="MDADM"></category><category term="20"></category><category term="disk"></category><category term="18"></category><category term="TB"></category></entry><entry><title>HighPoint RocketRAID and staggered spinup with Samsung F1</title><link href="https://louwrentius.com/highpoint-rocketraid-and-staggered-spinup-with-samsung-f1.html" rel="alternate"></link><published>2009-07-21T20:13:00+02:00</published><updated>2009-07-21T20:13:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-07-21:/highpoint-rocketraid-and-staggered-spinup-with-samsung-f1.html</id><summary type="html">&lt;p&gt;I have experienced problems using a HighPoint RocketRaid 2320 and 2340 when
they are using 'staggered spinup' in combination with Samsung Spinpoint F1, 1
(one) terrabyte disks.&lt;/p&gt;
&lt;p&gt;The problem is that the F1 disks spinup very slowly and often seem to 'hang'
while making ticking noises that will scare any …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I have experienced problems using a HighPoint RocketRaid 2320 and 2340 when
they are using 'staggered spinup' in combination with Samsung Spinpoint F1, 1
(one) terabyte disks.&lt;/p&gt;
&lt;p&gt;The problem is that the F1 disks spin up very slowly and often seem to 'hang'
while making ticking noises that will scare any computer user to death. When,
after ages, you have the luck that no disk keeps hanging during startup, the
first thing to do is to disable staggered spinup.&lt;/p&gt;
&lt;p&gt;However, if staggered spinup is not used and all disks spin up together,
please note that you will need a strong PSU to handle the short peak load. For
example, starting up 20 (twenty) disks briefly generates a load of 550
Watt on my APC UPS, according to its display.&lt;/p&gt;
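&lt;p&gt;That peak translates into a rough per-disk figure. Assuming the rest of the system accounts for something like 150 Watt of that load (my own guess, not a measurement):&lt;/p&gt;

```shell
# 550 W peak on the UPS, minus an assumed ~150 W system baseline,
# spread over 20 disks spinning up at once
echo $(( (550 - 150) / 20 ))   # watts per disk during spinup
```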
&lt;p&gt;I estimate that during startup, each disk roughly consumes 20 Watt. Take
all of this into account when deciding which type of disks and controllers you
will be using.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>8 TB RAID 6 Linux software RAID using EXT4</title><link href="https://louwrentius.com/8-tb-raid-6-linux-software-raid-using-ext4.html" rel="alternate"></link><published>2009-07-05T22:45:00+02:00</published><updated>2009-07-05T22:45:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-07-05:/8-tb-raid-6-linux-software-raid-using-ext4.html</id><summary type="html">&lt;p&gt;Mobo: Asus P5Q-EM DO ( 6x sata)&lt;/p&gt;
&lt;p&gt;CPU: Core 2 Duo E7400&lt;/p&gt;
&lt;p&gt;RAM: 4 GB&lt;/p&gt;
&lt;p&gt;Controller: HighPoint RocketRAID 2314 (16 ports)&lt;/p&gt;
&lt;p&gt;PSU: Coolermaster Silent Pro M 600W ATX2 (definitely not overkill)&lt;/p&gt;
&lt;p&gt;HD: Samsung 1 TB SAT2 HD103UJ (10x)&lt;/p&gt;
&lt;p&gt;OS: Debian Lenny with custom kernel (backported)&lt;/p&gt;
&lt;p&gt;FS: EXT4&lt;/p&gt;
&lt;p&gt;RAID config: &lt;strong&gt;10 disk …&lt;/strong&gt;&lt;/p&gt;</summary><content type="html">&lt;p&gt;Mobo: Asus P5Q-EM DO ( 6x sata)&lt;/p&gt;
&lt;p&gt;CPU: Core 2 Duo E7400&lt;/p&gt;
&lt;p&gt;RAM: 4 GB&lt;/p&gt;
&lt;p&gt;Controller: HighPoint RocketRAID 2314 (16 ports)&lt;/p&gt;
&lt;p&gt;PSU: Coolermaster Silent Pro M 600W ATX2 (definitely not overkill)&lt;/p&gt;
&lt;p&gt;HD: Samsung 1 TB SAT2 HD103UJ (10x)&lt;/p&gt;
&lt;p&gt;OS: Debian Lenny with custom kernel (backported)&lt;/p&gt;
&lt;p&gt;FS: EXT4&lt;/p&gt;
&lt;p&gt;RAID config: &lt;strong&gt;10 disk RAID 6&lt;/strong&gt; based on Linux software RAID.&lt;/p&gt;
&lt;p&gt;Performance: &lt;strong&gt;Read: 850 MB/s Write: 300 MB/s&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Capacity: 7.3 TiB, or 8 TB.&lt;/p&gt;
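&lt;p&gt;The usable capacity follows directly from the RAID 6 layout: two of the ten drives hold parity, so:&lt;/p&gt;

```shell
# RAID 6 keeps two drives' worth of parity: usable = (n - 2) * drive size
n=10; size_tb=1
echo "usable: $(( (n - 2) * size_tb )) TB"
```

&lt;p&gt;The filesystem reports a bit less because 8 TB (decimal) is about 7.3 TiB in binary units.&lt;/p&gt;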
&lt;p&gt;I am very pleased with the result. Points that should be improved are the
temperature and noise. The PSU is major overkill, since this puppy draws less
than 200 Watt.&lt;/p&gt;
&lt;p&gt;&lt;a href="http://members.multiweb.nl/nan1/img/norco03.jpg"&gt;&lt;img alt="" src="http://members.multiweb.nl/nan1/img/norco03.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="http://members.multiweb.nl/nan1/img/norco02.jpg"&gt;&lt;img alt="" src="http://members.multiweb.nl/nan1/img/norco02.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Bunny:~# ./show-hdd-temp.sh
Device  Temperature
/dev/sda  32
/dev/sdb  34
/dev/sdc  33
/dev/sdd  36
/dev/sde  34
/dev/sdf  33
/dev/sdg  38
/dev/sdh  39
/dev/sdi  37
/dev/sdj  36
/dev/sdk  35
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Personalities : [raid6] [raid5] [raid4]&lt;/p&gt;
&lt;p&gt;md0 : active raid6 sdj[10] sdb[0] sdl[9] sdk[8] sdg[7] sdf[6] sdi[5] sdh[4]
sde[3] sdd[2] sdc[1]&lt;/p&gt;
&lt;p&gt;7813463552 blocks super 0.91 level 6, 64k chunk, algorithm 2 [11/11]
[UUUUUUUUUUU]&lt;/p&gt;
&lt;p&gt;[======&amp;gt;..............] reshape = 30.1% (294904524/976682944) finish=585.3min
speed=19413K/sec&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>Script that shows ETA of RAID rebuild / reshape</title><link href="https://louwrentius.com/script-that-shows-eta-of-raid-rebuild-reshape.html" rel="alternate"></link><published>2009-06-28T22:43:00+02:00</published><updated>2009-06-28T22:43:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-06-28:/script-that-shows-eta-of-raid-rebuild-reshape.html</id><summary type="html">&lt;p&gt;I made a small script that converts the output of &lt;code&gt;cat /proc/mdstat&lt;/code&gt; to an
actual date and time telling you when the RAID rebuild / reshape is finished.&lt;/p&gt;
&lt;p&gt;&lt;a href="/files/raid-rebuild-eta.tgz"&gt;This is the link to the correct version of the script.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Example:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;debian:~# ./raid-rebuild-eta.sh

Estimated time of finishing rebuild / reshape:
Mon …&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</summary><content type="html">&lt;p&gt;I made a small script that converts the output of &lt;code&gt;cat /proc/mdstat&lt;/code&gt; to an
actual date and time telling you when the RAID rebuild / reshape is finished.&lt;/p&gt;
&lt;p&gt;&lt;a href="/files/raid-rebuild-eta.tgz"&gt;This is the link to the correct version of the script.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Example:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;debian:~# ./raid-rebuild-eta.sh

Estimated time of finishing rebuild / reshape:
Mon Jun 29 00:45:07 CEST 2009
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</content><category term="Storage"></category><category term="Uncategorized"></category></entry><entry><title>Linux RAID 6 performance using software RAID</title><link href="https://louwrentius.com/linux-raid-6-performance-using-software-raid.html" rel="alternate"></link><published>2009-06-28T19:32:00+02:00</published><updated>2009-06-28T19:32:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-06-28:/linux-raid-6-performance-using-software-raid.html</id><summary type="html">&lt;p&gt;So after toying around with RAID 0 just for fun, time to get serious. I
created a RAID 6 of 10 x 1 TB disks. This gives me raw device &lt;strong&gt;&lt;em&gt;read speeds
of 850 MB/s&lt;/em&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;em&gt;write speeds of 300 MB/s&lt;/em&gt;&lt;/strong&gt;. I think this is exactly
what should …&lt;/p&gt;</summary><content type="html">&lt;p&gt;So after toying around with RAID 0 just for fun, time to get serious. I
created a RAID 6 of 10 x 1 TB disks. This gives me raw device &lt;strong&gt;&lt;em&gt;read speeds
of 850 MB/s&lt;/em&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;em&gt;write speeds of 300 MB/s&lt;/em&gt;&lt;/strong&gt;. I think this is exactly
what should be expected, but boy it is damn fast. Especially the write speed
surprises me.&lt;/p&gt;
&lt;p&gt;Still, if I format the device using EXT4, write speed stays the same. However,
read speed drops to about 350 MB/s. I don't understand why. Maybe some
misalignment? I don't know.&lt;/p&gt;
&lt;p&gt;The issue with the dropped read speed is due to an incorrect chunk size of the
array. The array now consists of 20 drives, providing 18 TB of storage using
XFS. Read speeds exceed 1.1 GB/s (that is not a typo) and write speeds are
about 350 MB/s.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>1.0 GB/s using Linux software RAID</title><link href="https://louwrentius.com/10-gbs-using-linux-software-raid.html" rel="alternate"></link><published>2009-06-26T23:58:00+02:00</published><updated>2009-06-26T23:58:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-06-26:/10-gbs-using-linux-software-raid.html</id><summary type="html">&lt;blockquote&gt;
&lt;p&gt;I filled the Norco case with hardware. It is now up and running, based on
Debian Linux (Lenny).&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I immediately performed some initial tests with software RAID 0. The results
are just astounding.&lt;/p&gt;
&lt;p&gt;debian:~# dd if=/dev/md0 of=/dev/null bs=1M count=50000&lt;/p&gt;
&lt;p&gt;50000+0 records in&lt;/p&gt;
&lt;p&gt;50000 …&lt;/p&gt;</summary><content type="html">&lt;blockquote&gt;
&lt;p&gt;I filled the Norco case with hardware. It is now up and running, based on
Debian Linux (Lenny).&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I immediately performed some initial tests with software RAID 0. The results
are just astounding.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;debian:~# dd if=/dev/md0 of=/dev/null bs=1M count=50000
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 50.9222 s, 1.0 GB/s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;This is correct. &lt;strong&gt;1 Gigabyte per second&lt;/strong&gt; using 10 Samsung Spinpoint F1's
connected to the motherboard (4) and to the Highpoint Rocketraid 2340 (6).&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>Got myself a Norco RPC-4020</title><link href="https://louwrentius.com/got-myself-a-norco-rpc-4020.html" rel="alternate"></link><published>2009-06-16T19:47:00+02:00</published><updated>2009-06-16T19:47:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-06-16:/got-myself-a-norco-rpc-4020.html</id><summary type="html">&lt;p&gt;I've got this fetish for storage. So I bought a case that gives me some room
for future expansion. The current 6 TB RAID 6 storage server does not have any
room for expansion.&lt;/p&gt;
&lt;p&gt;This Norco RPC-4020 case with 20 hot swap drive bays does however. Don't know
what to …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I've got this fetish for storage. So I bought a case that gives me some room
for future expansion. The current 6 TB RAID 6 storage server does not have any
room for expansion.&lt;/p&gt;
&lt;p&gt;This Norco RPC-4020 case with 20 hot swap drive bays does however. Don't know
what to fill it with yet.&lt;/p&gt;
&lt;p&gt;&lt;a href="http://members.multiweb.nl/nan1/img/norco4020.jpg"&gt;&lt;img alt="" src="http://members.multiweb.nl/nan1/img/norco4020.jpg" /&gt;&lt;/a&gt;&lt;/p&gt;</content><category term="Hardware"></category><category term="Uncategorized"></category></entry><entry><title>Bash Shell Function Library (BSFL) released.</title><link href="https://louwrentius.com/bash-shell-function-library-bsfl-released.html" rel="alternate"></link><published>2009-05-31T20:22:00+02:00</published><updated>2009-05-31T20:22:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-05-31:/bash-shell-function-library-bsfl-released.html</id><summary type="html">&lt;p&gt;The Bash Shell Function Library (BSFL) is a small Bash script that acts as a
library for bash scripts. It provides a couple of functions that make the
lives of most people using shell scripts a bit easier.&lt;/p&gt;
&lt;p&gt;&lt;a href="http://members.multiweb.nl/nan1/img/bsfl2.png"&gt;&lt;img alt="" src="http://members.multiweb.nl/nan1/img/bsfl2.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The purpose of this library is to provide pre-build functions for actions …&lt;/p&gt;</summary><content type="html">&lt;p&gt;The Bash Shell Function Library (BSFL) is a small Bash script that acts as a
library for bash scripts. It provides a couple of functions that make the
lives of most people using shell scripts a bit easier.&lt;/p&gt;
&lt;p&gt;&lt;a href="http://members.multiweb.nl/nan1/img/bsfl2.png"&gt;&lt;img alt="" src="http://members.multiweb.nl/nan1/img/bsfl2.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The purpose of this library is to provide pre-built functions for actions that
often need to be performed, focusing on error-checking and logging. In this
example, a test-script is written that demonstrates the functions the library
provides.&lt;/p&gt;
&lt;p&gt;The project can be found at &lt;a href="http://code.google.com/p/bsfl/"&gt;this location.&lt;/a&gt;&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>'Tip of the day for every Linux or Unix user: brace expantion'</title><link href="https://louwrentius.com/tip-of-the-day-for-every-linux-or-unix-user-brace-expantion.html" rel="alternate"></link><published>2009-05-02T22:24:00+02:00</published><updated>2009-05-02T22:24:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-05-02:/tip-of-the-day-for-every-linux-or-unix-user-brace-expantion.html</id><summary type="html">&lt;p&gt;Searching the web I discovered some really nice feature of the unix shell,
which I didn't know about.&lt;/p&gt;
&lt;p&gt;Try this:&lt;/p&gt;
&lt;p&gt;touch foobar.conf&lt;/p&gt;
&lt;p&gt;Now try this:&lt;/p&gt;
&lt;p&gt;cp foobar.conf{,.bak}&lt;/p&gt;
&lt;p&gt;It is equivalent to:&lt;/p&gt;
&lt;p&gt;cp foobar.conf foobar.conf.bak&lt;/p&gt;
&lt;p&gt;This is also the easiest way to create sequences. Do …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Searching the web I discovered a really nice feature of the Unix shell,
which I didn't know about.&lt;/p&gt;
&lt;p&gt;Try this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;touch foobar.conf
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Now try this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;cp foobar.conf{,.bak}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;It is equivalent to:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;cp foobar.conf foobar.conf.bak
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;This is also the easiest way to create sequences. Do not use 'seq' since you
cannot rely on it being installed.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;bash-3.2$ echo {1..10}
1 2 3 4 5 6 7 8 9 10
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
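&lt;p&gt;Brace expansion also composes: placing a brace list next to another list or sequence expands to all combinations. A small illustration (plain Bash, using nothing beyond the features shown above):&lt;/p&gt;

```shell
#!/usr/bin/env bash
# A brace list next to another brace list expands to every combination
echo {a,b}{1,2}        # prints: a1 a2 b1 b2
# Sequences can be embedded in larger words, e.g. to generate file names
echo log.{1..3}.txt    # prints: log.1.txt log.2.txt log.3.txt
```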
&lt;p&gt;Please visit &lt;a href="http://f241vc15.com/2008/02/29/doing-cool-things-in-bash-and-in-linux/"&gt;this location&lt;/a&gt; for additional examples.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>The Dirtiest Computer In The World</title><link href="https://louwrentius.com/the-dirtiest-computer-in-the-world.html" rel="alternate"></link><published>2009-04-30T18:20:00+02:00</published><updated>2009-04-30T18:20:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-04-30:/the-dirtiest-computer-in-the-world.html</id><summary type="html">&lt;p&gt;I helped a family the other day with a malfunctioning computer. The system had
a tendency to shutdown at random during games.&lt;/p&gt;
&lt;p&gt;Hmm...&lt;/p&gt;
&lt;p&gt;It didn't take long to discover why.&lt;/p&gt;
&lt;p&gt;&lt;img alt="" src="http://members.multiweb.nl/nan1/img/dirty/Thumbnails/10.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;I think this is probably one of the filthiest computers in the world.&lt;/p&gt;
&lt;p&gt;More photos at &lt;a href="http://members.multiweb.nl/nan1/img/dirty/"&gt;http://members.multiweb.nl …&lt;/a&gt;&lt;/p&gt;</summary><content type="html">&lt;p&gt;I helped a family the other day with a malfunctioning computer. The system had
a tendency to shutdown at random during games.&lt;/p&gt;
&lt;p&gt;Hmm...&lt;/p&gt;
&lt;p&gt;It didn't take long to discover why.&lt;/p&gt;
&lt;p&gt;&lt;img alt="" src="http://members.multiweb.nl/nan1/img/dirty/Thumbnails/10.jpg" /&gt;&lt;/p&gt;
&lt;p&gt;I think this is probably one of the filthiest computers in the world.&lt;/p&gt;
&lt;p&gt;More photos at &lt;a href="http://members.multiweb.nl/nan1/img/dirty/"&gt;http://members.multiweb.nl/nan1/img/dirty/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Smoking doesn't only kill people, it seems.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>Compatibility Highpoint RocketRAID 2320 and Samsung Spinpoint F1</title><link href="https://louwrentius.com/compatibility-highpoint-rocketraid-2320-and-samsung-spinpoint-f1.html" rel="alternate"></link><published>2009-04-26T22:38:00+02:00</published><updated>2009-04-26T22:38:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-04-26:/compatibility-highpoint-rocketraid-2320-and-samsung-spinpoint-f1.html</id><summary type="html">&lt;p&gt;There are some reports about incompatibility between RAID controllers and
Samsung Spinpoint F1 drives. I have no troubles with my 0.5 and 1.0 TB drives
from Samsung using the mentioned controller. See below:&lt;/p&gt;
&lt;p&gt;Controller 1: RocketRAID 232x SATA Controller&lt;/p&gt;
&lt;p&gt;1/1/1 SAMSUNG HD103UJ 1000123MB, Normal&lt;/p&gt;
&lt;p&gt;1/2/1 …&lt;/p&gt;</summary><content type="html">&lt;p&gt;There are some reports about incompatibility between RAID controllers and
Samsung Spinpoint F1 drives. I have no troubles with my 0.5 and 1.0 TB drives
from Samsung using the mentioned controller. See below:&lt;/p&gt;
&lt;p&gt;Controller 1: RocketRAID 232x SATA Controller&lt;/p&gt;
&lt;p&gt;1/1/1 SAMSUNG HD103UJ 1000123MB, Normal&lt;/p&gt;
&lt;p&gt;1/2/1 SAMSUNG HD103UJ 1000123MB, Normal&lt;/p&gt;
&lt;p&gt;1/3/1 SAMSUNG HD103UJ 1000123MB, Normal&lt;/p&gt;
&lt;p&gt;1/4/1 SAMSUNG HD103UJ 1000123MB, Normal&lt;/p&gt;
&lt;p&gt;1/5/1 SAMSUNG HD501LJ 500028MB, Normal&lt;/p&gt;
&lt;p&gt;1/6/1 SAMSUNG HD501LJ 500028MB, Normal&lt;/p&gt;
&lt;p&gt;1/7/1 SAMSUNG HD501LJ 500028MB, Normal&lt;/p&gt;
&lt;p&gt;1/8/1 SAMSUNG HD501LJ 500028MB, Normal&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>Automated install of Debian Linux based on PXE net booting</title><link href="https://louwrentius.com/automated-install-of-debian-linux-based-on-pxe-net-booting.html" rel="alternate"></link><published>2009-04-25T16:30:00+02:00</published><updated>2009-04-25T16:30:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-04-25:/automated-install-of-debian-linux-based-on-pxe-net-booting.html</id><summary type="html">&lt;p&gt;Every good and honest system administrator is continually busy automating
his work. For two reasons:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Repeating the same task over and over again is friggin boring. A system
administrator has better things to do, such as drinking coffee.&lt;/li&gt;
&lt;li&gt;Humans make mistakes, especially when the work is boring. Computers do not.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If a …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Every good and honest system administrator is continually busy automating
his work. For two reasons:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Repeating the same task over and over again is friggin boring. A system
administrator has better things to do, such as drinking coffee.&lt;/li&gt;
&lt;li&gt;Humans make mistakes, especially when the work is boring. Computers do not.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If a computer can do a certain job, it will always do it faster and better than
a human. Automating system installation is both more time-efficient and allows
you to deliver consistent quality.&lt;/p&gt;
&lt;h3&gt;Netbooting or PXE booting&lt;/h3&gt;
&lt;p&gt;Regarding the installation of hosts, the holy grail of automated installation
is netbooting or PXE booting. Almost every system today contains a network
interface card that supports booting over the network. A system obtains
instructions from the local DHCP server about where to fetch an operating
system kernel. The kernel is downloaded using TFTP and then loaded. From then on, the
operating system takes over and the installation continues, for example based
on Debian preseeding and/or FAI.&lt;/p&gt;
&lt;h3&gt;How to prepare for netbooting&lt;/h3&gt;
&lt;p&gt;The following requirements must be met:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;a DHCP server must be available&lt;/li&gt;
&lt;li&gt;a TFTP server must be available&lt;/li&gt;
&lt;li&gt;the correct files for netbooting must be in place&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Configuring the DHCP server&lt;/h3&gt;
&lt;p&gt;The following two lines must be added to the 'subnet' section of your DHCP
server configuration.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nv"&gt;filename&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;pxelinux.0&amp;quot;&lt;/span&gt;&lt;span class="c1"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;next&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nv"&gt;server&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;.&lt;span class="mi"&gt;0&lt;/span&gt;.&lt;span class="mi"&gt;0&lt;/span&gt;.&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="c1"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The 'next-server' option specifies the IP address of the system that is
running the TFTP server, so change it to match your configuration; this is
just an example.&lt;/p&gt;
&lt;p&gt;Don't forget to restart the DHCP server daemon.&lt;/p&gt;
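&lt;p&gt;Putting it together, a complete ISC dhcpd subnet declaration could look roughly like this. The addresses, range and router are made-up examples; adjust them to your own network:&lt;/p&gt;

```
subnet 10.0.0.0 netmask 255.255.255.0 {
    range 10.0.0.100 10.0.0.200;
    option routers 10.0.0.1;
    filename "pxelinux.0";
    next-server 10.0.0.1;
}
```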
&lt;h3&gt;Configuring the TFTP server&lt;/h3&gt;
&lt;p&gt;First, make sure you install "tftpd-hpa" since the standard "tftpd" server
does not seem to support the "tsize" option. Then, edit
/etc/default/tftpd-hpa like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="n"&gt;RUN_DAEMON&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;yes&amp;quot;&lt;/span&gt;
&lt;span class="n"&gt;OPTIONS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;-l -a -R 30000:30100 -s /var/lib/tftpboot&amp;quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Do not run the TFTP server from inetd because the above lines provide more
control over how the server behaves, especially in regard to firewalls.&lt;/p&gt;
&lt;p&gt;The -R option specifies the port-range used for data transfers. This port
range should also be configured within your firewall configuration. Watch out!
Do not allow TFTP access from the Internet. TFTP requires NO authentication
and is very insecure.&lt;/p&gt;
&lt;p&gt;Start the TFTPD server with:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;/etc/init.d/tftpd-hpa start
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Install the files required for netbooting&lt;/h3&gt;
&lt;p&gt;The fun thing is that Debian provides a complete package for netbooting. So cd
to /var/lib/tftpboot and enter:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;wget http://ftp.debian.org/debian/dists/lenny/main/installer-i386/current
     /imag es/netboot/netboot.tar.gz
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Then extract the contents of netboot.tar.gz like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;tar xzf netboot.tar.gz
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;That is all there is to it. If you start a host and make it boot using PXE, it
will show you the regular installation menu that is also shown when a system
is booted from a regular Debian installation CD-ROM.&lt;/p&gt;
&lt;p&gt;However, if you want an automated installation and do not want to use this
boot menu, first cd to:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;var&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;lib&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;tftpboot&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;debian&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;installer&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;i386&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;boot&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;screens&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Then edit syslinux.cfg and comment this rule out:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;default debian-installer/i386/boot-screens/vesamenu.c32
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If you want to use preseeding, first edit adtxt.cfg and go to the 'auto'
label. Edit it like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="nx"&gt;label&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kt"&gt;auto&lt;/span&gt;
&lt;span class="nx"&gt;menu&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;label&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;^&lt;/span&gt;&lt;span class="nx"&gt;Automated&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;install&lt;/span&gt;
&lt;span class="nx"&gt;kernel&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;debian&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;installer&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;i386&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;linux&lt;/span&gt;
&lt;span class="nx"&gt;append&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kt"&gt;auto&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;priority&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;critical&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;vga&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;normal&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;initrd&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;debian&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;installer&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;i386&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;initrd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;gz&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//(IP-address)/preseed/preseed.cfg -- quiet&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The IP-address section should point towards the preseed server that is hosting
the preseed configuration file.&lt;/p&gt;
&lt;p&gt;Last, edit txt.cfg. Change 'default install' to:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;default auto
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;I always thought that PXE booting was a pain to set up. However, I got it
working within 60 minutes &lt;a href="http://www.debian-administration.org/articles/478"&gt;using this howto&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I am transcoding a DVD (Grave of the Fireflies) to iPod format (640x480
x264).&lt;/p&gt;
&lt;p&gt;Thread support is enabled, so FFmpeg uses about 250% CPU. That's 2.5 of the 4
cores available. If possible, I would have …&lt;/p&gt;</summary><content type="html">&lt;p&gt;The system i'm running is a Core i7 920 @ 3.6 Ghz.&lt;/p&gt;
&lt;p&gt;I am transcoding a DVD (Grave of the Fireflies) to iPod format (640x480
x264).&lt;/p&gt;
&lt;p&gt;Thread support is enabled, so FFmpeg uses about 250% CPU. That's 2.5 of the 4
cores available. If possible, I would have liked to see it use all four to the
max.&lt;/p&gt;
&lt;p&gt;Anyway, I use these settings (FFmpeg version SVN-r18628):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;ffmpeg -i $1 -pass 1 -acodec libfaac -ab 128k -ac 2 -vcodec libx264 -vpre normal -vpre ipod640 -s 640x480 -b 512k -bt 512k -threads 0 -f mp4 $2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;With these settings, I get an encoding speed of fps=193.&lt;/p&gt;
&lt;p&gt;I don't know how that stacks up against other systems. It seems fast to me
though. In effect, the system is encoding at about 7.7 times the movie's 25
fps playback speed (193 / 25 ≈ 7.7). So encoding the 88-minute DVD to iPod
x264 takes roughly 88 / 7.7 ≈ 11.5 minutes.&lt;/p&gt;
code provides a nice basis for escaping special characters. Of course it is not
complete, but the most important characters are filtered.&lt;/p&gt;
&lt;p&gt;If anybody has a better solution, please let me know. It works and it is …&lt;/p&gt;</summary><content type="html">&lt;p&gt;After fighting with Bash for quite some time, I found out that the following
code provides a nice basis for escaping special characters. Of course it is not
complete, but the most important characters are filtered.&lt;/p&gt;
&lt;p&gt;If anybody has a better solution, please let me know. It works and it is
readable but not pretty.&lt;/p&gt;
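&lt;p&gt;One candidate for a better solution, assuming a Bash version that provides it, is the built-in printf with the %q format, which escapes a string so it can safely be reused as shell input. This is offered as a sketch, not a drop-in replacement for the sed-based approach below:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# printf %q escapes spaces, quotes, parentheses and other shell
# metacharacters in one pass (Bash builtin, not available in plain sh)
FILE="my file (draft) 'v2'.txt"
FILE_ESCAPED=$(printf '%q' "$FILE")
echo "$FILE_ESCAPED"
```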
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;FILE_ESCAPED=`echo "$FILE" | \
sed s/\ /\\\\\\\ /g | \
sed s/\'/\\\\\\\'/g | \
sed s/\&amp;amp;/\\\\\\\&amp;amp;/g | \
sed s/\;/\\\\\\\;/g | \
sed s/(/\\\\\(/g | \
sed s/)/\\\\\)/g `
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</content><category term="Linux"></category><category term="Uncategorized"></category></entry><entry><title>Distributed Parallel Processing Shell Script (PPSS) released</title><link href="https://louwrentius.com/distributed-parallel-processing-shell-script-ppss-released.html" rel="alternate"></link><published>2009-03-12T22:05:00+01:00</published><updated>2009-03-12T22:05:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-03-12:/distributed-parallel-processing-shell-script-ppss-released.html</id><summary type="html">&lt;p&gt;I'd like to announce the release of the distributed version of the Parallel
Processing Shell Script (&lt;a href="http://code.google.com/p/ppss"&gt;PPSS&lt;/a&gt;). PPSS is a bash script that allows you to
run commands in parallel. It is written to make use of current multi-core
CPUs.&lt;/p&gt;
&lt;p&gt;The new distributed version of PPSS allows you to run …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I'd like to announce the release of the distributed version of the Parallel
Processing Shell Script (&lt;a href="http://code.google.com/p/ppss"&gt;PPSS&lt;/a&gt;). PPSS is a bash script that allows you to
run commands in parallel. It is written to make use of current multi-core
CPUs.&lt;/p&gt;
&lt;p&gt;The new distributed version of PPSS allows you to run PPSS on multiple hosts,
that simultaneously work on a single group of files or other items. A central
server is used for file locking. This way, nodes know which files are 'locked'
by other nodes and/or have already been processed.&lt;/p&gt;
&lt;p&gt;PPSS is written to be very easy to use. You can be up and running on a single
host within 5 minutes. Distributed usage requires the (one-time) setup of some
SSH accounts on your nodes and server, but that's about it. Please take a look
at distributed PPSS for yourself at:&lt;/p&gt;
&lt;p&gt;&lt;a href="http://code.google.com/p/ppss"&gt;http://code.google.com/p/ppss&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I used distributed PPSS to encode 400 GB of WAV files to MP3 using 4 computer
systems. A total of 14 cores were available to PPSS to encode those files
using Lame. However, PPSS is not limited to encoding WAV files. To the
contrary, it is written to execute whatever command you want to execute. Gzip
a bunch of files in parallel? No problem. It is totally up to your
imagination.&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>Core i7 920 @ 3,6 Ghz is a true beast!</title><link href="https://louwrentius.com/core-i7-920-36-ghz-is-a-true-beast.html" rel="alternate"></link><published>2009-03-04T21:19:00+01:00</published><updated>2009-03-04T21:19:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-03-04:/core-i7-920-36-ghz-is-a-true-beast.html</id><summary type="html">&lt;p&gt;Even today, Core 2 Duo processors clocked at 2 ghz are no slugs. However, the
Core i7 920 is of a different kind. First, it is not only clocked at a higher
speed (default 2,8 Ghz), it is also a quad-core processor. Thanks to the re-
introduction of hyperthreading …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Even today, Core 2 Duo processors clocked at 2 ghz are no slugs. However, the
Core i7 920 is of a different kind. First, it is not only clocked at a higher
speed (default 2,66 Ghz), it is also a quad-core processor. Thanks to the
re-introduction of hyperthreading, this processor can handle 8 parallel
processes simultaneously.&lt;/p&gt;
&lt;p&gt;Just how fast a Core i7 can be, especially when overclocked to 3,6 Ghz, is
shown in this diagram:&lt;/p&gt;
&lt;p&gt;&lt;a href="http://chart.apis.google.com/chart?cht=p3&amp;amp;chd=t:66,11,11,12&amp;amp;chs=350x150&amp;amp;chl=Core%207i%20|AMD|iMac|Mac%20Mini&amp;amp;noncense=test.png"&gt;&lt;img alt="" src="http://chart.apis.google.com/chart?cht=p3&amp;amp;chd=t:66,11,11,12&amp;amp;chs=350x150&amp;amp;chl=Core%207i%20|AMD|iMac|Mac%20Mini&amp;amp;noncense=test.png" /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Using my still in development version of PPSS, four systems processed 400 GB
of WAV files and converted them to MP3. This simple pie-chart shows that the
Core i7 on its own, using 8 parallel processes, managed to process 2/3 of the
files. The Core i7 was way faster than the other 3 systems combined! This is
marvelous, I think. And it seems all due to Hyperthreading. If an additional
dual-core system had been added, the other systems combined would also have
had 8 parallel threads available and would have processed roughly 50% of the
items. However, please note that the Core i7 is a quad-core processor and has
'only' four physical cores...&lt;/p&gt;</content><category term="Hardware"></category><category term="Uncategorized"></category></entry><entry><title>'Linux: unattended installation with Debian preseeding'</title><link href="https://louwrentius.com/linux-unattended-installation-with-debian-preseeding.html" rel="alternate"></link><published>2009-02-22T19:38:00+01:00</published><updated>2009-02-22T19:38:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-02-22:/linux-unattended-installation-with-debian-preseeding.html</id><summary type="html">&lt;blockquote&gt;
&lt;p&gt;Debian Linux provides a mechanism to install the operating
system without user intervention. This mechanism is called 'preseeding' and is
similar to Red Hat Kick Start and Sun Solaris Jump Start.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The basic idea is that the installer is fed a recipe, according to which the
system is installed. This …&lt;/p&gt;</summary><content type="html">&lt;blockquote&gt;
&lt;p&gt;Debian Linux provides a mechanism to install the operating
system without user intervention. This mechanism is called 'preseeding' and is
similar to Red Hat Kick Start and Sun Solaris Jump Start.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The basic idea is that the installer is fed a recipe, according to which the
system is installed. This recipe can be fed by a floppy, usb stick, cdrom, or
through a web server over the network. To use such a recipe, just boot from a
Debian CD-rom and issue the following command:&lt;/p&gt;
&lt;p&gt;Floppy based (you really shouldn't be using those anymore):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Boot: auto file=/floppy/preseed.cfg
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;USB stick based:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Boot: auto file=/hd-media/preseed.cfg
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Network based:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;Boot: auto url=http://internal.web.server.com/preseed.cfg
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The only work you have to do is to create a preseed configuration file. This
is really simple, since preseeding is well-documented and preseed
configuration files are easy to understand.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;d-i  debian-installer/country  string US
d-i  debian-installer/locale  string en_US.UTF-8
d-i  mirror/country  string manual
d-i  mirror/http/hostname  string ftp.uk.debian.org
d-i  mirror/http/directory  string /debian
base-config  apt-setup/hostname  string ftp.uk.debian.org
base-config  apt-setup/directory  string /debian
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;As you can see, it is just a text-based file that configures some variables
that are used during installation. It is basically an answer file. Questions
that are asked by the installer during installation are answered with the
preseed file.&lt;/p&gt;
&lt;p&gt;For a full example, take a look &lt;a href="http://hands.com/d-i/lenny/preseed.cfg"&gt;here.&lt;/a&gt; &lt;/p&gt;
&lt;p&gt;Very extensive documentation can be found &lt;a href="http://d-i.alioth.debian.org/manual/en.i386/apb.html"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;A minimal Debian installation without support for X can be installed within
2.5 minutes, assuming a network-based installation (tested in VMware
Workstation).&lt;/p&gt;
&lt;p&gt;Please note that if your company uses Debian Linux not only for servers but
also for desktops / laptops, preseeding is an ideal solution to provide your
users with a new and fresh installation whenever they want. Users or sysadmins
shouldn't be busy manually installing these systems.&lt;/p&gt;
&lt;p&gt;I have implemented Debian Preseeding to create a fully unattended and
automated installation of laptops, based on &lt;a href="http://en.wikipedia.org/wiki/Linux_Unified_Key_Setup"&gt;LUKS full disk encryption&lt;/a&gt;,
which is supported by the Debian installer (!), with all required software
installed. All additional software is installed with a custom installation
framework based on shell-scripts. The installation framework makes sure that
if anything goes wrong during installation, it is noticed. &lt;/p&gt;
&lt;p&gt;Unattended installation allows system administrators to quickly deploy new
installations and guarantee that such installations are 100% correct. They
rule out the human factor, which tends to introduce random errors. So take a
look at Debian Preseeding and decide for yourself how useful it is.&lt;/p&gt;</content><category term="Linux"></category><category term="Uncategorized"></category></entry><entry><title>Why Debian/Ubuntu Linux is to be preferred</title><link href="https://louwrentius.com/why-debianubuntu-linux-is-to-be-preferred.html" rel="alternate"></link><published>2009-02-16T19:38:00+01:00</published><updated>2009-02-16T19:38:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-02-16:/why-debianubuntu-linux-is-to-be-preferred.html</id><summary type="html">&lt;p&gt;There are many Linux distributions around. However, I always come back to just
one: Debian. The reason why so many people use Debian is the same reason I
like it so much: software management.  With good old apt-get or the new
aptitude, software is installed within minutes. Due to the …&lt;/p&gt;</summary><content type="html">&lt;p&gt;There are many Linux distributions around. However, I always come back to just
one: Debian. The reason why so many people use Debian is the same reason I
like it so much: software management.  With good old apt-get or the new
aptitude, software is installed within minutes. Due to the vast amount of
software available even the most obscure software can be installed without
resorting to manually downloading and compiling.&lt;/p&gt;
&lt;p&gt;But the most important aspect of Debian is its mantra of stability. It is
built for servers, for people who don't want to take risks and prefer
stability and security above anything else. This is also the main gripe most
people have about Debian: it is often not very up-to-date regarding drivers or
the latest software versions. If that is a problem, you can still run the
testing branch of Debian, trading the risk of things breaking or becoming
unstable for the availability of newer software.&lt;/p&gt;
&lt;p&gt;As a part-time system administrator, one of the components of Debian I
appreciate most is its installer, especially the "preseeding" bit. Preseeding
is for Debian what Kickstart is for Red Hat and JumpStart is for Sun Solaris.
It allows a fully unattended installation of Debian Linux on any
hardware without ever touching your keyboard. This isn't new, but it is much
more user-friendly than, for example, Kickstart. &lt;/p&gt;
&lt;p&gt;Debian Preseeding is very well documented and can easily be extended to run
your own scripts after installation for some post-configuration.&lt;/p&gt;
&lt;p&gt;I currently use it to install hosts by booting them from a USB stick and
performing a network install. Not only are network installs often the fastest
solution, assuming that a local Debian mirror is available, the resulting
system is also immediately up-to-date. &lt;/p&gt;
&lt;p&gt;About preseeding:&lt;/p&gt;
&lt;p&gt;&lt;a href="http://d-i.alioth.debian.org/manual/en.i386/apb.html"&gt;http://d-i.alioth.debian.org/manual/en.i386/apb.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="http://wiki.debian.org/DebianInstaller/Preseed"&gt;http://wiki.debian.org/DebianInstaller/Preseed&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;About setting up a local Debian mirror (requires about 50 GB of storage space
on a web server): &lt;/p&gt;
&lt;p&gt;&lt;a href="http://www.howtoforge.com/local_debian_ubuntu_mirror"&gt;http://www.howtoforge.com/local_debian_ubuntu_mirror&lt;/a&gt;&lt;/p&gt;</content><category term="Linux"></category><category term="Uncategorized"></category></entry><entry><title>Why I still won't switch to Linux and keep my Mac</title><link href="https://louwrentius.com/why-i-still-wont-switch-to-linux-and-keep-my-mac.html" rel="alternate"></link><published>2009-02-01T22:49:00+01:00</published><updated>2009-02-01T22:49:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-02-01:/why-i-still-wont-switch-to-linux-and-keep-my-mac.html</id><summary type="html">&lt;p&gt;The current state of Linux is amazing. If we take a look at, for example,
Ubuntu Linux, we have to admit that the Linux desktop is really becoming a
nice, user-friendly environment. I'm truly starting to like what I see. I
considered whiping Mac OS X from my macbook, but …&lt;/p&gt;</summary><content type="html">&lt;p&gt;The current state of Linux is amazing. If we take a look at, for example,
Ubuntu Linux, we have to admit that the Linux desktop is really becoming a
nice, user-friendly environment. I'm truly starting to like what I see. I
considered wiping Mac OS X from my MacBook, but there are some reasons why I
won't. &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;I like to mess around in Adobe Photoshop now and then. There is no serious
Linux alternative.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;I've got an iPhone, so now I'm 'stuck' with iTunes. I really like my iPhone
btw.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;I really like iPhoto: it is very easy to use and makes archiving my photos simple.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;My entire music collection is in iTunes; the song ratings especially are
important to me.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Maybe those reasons can be overcome. But for day-to-day usage, I still prefer
Mac OS X over Linux. Especially software like iPhoto makes life so much easier.
The hardware comes at a premium, but I think it is worth it.&lt;/p&gt;
nvidia graphics driver. This is a manual proces by default. However, it can be
done fully automatic:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;prerequisite:&lt;/em&gt; install xserver-xorg-dev package or similar xorg development
package.&lt;/p&gt;
&lt;p&gt;sh NVIDIA-.run -q -a -n -X -s&lt;/p&gt;
&lt;p&gt;That's all there …&lt;/p&gt;</summary><content type="html">&lt;p&gt;As part of an unattended installation, it was necessary to install a binary
nvidia graphics driver. This is a manual process by default. However, it can be
done fully automatically:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;prerequisite:&lt;/em&gt; install xserver-xorg-dev package or similar xorg development
package.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;sh NVIDIA-.run -q -a -n -X -s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
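&lt;p&gt;For scripted use, the invocation can be wrapped in a small helper (a sketch; "NVIDIA.run" is a placeholder for the actual installer filename, which carries a version number):&lt;/p&gt;

```shell
# build_nvidia_cmd composes the silent-install invocation; the caller passes
# the actual installer filename. Echoing the command makes dry runs easy.
build_nvidia_cmd() {
    echo "sh $1 -q -a -n -X -s"
}

# Dry run; pipe the output to sh to actually execute it:
build_nvidia_cmd "NVIDIA.run"
```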
&lt;p&gt;That's all there is to it. The -q option means quiet, the -a option means
accept the licence, the -n option suppresses questions, the -X option updates the
xorg.conf file and the -s option disables the ncurses interface.&lt;/p&gt;</content><category term="Linux"></category><category term="Uncategorized"></category></entry><entry><title>Release of PPSS - the Parallel Processing Shell Script</title><link href="https://louwrentius.com/release-of-ppss-the-parallel-processing-shell-script.html" rel="alternate"></link><published>2009-01-03T01:51:00+01:00</published><updated>2009-01-03T01:51:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2009-01-03:/release-of-ppss-the-parallel-processing-shell-script.html</id><summary type="html">&lt;p&gt;PPSS is a shell script that processess files or other items in parallel. It is
designed to make use of the current multi-core CPUs. It will detect the number
of available CPUs and start a thread for each CPU core. &lt;/p&gt;
&lt;p&gt;This script is build with the goal to be very …&lt;/p&gt;</summary><content type="html">&lt;p&gt;PPSS is a shell script that processes files or other items in parallel. It is
designed to make use of the current multi-core CPUs. It will detect the number
of available CPUs and start a thread for each CPU core. &lt;/p&gt;
&lt;p&gt;This script is built with the goal of being very easy to use. Also, it must be
robust (atomic) and portable. It employs a locking mechanism and runs on
Linux and Mac OS X. It should work on other Unix-like operating systems that
support the bash shell.&lt;/p&gt;
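&lt;p&gt;The locking idea can be illustrated in a few lines of portable shell (a sketch of the mechanism, not PPSS itself): mkdir is atomic, so when several workers race for the same item, exactly one of them wins.&lt;/p&gt;

```shell
# claim_item: atomically claim one work item via a mkdir lock, then run a
# stand-in job (copying the item to item.done). Losing workers skip the item.
claim_item() {
    item="$1"
    lockdir="$2"
    if mkdir "$lockdir/$(basename "$item").lock" 2>/dev/null; then
        cp "$item" "$item.done"   # stand-in for the real per-item job
    fi
}

# PPSS itself starts one worker loop per detected core (e.g. using nproc on
# Linux or "sysctl -n hw.ncpu" on Mac OS X), each calling claim_item per item.
```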
&lt;p&gt;Please visit &lt;a href="http://code.google.com/p/ppss/"&gt;http://code.google.com/p/ppss/&lt;/a&gt; for a download.&lt;/p&gt;</content><category term="Projects"></category><category term="Uncategorized"></category></entry><entry><title>Benefits of hyper-threading on a Core 7i processor</title><link href="https://louwrentius.com/benefits-of-hyper-threading-on-a-core-7i-processor.html" rel="alternate"></link><published>2008-12-28T00:26:00+01:00</published><updated>2008-12-28T00:26:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2008-12-28:/benefits-of-hyper-threading-on-a-core-7i-processor.html</id><summary type="html">&lt;p&gt;I had some wave files that I wanted to encode to mp3. I wrote a small parallel
processing framework for this job. It executes parallel jobs so I can benefit
from the 4 cores + 4 virtual cores of my new Core 7i 920 processor. &lt;/p&gt;
&lt;p&gt;The framework is based on some …&lt;/p&gt;</summary><content type="html">&lt;p&gt;I had some wave files that I wanted to encode to mp3. I wrote a small parallel
processing framework for this job. It executes parallel jobs so I can benefit
from the 4 cores + 4 virtual cores of my new Core i7 920 processor. &lt;/p&gt;
&lt;p&gt;The framework is based on some shell scripts running on
Linux (or probably any other Unix) and Lame.&lt;/p&gt;
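&lt;p&gt;The heart of such a framework can be very small. A hedged sketch (not the actual scripts I used; the encoder command is a parameter, so any command-per-file tool works):&lt;/p&gt;

```shell
# encode_parallel: run one encoder process per detected (logical) core until
# all .wav files in the directory are processed. With lame as the encoder
# this becomes a parallel wav-to-mp3 pipeline.
encode_parallel() {
    dir="$1"
    encoder="${2:-lame}"
    jobs="$(nproc 2>/dev/null || sysctl -n hw.ncpu 2>/dev/null || echo 2)"
    ls "$dir"/*.wav | xargs -P "$jobs" -n 1 "$encoder"
}
```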
&lt;p&gt;I wanted to test how much impact hyper-threading has on parallel performance.
Hyper-threading has a bit of a bad reputation, dating back to the P4 era:
lots of 'hype' but no significant performance gain. &lt;/p&gt;
&lt;p&gt;I ran some tests with hyper-threading enabled and disabled (within the BIOS). &lt;/p&gt;
&lt;p&gt;In this test I used 16 wav files and encoded them to mp3 with Lame under
Linux. The data in the table are raw measurements.&lt;/p&gt;
&lt;p&gt;&lt;img alt="" src="/static/images/ht1.png" /&gt;&lt;/p&gt;
&lt;p&gt;As you can see, the benefit of HT is real and can improve processing speed
by up to 24 percent. &lt;/p&gt;
&lt;p&gt;As more threads are run simultaneously, processing speed is significantly
improved compared to non-HT. &lt;/p&gt;</content><category term="Hardware"></category><category term="hyper-threading hyperthreading"></category></entry><entry><title>Rebooting results in degraded RAID array using Debian Lenny</title><link href="https://louwrentius.com/rebooting-results-in-degraded-raid-array-using-debian-lenny.html" rel="alternate"></link><published>2008-12-24T16:49:00+01:00</published><updated>2008-12-24T16:49:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2008-12-24:/rebooting-results-in-degraded-raid-array-using-debian-lenny.html</id><summary type="html">&lt;p&gt;As described earlier, I setup a RAID 6 array consisting of physical 1 TB disk
and 'virtual' 1 TB disks that are in fact two 0.5 TB disks in RAID 0. &lt;/p&gt;
&lt;p&gt;I wanted to upgrade to Lenny because the new kernel that ships with Lenny
supports growing a RAID …&lt;/p&gt;</summary><content type="html">&lt;p&gt;As described earlier, I set up a RAID 6 array consisting of physical 1 TB disks
and 'virtual' 1 TB disks that are in fact each two 0.5 TB disks in RAID 0. &lt;/p&gt;
&lt;p&gt;I wanted to upgrade to Lenny because the new kernel that ships with Lenny
supports growing a RAID 6 array. After installing Lenny the RAID 0 devices
were running smoothly, but were not recognised as part of the RAID 6. &lt;/p&gt;
&lt;p&gt;So the array was running in degraded mode. That is bad.&lt;/p&gt;
&lt;p&gt;In Lenny, a new version of mdadm is used that requires the presence of the
mdadm.conf file. The mdadm.conf file contains these lines: &lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="gh"&gt;#&lt;/span&gt;DEVICE partitions
&lt;span class="gh"&gt;#&lt;/span&gt;DEVICE /dev/md*
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;After I uncommented the "DEVICE /dev/md*" line and generated a new initramfs
file with:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;update-initramfs -u
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The RAID 0 drives were recognised as part of a RAID array and everything was
OK again. So mdadm must be instructed to check whether /dev/md? devices are
members of a RAID array. &lt;/p&gt;
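&lt;p&gt;The whole fix can be scripted; a hedged sketch (the function edits whatever file path you pass it, so try it on a copy of mdadm.conf first):&lt;/p&gt;

```shell
# enable_md_scan: uncomment the "DEVICE /dev/md*" line in an mdadm.conf-style
# file so nested md devices are scanned as potential array members.
enable_md_scan() {
    conf="$1"
    sed 's|^#DEVICE /dev/md\*|DEVICE /dev/md*|' "$conf" > "$conf.new"
    mv "$conf.new" "$conf"
}

# Afterwards, rebuild the initramfs so the change takes effect at boot:
#   update-initramfs -u
```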
&lt;p&gt;I guess this is also relevant if you are running a RAID 10 based on a mirrored
stripe or a striped mirror.&lt;/p&gt;</content><category term="Linux"></category><category term="Uncategorized"></category></entry><entry><title>Calculating EXT2 EXT3 EXT4 stride size when using RAID</title><link href="https://louwrentius.com/calculating-ext2-ext3-ext4-stride-size-when-using-raid.html" rel="alternate"></link><published>2008-12-20T22:15:00+01:00</published><updated>2008-12-20T22:15:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2008-12-20:/calculating-ext2-ext3-ext4-stride-size-when-using-raid.html</id><summary type="html">&lt;p&gt;When formatting a RAID device with an EXT filesystem, it is always advised to
specify a stride size. The format utility will take this stride size into
account when formatting a device. The stride size is the number you get when
you divide the 'chunck' size, as specified with MDADM …&lt;/p&gt;</summary><content type="html">&lt;p&gt;When formatting a RAID device with an EXT filesystem, it is always advised to
specify a stride size. The format utility will take this stride size into
account when formatting a device. The stride size is the number you get when
you divide the 'chunk' size, as specified with mdadm, by the filesystem block
size (almost always 4 KB).&lt;/p&gt;
&lt;p&gt;So a 128 KB chunk size gives you a stride of 32. A nice and simple utility to
calculate your stride can be found here:&lt;/p&gt;
&lt;p&gt;&lt;a href="http://busybox.net/~aldot/mkfs_stride.html"&gt;http://busybox.net/~aldot/mkfs_stride.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Please note that the stripe-width option does not seem to work on Debian Etch.
Maybe because that option is too old or too new.&lt;/p&gt;
&lt;p&gt;I also think that most recent tools automatically detect and calculate the correct 
stride size for you.&lt;/p&gt;</content><category term="Storage"></category><category term="Uncategorized"></category></entry><entry><title>Did you know that RDP is as insecure as telnet?</title><link href="https://louwrentius.com/did-you-know-that-rdp-is-as-insecure-as-telnet.html" rel="alternate"></link><published>2008-11-21T09:47:00+01:00</published><updated>2008-11-21T09:47:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2008-11-21:/did-you-know-that-rdp-is-as-insecure-as-telnet.html</id><summary type="html">&lt;p&gt;Microsoft developed the remote desktop protocol in order to allow remote GUI-
based access to hosts. It is easy to pick on Microsoft and I do respect what
they have accomplished, however, it may not come as a surprise that they did
it all wrong with RDP in terms of …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Microsoft developed the remote desktop protocol in order to allow remote GUI-
based access to hosts. It is easy to pick on Microsoft and I do respect what
they have accomplished, however, it may not come as a surprise that they did
it all wrong with RDP in terms of security.&lt;/p&gt;
&lt;p&gt;By default, out-of-the-box, a server or desktop supports RDP with RC4
encryption. Most recent servers and clients support 128-bits encryption.
Sounds secure, right? Well, unfortunately, it isn't. There are two major
security weaknesses that affect the remote desktop protocol.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Keystroke vulnerability&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The original version, RDP 4.0, is based on the T.128 protocol. Input events,
such as keystrokes, are sent to the server with a unique timestamp as
specified in the T.128 standard. This means that each message containing an
input event is unique and so will be the checksum generated from this input
event data. In other words: if two RDP messages contain the same data, they
will still be unique because of the timestamp. Microsoft did it right in RDP
version 4.0, but there was one drawback: the timestamp generated overhead,
making RDP messages relatively large.&lt;/p&gt;
&lt;p&gt;So in RDP version 5.0, Microsoft decided to drop the timestamp. Thus, if two
RDP messages contain the same data, their checksum will be identical. So if
the same key is pressed twice, the two packets that are generated will contain
different encrypted data segments but the checksums will be identical.
Although it is not clear which key is pressed, if a larger data set is
analyzed, patterns can be discovered and an accurate guess can be made as to
which keys are pressed.&lt;/p&gt;
&lt;p&gt;In a nutshell, an attacker that has access to the data stream of an RDP
session will be able to decode keystrokes and thus be able to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;obtain your username and password, allowing access to the system!&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;read what you type.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Please visit the following location as it provides a more detailed explanation
of this vulnerability:&lt;/p&gt;
&lt;p&gt;&lt;a href="http://www.xatrix.org/article.php?s=1943"&gt;http://www.xatrix.org/article.php?s=1943&lt;/a&gt;&lt;/p&gt;
&lt;ol start="2"&gt;
&lt;li&gt;Man-in-the-middle attack&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;At first, back in 2003, RDP-clients did not verify the identity of the RDP
server, so an attacker could easily spoof an identity and perform a
man-in-the-middle attack. Microsoft attempted to fix this vulnerability but did not
succeed. Although more recent RDP-clients do verify the identity of the
server, it is still possible to spoof this identity. Microsoft uses a fixed,
hard-coded key to identify the server with. This key resides in a system DLL
that is available to the general public. An attacker can easily obtain this
key, create a fake identity and still prove to be legitimate since the
identity is signed with a trusted key.&lt;/p&gt;
&lt;p&gt;More detailed information:&lt;/p&gt;
&lt;p&gt;&lt;a href="http://www.oxid.it/downloads/rdp-gbu.pdf"&gt;http://www.oxid.it/downloads/rdp-gbu.pdf&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;Given these vulnerabilities, RDP is, in its default configuration, virtually
as insecure as plain old telnet. Think about that the next time you log on with
administrator credentials using RDP.&lt;/p&gt;
&lt;h3&gt;Tool that implements the attacks&lt;/h3&gt;
&lt;p&gt;The well-known hack-tool Cain &amp;amp; Abel has a built-in RDP man-in-the-middle
implementation. It automatically decrypts (parts of) RDP data packets and
reveals the keystrokes that are sent to a server. So if you don't believe
this story and just want to see for yourself how easy such an attack really
is, download this tool and test it for yourself.&lt;/p&gt;
&lt;h3&gt;To defend against the attacks&lt;/h3&gt;
&lt;p&gt;To protect against these attacks, it is strongly advised to implement RDP based
on TLS/SSL encryption. Of course, you'll have to install a server-side SSL
certificate, and it may be necessary to obtain an official (as in not
self-signed) certificate.&lt;/p&gt;
&lt;p&gt;&lt;a href="http://technet2.microsoft.com/windowsserver/en/library/a92d8eb9-f53d-"&gt;http://technet2.microsoft.com/windowsserver/en/library/a92d8eb9-f53d-4e86-ac9b-29fd6146977b1033.mspx?mfr=true&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;So please understand the risks of using RDP, especially over the Internet. If
TLS is for some reason not an option, use an encrypted VPN-connection to
tunnel the RDP-session through, as an added layer of protection. Never use RDP in
its default configuration over the internet without additional protection.&lt;/p&gt;
&lt;p&gt;EDIT: In RDP version 6.0 the man-in-the-middle attack is no longer possible!&lt;/p&gt;
&lt;p&gt;EDIT2: Configure your terminal server to use FIPS-compliant encryption. It
will use Triple DES encryption even if TLS is not enabled.&lt;/p&gt;
&lt;blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;After you enable this setting on a Windows Server 2003-based computer, the
following is true:&lt;/p&gt;
&lt;/blockquote&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The RDP channel is encrypted by using the 3DES algorithm in Cipher Block
Chaining (CBC) mode with a 168-bit key length.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The SHA-1 algorithm is used to create message digests.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Clients must use the RDP 5.2 client program or a later version to
connect.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So you don't need to use TLS if you want to use a more secure algorithm than
RC4.&lt;/p&gt;
&lt;/blockquote&gt;
</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>The kind of sound you never want to hear from your computer</title><link href="https://louwrentius.com/the-kind-of-sound-you-never-want-to-hear-from-your-computer.html" rel="alternate"></link><published>2008-11-14T21:16:00+01:00</published><updated>2008-11-14T21:16:00+01:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2008-11-14:/the-kind-of-sound-you-never-want-to-hear-from-your-computer.html</id><content type="html">&lt;p&gt;&lt;a href="http://datacent.com/hard_drive_sounds.php"&gt;The sounds of hard drives that have perished&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;These sounds are only frightening or scary if you imagine your precious data
only exists on that failing drive. If you make consistent and frequent backups
and/or you run a fault-tolerant RAID flavour, it is but a nuisance. &lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>Is storage really that cheap?</title><link href="https://louwrentius.com/is-storage-really-that-cheap.html" rel="alternate"></link><published>2008-10-05T21:47:00+02:00</published><updated>2008-10-05T21:47:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2008-10-05:/is-storage-really-that-cheap.html</id><summary type="html">&lt;p&gt;Nowadays you can buy a 1 TB harddrive for less than 100 euro's . So I build
myself a 4 TB NAS box, which is already 50% full. However, although it is to
some degree fault-tollerant by using RAID 6, one mistake or catastrophic
hardware faillure and all data is lost …&lt;/p&gt;</summary><content type="html">&lt;p&gt;Nowadays you can buy a 1 TB hard drive for less than 100 euros. So I built
myself a 4 TB NAS box, which is already 50% full. However, although it is to
some degree fault-tolerant by using RAID 6, one mistake or catastrophic
hardware failure and all data is lost.&lt;/p&gt;
&lt;p&gt;And that's where the 'problem' starts. For every € spent on storage, you may
need another € to secure that storage.&lt;/p&gt;
&lt;p&gt;You can choose to take and accept the risk outlined earlier and not to make
backups. However, if you do want to make backups of terabytes of storage, how
are you going to pull that off without too much cost? &lt;/p&gt;
&lt;p&gt;In my opinion, the only reliable and usable solution is to build a second NAS
box and sync the two. Ideally, both machines reside at different locations, but
hey, I'm talking about a home solution, not a professional environment.
Although you might ask why on earth you need 4 TB of space at home in the first place.&lt;/p&gt;
&lt;p&gt;Anyway, the point I'm trying to make is that although storage in itself is
cheap, if you want to keep all that data safe, storage is far more expensive than 
you might think.&lt;/p&gt;</content><category term="Storage"></category><category term="Uncategorized"></category></entry><entry><title>Highpoint RocketRAID 2320 on Debian howto</title><link href="https://louwrentius.com/highpoint-rocketraid-2320-on-debian-howto.html" rel="alternate"></link><published>2008-09-28T08:06:00+02:00</published><updated>2008-09-28T08:06:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2008-09-28:/highpoint-rocketraid-2320-on-debian-howto.html</id><summary type="html">&lt;p&gt;Get the 'open source' driver from www.highpoint-tech.com. (It's not open
source, it uses a closed binary driver.)&lt;/p&gt;
&lt;p&gt;http://www.highpoint-tech.com/USA/bios_rr2320.htm&lt;/p&gt;
&lt;p&gt;Install the kernel headers if you haven't already&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;apt&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;linux&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;2.6&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;18&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;k7&lt;/span&gt;

&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;extract&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;downloaded&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;with&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;tar …&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</summary><content type="html">&lt;p&gt;Get the 'open source' driver from www.highpoint-tech.com. (It's not open
source, it uses a closed binary driver.)&lt;/p&gt;
&lt;p&gt;&lt;a href="http://www.highpoint-tech.com/USA/bios_rr2320.htm"&gt;http://www.highpoint-tech.com/USA/bios_rr2320.htm&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Install the kernel headers if you haven't already&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;apt&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;linux&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;2.6&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;18&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;k7&lt;/span&gt;

&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;extract&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;downloaded&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;with&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;tar&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;xzf&lt;/span&gt;

&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;product&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;rr232x&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;linux&lt;/span&gt;

&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;type&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;make&amp;#39;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;As of 28 September 2008, this gives no errors.&lt;/p&gt;
&lt;p&gt;However, a 'make install' will fail. This is caused by the fact that the
installation script uses mkinitrd, which is obsolete: the mkinitramfs command
should be used instead. So we have to edit a script.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update Dec 2009:&lt;/strong&gt; the latest drivers do not need the following steps anymore.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;vi&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;osm&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;linux&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;install&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sh&lt;/span&gt;

&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;replace&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;all&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;occurrences&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;of&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;mkinitrd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;with&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;mkinitramfs&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;press&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;and&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;type&lt;/span&gt;
&lt;span class="nf"&gt;%s&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;mkinitrd&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;mkinitramfs&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;g&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;and&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;then&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="n"&gt;wq&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
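&lt;p&gt;If you prefer not to open vi, the same substitution can be done non-interactively (a sketch; point it at the install.sh from the step above):&lt;/p&gt;

```shell
# fix_installer: replace every occurrence of mkinitrd with mkinitramfs,
# the non-interactive equivalent of the vi substitution above.
fix_installer() {
    sed 's/mkinitrd/mkinitramfs/g' "$1" > "$1.new"
    mv "$1.new" "$1"
}
```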

&lt;p&gt;&lt;strong&gt;Continue here:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Go&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;back&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;product&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;rr232x&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;linux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;directory&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;and&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;type&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;#39;&lt;/span&gt;&lt;span class="nx"&gt;make&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;install&lt;/span&gt;&lt;span class="err"&gt;&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;That's it.&lt;/p&gt;</content><category term="Hardware"></category><category term="Uncategorized"></category></entry><entry><title>Server status on your iPhone</title><link href="https://louwrentius.com/server-status-on-your-iphone.html" rel="alternate"></link><published>2008-09-12T12:19:00+02:00</published><updated>2008-09-12T12:19:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2008-09-12:/server-status-on-your-iphone.html</id><content type="html">&lt;p&gt;Totally useless but fun nonetheless. I made an iPhone-compatible webpage
showing the current status of my storage server. It is based on some shell
scripts and some Python, using an iPhone-specific website template.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://louwrentius.files.com/2008/09/serverstats.jpg"&gt;&lt;img alt="" src="https://louwrentius.files.com/2008/09/serverstats.jpg?w=200" /&gt;&lt;/a&gt;&lt;/p&gt;</content><category term="Uncategorized"></category><category term="Uncategorized"></category></entry><entry><title>Building a RAID 6 array of mixed drives</title><link href="https://louwrentius.com/building-a-raid-6-array-of-mixed-drives.html" rel="alternate"></link><published>2008-08-10T11:36:00+02:00</published><updated>2008-08-10T11:36:00+02:00</updated><author><name>Louwrentius</name></author><id>tag:louwrentius.com,2008-08-10:/building-a-raid-6-array-of-mixed-drives.html</id><summary type="html">&lt;p&gt;To be honest, 4 TB of storage isn't really necessary for home usage.
However, I like to collect movies in full DVD or HD quality and so I need some
storage.&lt;/p&gt;
&lt;p&gt;I decided to build myself a NAS box based on Debian Etch. Samba is used to
allow clients to …&lt;/p&gt;</summary><content type="html">&lt;p&gt;To be honest, 4 TB of storage isn't really necessary for home usage.
However, I like to collect movies in full DVD or HD quality and so I need some
storage.&lt;/p&gt;
&lt;p&gt;I decided to build myself a NAS box based on Debian Etch. Samba is used to
allow clients to access the data. The machine itself was initially based on 4
x 0.5 TB disks using the four SATA ports on the mainboard. With Linux's built-in
support for software RAID, I created a RAID 5 array, giving me 1.5 TB of
storage space. Since a single movie is around 4 GB, the 1.5 TB turned out to
be rather tight.&lt;/p&gt;
&lt;p&gt;So I bought 4 x 1 TB disks and a Highpoint RocketRaid 2320 controller (SATA
4x). I put all 8 disks on this controller.&lt;/p&gt;
&lt;p&gt;I wanted to create a single RAID 6 array using both the 1 TB disks and the 0.5
TB disks. I didn't want to create two separate arrays because although that
would have provided additional space, it wouldn't have given me the same
safety level as RAID 6 does.&lt;/p&gt;
&lt;p&gt;I mainly chose RAID 6 because I cannot afford a backup solution for this
amount of data. I'm aware that RAID is no substitute for a proper backup, but
it's an accepted risk for me.&lt;/p&gt;
&lt;p&gt;So how do you create a single RAID 6 array from a mix of 1 TB and 0.5 TB
disks? The solution is fairly simple: put two 0.5 TB disks together in one
RAID 0 volume and you have a 'virtual' 1 TB disk. Since I had four 0.5 TB
disks, I could create two 'virtual' 1 TB disks.&lt;/p&gt;
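&lt;p&gt;In mdadm terms this boils down to something like the following sketch. The
device names for the 0.5 TB disks (/dev/sde through /dev/sdh) are just
examples; the member names of the final array match the mdadm output further
down this post.&lt;/p&gt;

```shell
# Pair the four 0.5 TB disks into two RAID 0 sets,
# each acting as a 'virtual' 1 TB disk
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sde1 /dev/sdf1
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdg1 /dev/sdh1

# Build the RAID 6 array from the four native 1 TB disks
# plus the two 'virtual' 1 TB devices
mdadm --create /dev/md5 --level=6 --chunk=128 --raid-devices=6 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/md0 /dev/md1
```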
&lt;p&gt;&lt;a href="/static/images/raid6scheme.png"&gt;&lt;img alt="" src="/static/images/raid6scheme-small.png" /&gt;&lt;/a&gt;The only downside is that I had to skim a little bit of storage
capacity of the native 1 TB drives, because 2 x 0.5 TB provides slightly less
storage space than a single 1 TB disk. We're talking about something like 50
MB here, so It's not a big deal in my opinion. &lt;/p&gt;
&lt;p&gt;The funny thing is that this array actually performs rather well. The disks
are connected through a HighPoint RocketRaid 2320 controller, which is
used just for its SATA ports; the on-board RAID functionality is not used.
For RAID, I use Linux software RAID, using mdadm. This is what the RAID 6
array looks like:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;        server:~# mdadm --detail /dev/md5
        /dev/md5:
        Version : 00.90.03
  Creation Time : Thu Jul 24 22:40:26 2008
     Raid Level : raid6
     Array Size : 3906359808 (3725.40 GiB 4000.11 GB)
    Device Size : 976589952 (931.35 GiB 1000.03 GB)
   Raid Devices : 6
  Total Devices : 6
Preferred Minor : 5
    Persistence : Superblock is persistent
    Update Time : Sun Aug 10 15:36:18 2008
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0
     Chunk Size : 128K
           UUID : 0442e8fa:acd9278e:01f9e43d:ac30fbff (local to host server)
         Events : 0.14170

Number   Major   Minor   RaidDevice State
   0       8        1        0      active sync   /dev/sda1
   1       8       17        1      active sync   /dev/sdb1
   2       8       33        2      active sync   /dev/sdc1
   3       8       49        3      active sync   /dev/sdd1
   4       9        0        4      active sync   /dev/md0
   5       9        1        5      active sync   /dev/md1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
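&lt;p&gt;The reported array size is easy to verify: RAID 6 spends two members' worth
of space on parity, so six members of roughly 1 TB each yield four members'
worth of usable capacity. In shell arithmetic (mdadm reports sizes in 1K
blocks):&lt;/p&gt;

```shell
# RAID 6 usable capacity = (members - 2) x member size
device_size=976589952    # per-member size in 1K blocks, from mdadm
raid_devices=6
array_size=$(( device_size * (raid_devices - 2) ))
echo "$array_size"       # 3906359808, the Array Size mdadm reports
```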

&lt;p&gt;And this is how this array performs:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;code&gt;server:~# dd if=/storage/test.bin of=/dev/null bs=1M

10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 45.7107 seconds, 229 MB/s

server:~# dd if=/dev/zero of=/storage/test.bin bs=1M count=10000

10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 81.0798 seconds, 129 MB/s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
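&lt;p&gt;The MB/s figures are simply what dd computes: bytes copied divided by
elapsed seconds, with 1 MB taken as 10^6 bytes. Reproducing them with awk:&lt;/p&gt;

```shell
# bytes / seconds / 10^6 gives the MB/s values dd printed above
awk 'BEGIN { printf "read:  %d MB/s\n", 10485760000 / 45.7107 / 1000000 }'
awk 'BEGIN { printf "write: %d MB/s\n", 10485760000 / 81.0798 / 1000000 }'
```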

&lt;p&gt;With 229 MB/s read performance and 129 MB/s write performance using RAID 6, I
think I should be content.&lt;/p&gt;</content><category term="Storage"></category><category term="Uncategorized"></category></entry></feed>