1. Red Hat Explains Why Chroot Is Not a Security Feature

    Wed 07 August 2013

    I came across this Red Hat security blog post that explains why the chroot command has its uses, but isn't magic security pixie dust. Running an application inside a chroot jail provides about the same level of security as running it on a well-configured system.

    Josh Bressers:

    Putting a regular user in a chroot() will prevent them from having access to the rest of the system. This means using a chroot is not less secure, but it is not more secure either. If you have proper permissions configured on your system, you are no safer inside a chroot than relying on system permissions to keep a user in check. Of course you can make the argument that everyone makes mistakes, so running inside a chroot is safer than running outside of one where something is going to be misconfigured. This argument is possibly true, but note that setting up a chroot can be far more complex than configuring a system. Configuration mistakes could lead to the chroot environment being less secure than non-chroot environments.

    In the past I've tried to set up a chroot for an application and it was a pain. If you want to do it well, it takes quite some effort, and every application has its own requirements. But why spend all this effort?
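    To give an idea of the effort: even a single binary needs its shared libraries mirrored inside the jail before it will run. A minimal sketch of just that step (assuming Linux and glibc's ldd; the function name and paths are my own examples, and actually chrooting requires root):

    ```shell
    # populate_jail JAIL BINARY - copy BINARY plus every shared library
    # that ldd reports into JAIL, mirroring the original paths.
    populate_jail() {
        jail=$1
        bin=$2
        mkdir -p "$jail$(dirname "$bin")"
        cp "$bin" "$jail$bin"
        # ldd prints lines like "libc.so.6 => /lib/.../libc.so.6 (0x...)";
        # pull out the absolute paths and copy each one into the jail.
        for lib in $(ldd "$bin" | grep -o '/[^ ]*'); do
            mkdir -p "$jail$(dirname "$lib")"
            cp "$lib" "$jail$lib"
        done
    }

    # e.g. populate_jail /srv/jail /bin/sh && chroot /srv/jail /bin/sh
    ```

    And that only covers libraries; device nodes, configuration files and sockets the application expects are all extra work on top.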

    Josh continues:

    it may not be possible to break out of the chroot, but the attacker can still use system resources, such as for sending spam, gaining local network access, joining the system to a botnet, and so on.

    A chroot jail hides the rest of the 'real' file system. But the file system is just one part of the security equation: an attacker who has compromised the chrooted application can still execute arbitrary code. Not as the root user, fair enough, but does that really hinder the attacker? They have already gained a stepping stone from which to pivot into the rest of the network¹. As a non-privileged user, the attacker can try to exploit local kernel vulnerabilities to gain root access or stage attacks on other hosts through the network.

    If you run some kind of forum or bulletin board, it is probably more likely that this software will be compromised than the web server itself. And the result is often the same: arbitrary code execution with the privileges of the web server software. So the attacker controls the application and thus all its content, including email addresses and password hashes.

    A chroot jail does not provide any additional security in this scenario. It may make it a bit more difficult to access the rest of the file system, but if the attacker only has access as an unprivileged user and file system permissions are set properly, is there any benefit?

    I believe it is wiser to invest your time in configuring proper file system permissions and propagating them through Puppet, Chef or Ansible, and to run some scripts to audit and validate those permissions.
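    As a sketch of one such audit check (a hypothetical example of my own, not a complete audit): flag regular files under a given root that are writable by 'other'.

    ```shell
    # audit_world_writable DIR - print regular files under DIR that are
    # world-writable (mode bit 0002), staying on one file system (-xdev).
    audit_world_writable() {
        find "$1" -xdev -type f -perm -0002 -print
    }

    # e.g. audit_world_writable /var/www
    ```

    Run from cron and diffed against a known-good baseline, a handful of checks like this catch permission drift early.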

    Update

    If an application supports chroot, it might still be wise to enable it. It is often very easy to configure and will probably delay an attacker.


    ¹ If you have implemented network segmentation properly and have a sane firewall, the impact could be limited. 

    Tagged as : Security Chroot
  2. Overview of Open-Source Load Balancers

    Wed 07 August 2013

    I was looking at open-source load-balancing software and it seems there isn't a good overview apart from this website, although many of the projects listed there seem dead.

    I've made a selection of products that seem relevant. The biggest problem with open-source software is that projects get abandoned or unmaintained, so I created the table below and added a 'last product update' column, which gives you a feel for how active each project is.

    Product                  Last product update
    Nginx                    July 2013
    Lighttpd                 November 2012
    HAProxy                  June 2013
    Pound                    December 2011
    Varnish                  June 2013
    Zen Load Balancer        February 2013
    Apache                   July 2013
    Linux Virtual Server     Unmaintained?
    XLB HTTP Load Balancer   February 2009
    Octopus Load Balancer    November 2011
    Squid                    July 2013

    Date of measurement: August 2013

    I currently don't have hands-on experience with these products. Some of them are briefly discussed on this blog - worth a visit.

    There are many more products, but most seem to have been abandoned years ago. If you feel other products are noteworthy but missing from this list, feel free to contact me or leave a comment.

    It seems that the top three web servers - Nginx, Apache and Lighttpd - all support load balancing. Whether you invest in other products or stick with the web server software you already know depends on your needs, time and knowledge.
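    In Nginx, for instance, basic load balancing takes little more than an upstream block (a minimal sketch; the backend name, addresses and ports are placeholders, and Nginx distributes requests round-robin by default):

    ```nginx
    http {
        upstream backend {
            # requests are spread over these servers round-robin by default
            server 10.0.0.11:8080;
            server 10.0.0.12:8080;
        }
        server {
            listen 80;
            location / {
                proxy_pass http://backend;
            }
        }
    }
    ```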

    At this location some people discuss the pros and cons of commercial off-the-shelf products versus home-grown open-source solutions.

    Tagged as : load-balancing
  3. I Switched My Blog From Blogofile to Pelican

    Tue 06 August 2013

    This blog is a static website, which makes it fast, simple and secure. It was generated by Blogofile but I switched to Pelican.

    Blogofile has seen almost no updates over the years and I consider the project dead, so I decided to look around for other open-source static blog generators.

    There are many static blog generators besides Pelican. But Pelican is well documented, is based on Python, is very actively maintained (good track record) and supports all the features I wanted, such as Atom/RSS feeds, Disqus comments and Google Analytics.

    My blog posts are written in Markdown. This made it very easy to migrate from Blogofile to Pelican, as Pelican also supports Markdown. Blogofile uses a different header format that Pelican does not recognise, so you have to search and replace some keywords in all your files before Pelican can actually generate your new website.
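    To illustrate the difference (a reconstruction from memory, not my actual posts): Blogofile wraps its metadata in '---' markers with quoted values, while Pelican reads plain key/value lines:

    ```text
    Blogofile header:                  Pelican header:

    ---                                title: Some Blog Post
    title: "Some Blog Post"            date: 2013-08-06 20:00
    date: 2013/08/06 20:00:00          category: Blog
    categories: blog, software
    ---
    ```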

    I wrote this horrible bash shell for-loop to process all my blog posts:

        :::bash
        # Convert the Blogofile header of every Markdown post to Pelican metadata.
        for x in *.md
        do
            # Strip the quotes Blogofile puts around the title.
            TITLE=$(grep -i "title:" "$x")
            TITLEFIXED=$(echo "$TITLE" | sed 's/"//g')
            # Rewrite the date from 2013/08/06 to 2013-08-06 and drop the seconds.
            DATE=$(grep -i "date:" "$x")
            DATEFIXED=$(echo "$DATE" | sed "s/\//-/g" | cut -d ":" -f 1,2,3)
            # Keep only the first category, rename 'categories' to 'category'
            # and capitalize each word (the last step needs GNU sed for \u).
            CATEGORY=$(grep -i "categories:" "$x")
            CATEGORYFIXED=$(echo "$CATEGORY" | sed 's/"//g' | sed 's/categories/category/g' | cut -d "," -f 1 | /usr/local/bin/sed -e "s/\b\(.\)/\u\1/g")
            # Write the new header, then append the body minus the old header lines.
            echo "$TITLEFIXED" > tmp.txt
            echo "$CATEGORYFIXED" >> tmp.txt
            echo "$DATEFIXED" >> tmp.txt
            grep -v "title:" "$x" | grep -v -e '---' | grep -v -i "date:" | grep -v -i "categories:" >> tmp.txt
            mv tmp.txt "$x"
        done
    

    Notice how Pelican's built-in syntax highlighting applies nice colors to this horrible code. Regarding this horrible code: I had to use GNU sed because the Mac OS X sed did not support the regular expression I used.

    To enable comments on my blog posts I have always used Disqus with Blogofile. Pelican generates web pages at different URLs than Blogofile, so all old pages need to be redirected to their new locations. I used the redirect functionality of Lighttpd to redirect all existing pages.
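    In Lighttpd this boils down to mod_redirect rules (a sketch; the URL patterns below are placeholders, not my actual Blogofile and Pelican paths, and Lighttpd issues a 301 by default):

    ```lighttpd
    server.modules += ( "mod_redirect" )

    url.redirect = (
        # old Blogofile-style URL => new Pelican location
        "^/2013/08/06/some-post/?$" => "/some-post.html"
    )
    ```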

    The cool thing is that Disqus has a tool called the "Redirect Crawler". If you have configured 301 "permanent" redirects for all pages and run this tool, Disqus will automatically update all existing comment threads to the new locations, so your comments are migrated to the new web pages.

    Furthermore, I've implemented a Pelican plugin called titlecase, which capitalizes the first letter of each word in an article's title. I just think it looks better.

    I'm really happy with Pelican.
