Articles in the Security category

  1. Systemd Forward Secure Sealing of System Logs Makes Little Sense

    November 22, 2014

    Systemd is a modern replacement for sysvinit, and it is in the process of being integrated into most mainstream Linux distributions. I'm a bit troubled by one of its features.

    I'd like to discuss the Forward Secure Sealing (FSS) feature for log files that is part of systemd. FSS cryptographically signs the local system logs, so you can check if log files have been altered. This should make it more difficult for an attacker to hide his or her tracks.

    Regarding log files, an attacker can do two things:

    1. delete them
    2. alter them (remove / change incriminating lines)

    The FSS feature does not prevent either of these attacks. But it does help you detect that something fishy is going on, provided you verify the signatures regularly. So basically, FSS acts a bit like Tripwire.

    FSS can only tell you whether or not a log file has been changed. It cannot tell you anything else. More specifically, it cannot tell you why. So I wonder how valuable this feature is.

    There is also something else. Signing (sealing) a log file is done every 15 minutes by default. This gives an attacker ample time to alter or delete the most recent log events, often exactly those events that need to be altered/deleted. Even lowering this number to 10 seconds would allow an attacker to delete (some) initial activities using automation. So how useful is this?
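    For reference, as far as I can tell from the journalctl man page, enabling and verifying FSS boils down to two commands. The 15-minute default mentioned above corresponds to the --interval option at key-generation time (the 10-second value below is just an example):

```
# Generate an FSS key pair; journald starts sealing with it.
# The verification key is displayed once (also as a QR code) and
# should be stored off-host.
journalctl --setup-keys --interval=10s

# Later, check whether the sealed journal has been tampered with:
journalctl --verify --verify-key=<verification key>
```

    Note that even a 10-second interval only narrows, and never closes, the window in which the most recent events can be altered undetected.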

    What may help in determining what happened to a system is the unaltered log contents themselves. What FSS cannot do, in principle, is protect the actual contents of the log file. If you want to preserve log events, the only secure option is to send them to an external log host (assuming it is not accessible to the attacker).
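    For completeness: with a classic syslog daemon such as rsyslog, forwarding everything to a remote host is a one-line configuration (loghost.example.com is a placeholder; a single @ would use UDP instead of TCP):

```
# /etc/rsyslog.d/remote.conf -- forward all log events to an external log host
*.*  @@loghost.example.com:514
```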

    However, to my surprise, FSS is presented as an alternative to external logging. Quote from Lennart Poettering:

    Traditionally this problem has been dealt with by having an external secured log server 
    to instantly log to, or even a local line printer directly connected to the log system. 
    But these solutions are more complex to set up, require external infrastructure and have 
    certain scalability problems. With FSS we now have a simple alternative that works without 
    any external infrastructure.
    

    This quote is quite troubling because it fails to acknowledge one of the raisons d'ĂȘtre of external log hosts. It suggests that FSS provides an alternative to external logging, where in fact it does not and cannot do so in principle. It can never address the fact that an attacker can alter or delete logs, whereas external logging can mitigate this risk.

    It seems to me that systemd now also wants to play the role of a crude intrusion detection system. It feels a bit like scope creep to me.

    Personally I just wonder what more useful features could have been implemented instead of allowing you to transfer a log file verification key using a QR code to your smartphone (What the hell?).

    This observation is not original: in the comments on the systemd author's blog post, the same argument was made by Andrew Wyatt (two years earlier). The systemd author's response was to block him (see the comments on Lennart Poettering's blog post I linked to earlier).

    Update: Andrew Wyatt behaved a bit immaturely towards Lennart Poettering at first, so I understand some resentment on Lennart's side, but Andrew's criticism was valid and was never addressed.

    If the systemd author had just implemented sending log events to an external log server, that would have been way more useful security-wise, I think. Until then, this may do...

    Tagged as : Logging
  2. Why You Should Not Use IPsec for VPN Connectivity

    January 28, 2014

    IPsec is a well-known and widely-used VPN solution. It seems that it's not widely known that Niels Ferguson and Bruce Schneier performed a detailed security analysis of IPsec and that the results were not very positive.

    We strongly discourage the use of IPsec in its current form for protection of any kind of valuable information, and hope that future iterations of the design will be improved.
    

    I conveniently left out the second part:

    However, we even more strongly discourage any current alternatives, and recommend IPsec when the alternative is an insecure network. Such are the realities of the world.
    

    To put this in context: keep in mind that this paper was released in 2003 and the actual research may be even older (1999!). OpenVPN, an open-source SSL-based VPN solution, was born in 2001 and was still maturing in 2003. So there was no real alternative back then.

    It worries me that this research by Ferguson and Schneier is more than a decade old. I've been looking for more recent articles on the current security status of IPsec, but I couldn't find much. Some new RFCs have been published about IPsec, but I'm not familiar enough with the material to understand the implications. Ferguson and Schneier make a lot of recommendations in the paper to improve IPsec security, but have they actually been implemented?

    I did find a presentation from 2013 by Peter Gutmann (University of Auckland). Based on his Wikipedia page, he seems to 'have some knowledge' about cryptography. The presentation addresses the Snowden leaks about the NSA and also touches on IPsec. He basically relies on the paper written by Ferguson and Schneier.

    But let's think about this: Ferguson and Schneier criticise the design of IPsec. It is flawed by design. That's one of the worst criticisms anything related to cryptography can get. That design has probably not changed much, from what I understand. So if their critique of IPsec is still mostly valid, that is all the more reason not to use IPsec.

    So this is part of the conclusion and it doesn't beat around the bush:

    We have found serious security weaknesses in all major components of IPsec.
    As always in security, there is no prize for getting 90% right; you have to get
    everything right. IPsec falls well short of that target, and will require some major
    changes before it can possibly provide a good level of security.
    What worries us more than the weaknesses we have identified is the complexity
    of the system. In our opinion, current evaluation methods cannot handle
    systems of such a high complexity, and current implementation methods are not
    capable of creating a secure implementation of a system as complex as this.
    

    So if not IPsec, what should you use? I would opt to use an SSL/TLS-based VPN solution like OpenVPN.

    I can't vouch for the security of OpenVPN, but the well-known Dutch security firm Fox-IT has released a stripped-down version of the OpenVPN software (with features removed) that they consider fit for (Dutch) governmental use. Not to say that you should use that particular OpenVPN version: the point is that OpenVPN is deemed secure enough for governmental use. For whatever that's worth.

    SSL-based VPN solutions at least have the benefit that they use SSL/TLS, which may have its own problems, but is not nearly as complex as IPsec.
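    To give an idea of the setup effort involved, a minimal OpenVPN server configuration looks roughly like this (a sketch only: file names and the address pool are placeholders, and hardening options such as tls-auth are worth adding on top):

```
# server.conf -- minimal OpenVPN server sketch
port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh2048.pem
server 10.8.0.0 255.255.255.0   # address pool handed out to clients
keepalive 10 120
user nobody
group nogroup
persist-key
persist-tun
```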

    Tagged as : IPsec security
  3. Redhat Explains Why Chroot Is Not a Security Feature

    August 07, 2013

    I came across this Red Hat security blog post that explains that the chroot command has its uses, but it isn't magic security pixie dust. Running an application from within a chroot jail or just on a well-configured system results in the same level of security.

    Josh Bressers:

    Putting a regular user in a chroot() will prevent them from having access to the rest of the system. This means using a chroot is not less secure, but it is not more secure either. If you have proper permissions configured on your system, you are no safer inside a chroot than relying on system permissions to keep a user in check. Of course you can make the argument that everyone makes mistakes, so running inside a chroot is safer than running outside of one where something is going to be misconfigured. This argument is possibly true, but note that setting up a chroot can be far more complex than configuring a system. Configuration mistakes could lead to the chroot environment being less secure than non-chroot environments.

    In the past I've tried to set up a chroot for an application and it was a pain. If you want to do it well, it takes quite some effort, and every application has its own requirements. But why spend all this effort?
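    To illustrate that effort, here is a rough sketch of hand-building a minimal jail for a single shell binary. The paths are examples; a real application additionally needs its config files, device nodes and any helper binaries it calls:

```shell
# Build a throwaway jail containing /bin/sh and its shared libraries.
JAIL=$(mktemp -d)            # in practice something like /srv/jail
mkdir -p "$JAIL/bin"
cp /bin/sh "$JAIL/bin/"

# Copy every shared library the binary links against.
for lib in $(ldd /bin/sh | grep -o '/[^ )]*'); do
    mkdir -p "$JAIL$(dirname "$lib")"
    cp "$lib" "$JAIL$lib"
done

# Entering the jail requires root:
#   chroot "$JAIL" /bin/sh
```

    And this is the easy case: a dynamically linked shell with no further dependencies. Getting the library list wrong, or forgetting to re-copy libraries after an update, is exactly the kind of misconfiguration the Red Hat post warns about.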

    Josh continues:

    it may not be possible to break out of the chroot, but the attacker can still use system resources, such as for sending spam, gaining local network access, joining the system to a botnet, and so on.

    A chroot jail hides the rest of the 'real' file system. But the file system is just one part of the security equation: an attacker who compromised the chrooted application can still execute arbitrary code. Not as the root user, fair enough, but does that really hinder the attacker? The attacker has already gained a stepping stone to pivot into the rest of the network1. As a non-privileged user, the attacker can try to exploit local kernel vulnerabilities to gain root access or stage attacks on other hosts through the network.

    If you run some kind of forum or bulletin board, it is probably more likely that this software will be compromised than the web server itself. And the result is often the same: arbitrary code execution with the privileges of the web server software. So the attacker controls the application and thus all its content, including email addresses and password hashes.

    A chrooted jail does not provide any additional security in this scenario. It may be a bit more difficult to access the rest of the file system, but if the attacker has access as an unprivileged user and file system permissions are set properly, is there a benefit?

    I believe it is wiser to invest your time in configuring proper file system permissions and propagating them through Puppet, Chef or Ansible. And run some scripts to audit/validate file system permissions.
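    Such an audit script can start out very simple, for example by flagging world-writable files. The demo tree below is made up purely for illustration; in practice you would point the find command at something like /var/www:

```shell
# Create a small demo tree with one deliberate misconfiguration.
WEBROOT=$(mktemp -d)             # stand-in for e.g. /var/www
touch "$WEBROOT/config.php"
chmod 644 "$WEBROOT/config.php"  # fine: owner-writable only
touch "$WEBROOT/upload.tmp"
chmod 666 "$WEBROOT/upload.tmp"  # world-writable: should be flagged

# Report world-writable regular files (octal -0002 = "others may write").
find "$WEBROOT" -xdev -type f -perm -0002 -print
```

    Running this from cron and diffing the output against a known-good baseline already catches the most common permission drift.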

    Update

    If applications support chroot, it might still be wise to enable it. It's often very easy to configure and it will probably delay an attacker.


    1. If you implemented network segmentation properly and have a sane firewall, the impact could be limited. 

    Tagged as : Security Chroot
  4. Don't Use Cloud Services if You Care About Secrecy of Your Data

    June 30, 2013

    When you use cloud services, you are storing your data on other people's hard drives. The moment you put your data within a cloud service, that data is no longer under your control. You don't know who will access that data. Secrecy is lost.

    Instead of using services like Gmail, you may opt to set up a virtual private server and run your own email server, but that doesn't change a thing. The cloud provider controls the hardware; they have access to every bit you store on their platform.

    If you encrypt the hard drive of your VPS, you need to enter the encryption password every time you reboot your VPS. And how can you remotely type in that password? Through the VPS console: a piece of software written by, and under the control of, your cloud provider. They can snoop on every character you enter.

    This may all sound far-fetched but it's about the principle of how things work. If you store unencrypted data on hardware that is not owned by you and under your physical control, that data cannot be trusted to stay secret.

    If you care about the secrecy of your data, you should never store it with a cloud provider or any other third party.

    I believe that the price you have to pay for any decent secrecy of your data is to run your own physical server. This is way more expensive in terms of time and money than using a cloud service, so it's up to you if it's worth it.

    Although your own server will probably prevent your data from being swept up in dragnet government surveillance, it will still be difficult, if not impossible, to protect yourself from a targeted investigation by a government agency.

    A government agency can obtain physical access to your server, and physical access is often the deathblow to any secrecy / security. Even if you implement encryption the right way, you are only decreasing their chances of accessing your data, not eliminating them.

    And in the end, a $5 wrench will probably do wonders for them. It seems that it even does wonders against encrypted hidden volumes.

    But there may still be a small benefit. If a government agency requires a cloud service provider to hand over your data, they can do so without your knowledge. A gag order will prohibit the cloud provider from informing you. However, if the servers are your own and are located within a building you own, either privately or as a company, you are at least aware of what's happening. That may or may not be relevant to you, that's up to you to decide.

  5. Linode Hacked: The Dark Side of Cloud Hosting

    April 16, 2013

    Linode has released an update about the security incident first reported on April 12, 2013.

    The Linode Manager is the environment where you control your virtual private servers and where you pay for services. This is the environment that got compromised.

    Linode uses Adobe's ColdFusion as a platform for their Linode Manager application. It seems that the ColdFusion software was affected by two significant, previously unknown vulnerabilities that allowed attackers to compromise the entire Linode VPS management environment.

    As the attackers had control over the virtual private servers hosted on the platform, they decided to compromise the VPS used by Nmap. Yes, the famous port scanner.

    Fyodor's remark about the incident:

    I guess we've seen the dark side of cloud hosting.
    

    That's the thing. Cloud hosting is just an extra layer, an extra attack surface, that may provide an attacker with the opportunity to compromise your server and thus your data.

    Even the author of Nmap, a person fairly conscious about security and aware of the risks of cloud hosting, still took the risk to save a few bucks and the time of setting something up himself.

    If you are a Linode customer and consider becoming a former customer by fleeing to another cheap cloud VPS provider, are you really sure you are solving your problems?

    When using cloud services, you pay less and you outsource the chores that come with hosting on a dedicated private server.

    You also lose control over security.
    

    Cloud hosting is just storing your data on 'Other People's Hard Drives'. So the security of your stuff depends on those 'other people'. But did you ask those 'other people' for any information about how they intend to address risks like zero-days or other security threats? Or did you just consider their pricing, give them your credit card and get on with your life?

    If you left Linode for another cloud VPS provider, what assures you that they will do better? How do you know they aren't already compromised right now? At this very moment? Feeling paranoid yet?

    We all want cheap hosting, but are you also willing to pay the price when the cloud platform is compromised?

Page 1 / 5