1. The Minimum Requirements for a Secure System

    Fri 26 November 2010

    The most secure server system is a system that is not connected to a network and turned off. However, little work seems to be getting done this way. So we want to turn systems on and connect them to a network, or even (God forbid) the internet.

    The thing is this: a system connected to a network without any running services is almost as secure as a system that is turned off. They also share a common property: they are useless. A system only starts to become useful when you run services on it and make those services accessible over the network to clients.

    Services

    Security on a technical level is all about securing those services. Every service that you enable is an opportunity for an attacker to compromise your system. If a service is not installed or running on your system, it cannot be used to compromise your server.

    If a service is enabled and accessible over the network, it is of vital importance that you know:

    1. what does this service do?
    2. what can it be used for?
    3. what steps need to be taken to properly secure it?

    If you know what a service does, you can understand the potential security risks. If you understand the product you are using, you can secure it properly. Security is all about understanding. If you don't understand what you are running, then it can't be secure.
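
    To make this concrete, here is a minimal sketch of how you might check what is actually listening on a host. It assumes the third-party psutil package is available (ss -tlnp or netstat -tlnp gives you the same information from the command line):

        import psutil  # third-party package, assumed to be installed

        def listening_services():
            """Yield (address, port, process name) for every listening TCP socket."""
            for conn in psutil.net_connections(kind="tcp"):
                if conn.status != psutil.CONN_LISTEN:
                    continue
                name = "?"
                if conn.pid:
                    try:
                        name = psutil.Process(conn.pid).name()
                    except psutil.NoSuchProcess:
                        pass
                yield conn.laddr.ip, conn.laddr.port, name

        if __name__ == "__main__":
            # Run as root to see the owning process of every socket.
            for ip, port, name in sorted(listening_services(), key=lambda s: s[1]):
                # Anything bound to 0.0.0.0 or :: is reachable on every interface.
                print("%15s:%-5d %s" % (ip, port, name))

    Every line of that output should map to a service for which you can answer the three questions above.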

    Firewalls

    So if you only run required services, why do you need to run a firewall? You don't. Yes, that's right. Think about it. A firewall protects services that should not be accessible and allows access to services that should be accessible. If you just disable the services that should not be accessible from the outside, why use a local firewall? You don't want the Internet to access the SNMP service on your system, you say? But then why not bind it only to the management interface instead of the production interface? You have a separate management network, right?
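
    For example, binding a service only to the management address is usually a one-line configuration decision. A hypothetical sketch in Python; the 192.168.100.10 management address and the port are placeholders:

        import socketserver

        # Placeholder address: 192.168.100.10 is the management interface;
        # binding to 0.0.0.0 instead would expose the port on every interface.
        MGMT_ADDR = ("192.168.100.10", 8161)

        class EchoHandler(socketserver.BaseRequestHandler):
            """Stand-in for a management-only service (SNMP, metrics, admin API)."""
            def handle(self):
                data = self.request.recv(1024)
                self.request.sendall(data)

        if __name__ == "__main__":
            # The production network never sees this port, firewall or no firewall.
            with socketserver.TCPServer(MGMT_ADDR, EchoHandler) as server:
                server.serve_forever()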

    Of course, firewalls are a good thing. They are an ADDITIONAL line of defense. They mostly protect you against yourself. If you make a mistake and, by accident, enable some vulnerable service on a system, a properly configured firewall will prevent access to it and save your behind. That is the purpose of a firewall.
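
    In practice that additional layer can be as simple as a default-deny policy with explicit holes for the services that should be reachable. A sketch assuming a Linux host with iptables; the ports and the management subnet are just examples:

        import subprocess

        # Allow rules first, default-deny last, so we don't cut off our own session.
        RULES = [
            ["iptables", "-A", "INPUT", "-i", "lo", "-j", "ACCEPT"],
            ["iptables", "-A", "INPUT", "-m", "conntrack",
             "--ctstate", "ESTABLISHED,RELATED", "-j", "ACCEPT"],
            ["iptables", "-A", "INPUT", "-p", "tcp",
             "--dport", "443", "-j", "ACCEPT"],                       # the web service
            ["iptables", "-A", "INPUT", "-p", "tcp", "-s", "192.168.100.0/24",
             "--dport", "22", "-j", "ACCEPT"],                        # SSH from management net only
            ["iptables", "-P", "INPUT", "DROP"],                      # everything else: drop
        ]

        if __name__ == "__main__":
            for rule in RULES:
                subprocess.run(rule, check=True)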

    People often see the firewall as the first line of defense. It isn't. The first line of defense is to secure your services.

    The whole point is that there are holes in your firewalls. Those holes allow access to services. Those services may be necessary, like a web server, but they are holes nevertheless. You are exposing services to the Internet.

    Web applications (or web-based back doors?)

    We now mostly run web-based applications on the services that we make accessible to the network or the internet. Those applications run on application servers. Yes, these application servers, like Apache Tomcat or IIS with ASP.NET, need to be secured, but nowadays they are almost secure by default.

    All security depends on the level of security of the application you are running on your application server. Is your application written well, with security principles in mind? Does it protect against SQL injection or cross-site scripting? Are sessions predictable? Can a user access the data of another user?

    Firewalls don't protect against vulnerabilities in your web applications. You need to get it right at the core: the application itself. Just as you harden a system, you must run secure code.
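
    The classic example of getting this right at the application level is the parameterized query. A small sketch using Python's built-in sqlite3 module:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.org')")

        def find_user_unsafe(name):
            # Vulnerable: user input is pasted into the SQL text, so an input
            # like "' OR '1'='1" changes the meaning of the query.
            return conn.execute(
                "SELECT email FROM users WHERE name = '%s'" % name).fetchall()

        def find_user_safe(name):
            # Parameterized: the driver treats the input strictly as data.
            return conn.execute(
                "SELECT email FROM users WHERE name = ?", (name,)).fetchall()

        print(find_user_unsafe("' OR '1'='1"))  # returns every row
        print(find_user_safe("' OR '1'='1"))    # returns nothing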

    If you run third-party code, be aware of security news. There have been many worms exploiting vulnerable commodity software such as phpBB, WordPress and similar products.

    This is the really hard part: deploying secure software and keeping it secure throughout the development life cycle.

    Patches

    The last fundamental principle of keeping systems secure is keeping up with security patches. Many security vulnerabilities are only exploitable under specific conditions and may not be that important. But the most important thing is to be aware of vulnerabilities and the available patches. Then you can decide for yourself how to act.

    There is always a risk that a security patch breaks functionality. But that's not a real problem, because you have this test environment so you can check first, right?

    Keep up with non-security patches as well as security patches. If you first have to install 100+ other patches before you can install the latest high-risk security patch, something might break. Then you are left choosing between staying vulnerable and going offline until you have fixed everything.
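
    Awareness is something you can partly automate. As a sketch, on a Debian/Ubuntu system you can ask apt for a dry run and count the backlog (on Red Hat style systems, yum check-update plays a similar role):

        import subprocess

        def pending_upgrades():
            """Return the package names apt would upgrade, using a dry run."""
            out = subprocess.run(
                ["apt-get", "-s", "upgrade"],   # -s: simulate, changes nothing
                capture_output=True, text=True, check=True).stdout
            return [line.split()[1] for line in out.splitlines()
                    if line.startswith("Inst ")]

        if __name__ == "__main__":
            pkgs = pending_upgrades()
            print("%d packages have pending updates" % len(pkgs))
            for pkg in pkgs:
                print(" ", pkg)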

    Conclusion

    So what are the most basic ingredients for secure systems?

    1. only run required services
    2. harden those required services
    3. deploy a firewall as an additional defense layer
    4. deploy secure application code
    5. keep up-to-date with security patches
    6. audit and review your systems and application code on a regular basis

    With this small number of steps, you will be able to protect against a lot of security threats. I'm not saying this is everything that is necessary, but it is a good foundation to build on. You still have to identify the risks that apply to your particular situation. Those risks may require you to take additional measures not discussed here.

  2. 'Secure Programming: How to Implement User Account Management'

    Thu 18 November 2010

    Most web applications work like this:

    The application uses a single database account to perform all actions. Users are just some records in a table. Account privileges and roles are part of this table, or separate tables.

    This implies that all security must be designed and built by the application developer. I think this is entirely wrong. There is a big risk:

    In such applications, SQL injection will allow full control of the entire database.

    This is something that is often overlooked, and the solution is simple. The application should not use a general account with full privileges. The application should use the database account of the user accessing the application. All actions performed by this user are then limited by the privileges of that database account. The impact of SQL injection would be significantly reduced.

    The public part of a website is still using an application account, but the privileges of this account can be significantly reduced. To obtain elevated privileges, a user must first authenticate against the application and thus the database.

    Please understand another benefit: it is no longer required to store the username/password combinations of privileged accounts on the application server. The configuration file will only contain the credentials of the unprivileged account. An attacker who compromises the application server will only obtain those limited credentials, not access to the database with elevated privileges.
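
    As an illustration, with PostgreSQL and the psycopg2 driver (an assumption; the idea works with any database that has per-user accounts) the whole trick is to open the session with the end user's own credentials. Host and database names are placeholders:

        import psycopg2  # assumption: PostgreSQL with the psycopg2 driver

        def connect_as(username, password):
            """Open a database session under the end user's own database account."""
            return psycopg2.connect(
                host="db.internal", dbname="appdb",
                user=username, password=password)

        def login(username, password):
            try:
                conn = connect_as(username, password)
            except psycopg2.OperationalError:
                return None  # wrong credentials: the database did the authentication
            # A successful SQL injection in later queries is now confined to
            # whatever this particular account has been granted.
            return conn

    Privileges are then managed with GRANT and REVOKE in the database itself instead of being re-implemented in application code.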

    I understand that this solution requires a bit more work to set up at the start, but once implemented, it reduces complexity and improves security considerably.

    Of course, the security of your data is only as good as the hardening of your database server. But that's another story.

  3. 'Zabbix Security: Client-Server Communication Seems Insecure'

    Mon 27 September 2010

    Zabbix is a popular tool for monitoring servers, services and network equipment. For monitoring hosts, Zabbix provides an agent that can be installed on the hosts that must be monitored.

    Based on the supplied documentation and some remarks on the internets, the 'security' of Zabbix agents seems to rely on an IP filter: the agent only accepts traffic from a specific IP address. However, the protocol used between the Zabbix server and its agents is unencrypted and does not seem to employ any additional authentication.

    With a man-in-the-middle attack, pretending to be the Zabbix server, you would be able to compromise all servers running the Zabbix agent. If remote commands are enabled on these hosts, the damage that could be done may be something you don't want to think about. Or maybe you do. It is true that for such an attack to be possible, an attacker needs access to a system within the same network (VLAN) as the server, but nonetheless, it is just not secure.

    Personally I don't think that Zabbix is suitable for high-security environments, due to the lack of encryption of sensitive data and the weak authentication mechanism.

    Zabbix should at least employ SSL/TLS for encrypted transport and use a password or shared secret for authentication. Even better would be the use of client-side certificates, as implemented by the system management tool Puppet.
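
    To sketch what that could look like (this is not the Zabbix protocol, just an illustration of TLS with mutual certificate authentication; file names, port and the reply are placeholders):

        import socket
        import ssl

        def monitored_host_listener(host="0.0.0.0", port=10050):
            ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
            ctx.load_cert_chain(certfile="agent.crt", keyfile="agent.key")
            ctx.load_verify_locations(cafile="monitoring-ca.crt")
            # Only a server presenting a certificate signed by our CA may connect.
            ctx.verify_mode = ssl.CERT_REQUIRED

            with socket.create_server((host, port)) as sock:
                with ctx.wrap_socket(sock, server_side=True) as tls_sock:
                    conn, addr = tls_sock.accept()
                    request = conn.recv(1024)    # e.g. an item key to report on
                    conn.sendall(b"42\n")        # the measured value, now encrypted
                    conn.close()

        if __name__ == "__main__":
            monitored_host_listener()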

    [update]

    Please note that Nagios agents also seem to work this way, but I have no experience with Nagios so I can't say for sure.

    And Nagios is widely deployed in the enterprise...

    Tagged as: zabbix, security
