Articles in the Uncategorized category

  1. Compiling Handbrake CLI on Debian Lenny

    August 03, 2010

    In this post I will show you how to compile Handbrake for Debian Lenny. Please note that although the Handbrake GUI version does compile on Lenny, it crashes with a segmentation fault like this:

    Gtk: gtk_widget_size_allocate(): attempt to allocate widget with width -5 and height 17

    (ghb:1053): GStreamer-CRITICAL **: gst_element_set_state: assertion `GST_IS_ELEMENT (element)' failed

    (ghb:1053): GStreamer-CRITICAL **: gst_element_set_state: assertion `GST_IS_ELEMENT (element)' failed

    (ghb:1053): GLib-GObject-CRITICAL **: g_object_get: assertion `G_IS_OBJECT (object)' failed

    Segmentation fault

    So this post only describes how to compile the command-line version of Handbrake: HandBrakeCLI.

    • Issue the following apt-get command to install all required libraries and software:

    apt-get install subversion yasm build-essential autoconf libtool zlib1g-dev libbz2-dev intltool libglib2.0-dev libpthread-stubs0-dev

    1. Download the source code at http://sourceforge.net/projects/handbrake/files/

    2. Extract the source code and cd into the new handbrake directory.

    3. Compile handbrake like this:

    ./configure --disable-gtk --launch --force --launch-jobs=2

    The --launch-jobs parameter determines how many parallel jobs are used for compiling Handbrake; set it to the number of CPU cores in your system. If you have a quad-core CPU, you should set this value to 4.

    The resulting binary is called HandBrakeCLI and can be found in the ./build directory. Issue a 'make install' from that directory to install the binary onto your system.
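
    For reference, a minimal sketch of the whole procedure as shell commands (the tarball name is hypothetical; substitute the version you actually downloaded from SourceForge):

    tar xf HandBrake-0.9.4.tar.bz2   # hypothetical file name
    cd HandBrake-0.9.4
    ./configure --disable-gtk --launch --force --launch-jobs=2
    cd build
    make install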

    Tagged as : Uncategorized
  2. Debian Lenny and Dell S300 H200 and H700 RAID Controllers

    July 10, 2010

    Update November 2010: it is reported in the comments that the next Debian release will support these controllers. However, exactly which controllers will be supported is not clear. Also, we may have to wait quite some time before Squeeze is released.

    Just a quick note: it seems that the new Dell RAID controllers are not supported by the current stable version of Debian, Lenny. The S300, H200 and H700 controllers, as supplied with the R310, R410 and 'higher' systems, are thus not a great choice if you want to keep things 'stock'. You may have to run backported kernels and installer images, and I could not figure out whether these controllers actually work with these latest images.

    The Perc 6/i controller is supported by Debian. It seems that it can only be supplied with the R410 and higher systems. If you have any additional information, like actual experience with these controllers, please report this.

    I installed VMware ESX 4.1 and installed Debian Lenny on top of that without problems.

    Please note that the network cards of the R410 are still not supported and will cause a headache!


    Tagged as : Uncategorized
  3. Secure Caching DNS Server on Linux With DJBDNS

    June 12, 2010

    The most commonly used DNS server software is ISC BIND, the "Berkeley Internet Name Domain" server. However, this software has a bad security track record and is, in my opinion, a pain to configure.

    Mr. D.J. Bernstein developed "djbdns", which comes with a guarantee: if anyone finds a security vulnerability in djbdns, you will get one thousand dollars. This prize has been claimed once, but djbdns still has a far better track record than BIND.

    Well, attaching your own name to your DNS implementation and tying a prize to finding a vulnerability in it does show some confidence. But there is more to it. D.J. Bernstein pointed out some important security risks regarding DNS and made djbdns immune to them, even before they became a serious world-wide security issue. However, djbdns is to this day vulnerable to a variant of this type of attack, and the dbndns package is as of 2010 still not patched. Although the risk is small, you must be aware of this. I still think that djbdns is less of a security risk, especially regarding buffer overflows, but it is up to you to decide which risk you want to take.

    The nice thing about djbdns is that it consists of several separate programs that each perform a dedicated task. This is in stark contrast with BIND, which is a single program that provides all DNS functionality. One can argue that djbdns is far simpler and easier to use.

    So this post is about setting up djbdns on a Debian Linux host as a forwarding DNS server, in other words a 'DNS cache'. This is often used to speed up DNS queries: clients do not have to connect to the DNS server of your ISP but can use your local DNS server instead. This server also caches the results of queries, so it reduces the number of DNS queries sent out to your ISP's DNS server or the Internet.

    Debian Lenny has a patched version of djbdns in its repository. The applied patch adds IPv6 support to djbdns. This is how you install it:

    apt-get install dbndns

    The dbndns package is actually a fork of the original djbdns software. Now the program we need to configure is called 'dnscache', which only does one thing: performing recursive DNS queries. This is exactly what we want.

    To keep things secure, the djbdns software must not be run with superuser (root) privileges, so two accounts must be made: one for the service, and one for logging.

    groupadd dnscache

    useradd -g dnscache dnscache

    useradd -g dnscache dnscachelog

    The next step is to configure the dnscache software like this:

    dnscache-conf dnscache dnscachelog /etc/dnscache 192.168.0.10

    The first two arguments tell dnscache-conf which system user accounts to use for this service. The /etc/dnscache directory will store the dnscache configuration. The last argument specifies which IP address to listen on. If you don't specify an IP address, localhost (127.0.0.1) is used. If you want to run a forwarding DNS server for your local network, you need to make dnscache listen on an IP address on your local network, as in the example.

    Djbdns relies on daemontools, and in order for dnscache to be started by daemontools, we need to perform one last step:

    ln -s /etc/dnscache /etc/service/

    Within a couple of seconds, the dnscache software will be started by the daemontools software. You can check it out like this:

    svstat /etc/service/dnscache

    A positive result will look like this:

    /etc/service/dnscache: up (pid 6560) 159 seconds

    However, the cache cannot be used just yet. Dnscache is governed by some text-based configuration files in the /etc/dnscache directory. For example, the ./env/IP file contains the IP address, configured previously, on which the service will listen.
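
    For example, with the IP address from the dnscache-conf step above, the file should contain the address you passed to dnscache-conf:

    cat /etc/dnscache/env/IP
    192.168.0.10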

    By default, only localhost will be able to access the dnscache. To allow access for all clients on the local network, you have to create a file named after the network in ./root/ip/. If your network is 192.168.0.0/24 (thus 254 hosts), create a file named 192.168.0:

    Mini:/etc/dnscache/root/ip# pwd

    /etc/dnscache/root/ip

    Mini:/etc/dnscache/root/ip# ls

    192.168.0
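
    As a minimal sketch, the same step as shell commands. The file can be empty; its name is what matters. Because dnscache only reads this directory at startup, restart it afterwards (svc -t sends a TERM and daemontools restarts the service):

    touch /etc/dnscache/root/ip/192.168.0
    svc -t /etc/service/dnscache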

    Now clients will be able to use the dnscache. You are running a simple forwarding DNS server, and it probably took you less than ten minutes to configure it. Although djbdns is not very well maintained in Debian Lenny, there is currently no really good alternative to BIND. PowerDNS has not been very secure (buffer overflows), while djbdns / dbndns has never been affected by this type of vulnerability in more than ten years.
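
    To actually use the cache, a client on the 192.168.0.0/24 network only has to point its resolver at the dnscache address. A minimal sketch for a Linux client, assuming the IP address from the example above, is a one-line /etc/resolv.conf:

    nameserver 192.168.0.10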

  4. RAID Array Size and Rebuild Speed

    April 03, 2010

    When a disk in a RAID 5 array fails, you are no longer protected against another disk failure, and thus against data loss. Until the rebuild completes, you are vulnerable: the longer it takes to rebuild your array, the longer you are exposed. The rebuild itself is also a disk-intensive period, because the array must be reconstructed from the remaining drives.

    When one disk in a RAID 6 array fails, you are still protected against data loss, because the array can survive a second disk failure. So RAID 6 is almost always the better choice, especially with large disks (1+ TB), because the rebuild time largely depends on the size of a single disk, not on the size of the entire RAID array.

    However, there is one catch: the size of the RAID array matters once it becomes big (10 or more drives). Whether you use hardware- or software-based RAID, the processor must read the contents of all drives simultaneously and use that information to rebuild the replaced drive. With a large RAID array, such as my storage array with 20 disks, the check and rebuild of the array becomes CPU-bound.

    This is because the CPU must process 1.1 GB/s (as in gigabytes!) of data and use that data stream to rebuild that single drive. Using 1 TB drives, it checks or rebuilds the array at about 50 MB/s, which is less than half of what the drives are capable of (100+ MB/s). Top shows that the CPU is indeed almost saturated (95%). Please note that a check or rebuild of my storage server currently takes about 5 hours, but that could be much shorter if the CPU were not saturated.
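
    To put some rough numbers to this (assuming the rebuild reads all 20 drives at about the same rate): 1.1 GB/s spread over 20 drives is roughly 55 MB/s per drive, and rebuilding a 1 TB drive at about 50 MB/s takes 1,000,000 MB / 50 MB/s ≈ 20,000 seconds, or roughly 5.5 hours, which matches the rebuild time mentioned above.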

    My array is not for professional use, and fast rebuild times are not that much of an issue. But if you are more serious about your setup, it may be advisable to create multiple smaller RAID volumes and glue them together using LVM or a similar solution.

    Tagged as : Uncategorized
  5. Linux: Monitor a Directory for Files

    March 22, 2010

    Inotify is a mechanism in the Linux kernel that reports when a file system event occurs.

    The inotifywait command-line utility can be used in shell scripts to monitor directories for new files. It can also be used to monitor files for changes. Inotifywait is often not part of the base installation of your Linux distro, so it must be installed separately (apt-get install inotify-tools). However, if you need to monitor a directory or perform some similar task, it is worth the effort.

    Here is how you use it:

    inotifywait -m -r -e close_write /tmp/ | while read LINE; do echo $LINE | awk '{ print $1 $3 }'; done

    Let's dissect this example one part at a time. The most interesting part is this:

    inotifywait -m -r -e close_write /tmp/

    What happens here? First, inotifywait monitors the /tmp directory. The monitoring mode is specified with the -m option; otherwise inotifywait would exit after the first event. The -r option specifies recursion; beware of large directory trees. The -e option is the most important part: you only want to be notified of new files once they are complete. So only after a close_write event should your script be notified. A 'create' event, for example, should not cause your script to perform any action, because the file would not be ready yet.

    The remaining part of the example is just to get output like this:

    /tmp/test1234/blablaf

    /tmp/test123

    /tmp/random.bin

    This output can be used as an argument to other scripts or functions, in order to perform some kind of action on each file.
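
    As an illustration, here is a minimal sketch of a watcher script built around the same idea. The process_file command is a hypothetical placeholder for whatever action you want to perform; the --format option of inotifywait is used to print the full path of each completed file directly:

    #!/bin/bash
    #
    # Watch a directory tree and hand every file that has been written and
    # closed to a processing command.
    WATCHDIR=/tmp

    inotifywait -m -r -e close_write --format '%w%f' "$WATCHDIR" | while read FILE
    do
        echo "New file: $FILE"
        process_file "$FILE"   # hypothetical placeholder; replace with your own action
    done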

    This mechanism is specific to Linux, so it is not an OS-independent solution.
