Tag Archives: linux

Why Heartbleed did more to increase security than any friendly advice you ever gave

Doing IT operations during these heartbleeding times isn’t easy. If you’ve been following the Heartbleed saga even remotely, you know it’s a critical flaw in OpenSSL, the most popular implementation of the most popular encryption protocol, SSL/TLS. To put it plainly, most of the credentials you used in the recent past are potentially compromised: even if everyone is patched now, your credentials might have been stolen beforehand.

If you indeed do operations, you know it gets painful logging into yet another web console of some service provider to see a warning banner with the latest security advisory about Heartbleed. It usually goes something like “please reset all your passwords, please recycle all your keys, please reissue all your certificates”. And the more responsible of us do: we reset all our passwords, we annihilate the deployment keys we used for the last year, and certificates older than the buildings we work in are reissued. We update and patch all our servers, and we have a lot of servers. We mass-email users to change their login credentials, and go on a frenzy of forcing everyone to enable 2-factor authentication and security questions. And we’re probably just getting started.

We press hard, and the users, for once as terrified as we are, are cooperative. Hours are spent memorizing, forgetting and resetting new passwords, patching servers, and fixing outages caused by revoked keys and certificates. Everyone gets a crash course in encryption. Security is on everyone’s agenda. Users who thought certificates were for gifts now ask about SSL. Naturally, educating, updating, revoking and reissuing is time-consuming, and the regular workday goes down the drain. Yet another compromised vendor or service, yet another unpatched server.

And here you should stop and think about what you are going through: if you value security with more than lip service, you realize this is one of the best things that has happened to your operation. You are basically forced to do the very same things you knew you should have been doing all along, but always left for later. Recycling my passwords? New ones are hard to remember! Reissuing certificates on a regular basis? What kind of a sadist would put that in OpenVPN’s wiki! Changing deployment keys? Doesn’t look amazing on anyone’s resume! Just stop and try to remember: how many times, before Heartbleed, did your users/coworkers/clients care about the same things you do? How many times did they come to you asking about updating a server, instead of you suggesting to them in a friendly manner that they should really apply that last year of updates on their corporate laptop?

Heartbleed is the best thing that happened to network security since firewalls: Very few credentials were likely compromised via Heartbleed, but everyone is rotating their credentials and protecting themselves from more common vulnerabilities and leaked passwords. It’s hard to appreciate it between all the password resets, but when you are done with it all, sit back and have a beer. To Heartbleed!





Filed under #!

TrueCrypt + BTRFS

TrueCrypt is the weapon of choice for easy whole-filesystem encryption, and conveniently supports FAT, NTFS, and Ext2/3/4 out of the box. This means all you have to do is specify the filesystem during the creation of the encrypted volume, and it will be automatically mounted when the volume is unlocked. That’s great!

…But wait, I don’t want any Ext4, I want the latest and greatest BTRFS (ooooh copy on write…). Luckily, it’s only slightly more complicated, and requires treating a TrueCrypt volume like, well, a volume and not a filesystem: Create a volume and make it available, then interact with the filesystem on the volume outside of TrueCrypt.

truecrypt --text --create --filesystem=none /dev/sdx1

truecrypt --text --mount --filesystem=none --keyfiles= --volume-type=normal --protect-hidden=no --slot=1 /dev/sdx1

mkfs.btrfs /dev/mapper/truecrypt1

mount /dev/mapper/truecrypt1 /mnt

To dismount the filesystem and then the volume:

umount /mnt

truecrypt --text --dismount /dev/sdx1


Filed under #!, Slackware, Uncategorized

Encrypting a Linux home partition with Truecrypt


This post will be short (and sweet). We will secure the majority of our personal data by encrypting our home partition. This is important for users with personal or sensitive data on laptops, as well as on mobile devices such as the Google Nexus 7 when it runs Ubuntu Linux.

General Information:

The steps to encrypt a partition with Truecrypt are probably the easiest ones compared to alternatives such as LUKS and other built-in Linux kernel tools. The process involves installing Truecrypt, creating an encrypted partition, copying all the sensitive data into it, deleting the sensitive data from the unencrypted partition it was previously on, and configuring mounting and unmounting of the Truecrypt volume during startup/shutdown. You will need to perform this as the root user, and you will need an empty partition which you can encrypt. The steps are generic: they assume you are encrypting a brand new home partition (and not something else), after storing your user data under the /home folder on the root partition. They have been tested on Slackware64 but should work on any Linux distribution. Please adjust the partitions, runlevel scripts and installation procedure for your distribution (as an example, on Ubuntu, Truecrypt might be available via the Apt repositories instead of a binary installation package, and the runlevels will not be in traditional BSD style).


  1. Install Truecrypt after downloading from here:
    # tar vxf ./truecrypt-7.1a-linux-x64.tar.gz
    # ./truecrypt-7.1a-setup-x64
  2. Create an encrypted Truecrypt partition. You will be asked about the partition, passwords and keyfiles to use:
    # truecrypt --text --create
  3. Mount the new encrypted volume in a temporary location and copy all sensitive data to it. This should be done as root from singleuser runlevel if operating on the /home folder:
    # telinit 1
    # mkdir /tmp/encrypted
    # /usr/bin/truecrypt --text --mount --protect-hidden=no --volume-type=normal --keyfiles= /dev/sda6 /tmp/encrypted
    # cp -aR --preserve=all /home/* /tmp/encrypted/
    # rm -rf /home/*
  4. Configure mounting/unmounting on startup/shutdown:
    Edit /etc/rc.d/rc.S and add the following line after “/sbin/mount -a …”:

    /usr/bin/truecrypt --text --mount --protect-hidden=no --volume-type=normal --keyfiles= /dev/sda6 /home

    Edit /etc/rc.d/rc.6 and add the following line before “/sbin/umount -a …”:

    /usr/bin/truecrypt --text --dismount /dev/sda6
  5. Test with a reboot!


Filed under #!, Slackware

Virtual Appliance with Debian Squeeze and OpenWRT-XBurst Development Tools for Qi Hardware’s Ben Nanonote

This post is about a Virtual Appliance with Debian Squeeze and OpenWRT-XBurst Development Tools installed, which would allow immediately compiling OpenWRT packages for the Nanonote without going through the painful process of setting up the development environment yourself.

As a non-developer, I found getting a working development environment to be the single most confusing part of porting to the Nanonote, even more confusing than OpenWRT’s Makefiles. Granted, this could be my personal lack of talent or skill, but it left me thinking that removing this “stumbling block” for some of the less experienced users might open more doors, faster, for beginning Nanonote enthusiasts. The instructions at http://en.qi-hardware.com/wiki/Building_OpenWRT_on_Debian_6 are great, but might slightly intimidate less experienced Linux users. They are also daunting to follow if the need arises frequently (if you reinstall your OS, royally screw something up, or hit other scenarios I’m sure you’ve run into).

The easiest way around this I could come up with was creating a Virtual Appliance which contains the basics for compiling for the Nanonote, using the wiki instructions for Debian Squeeze. Such an appliance can be run in VirtualBox (free and open source) or VMware Player (free as in beer), even on Windows hosts. The result is a single 2.4 GB file with a toolchain ready to “accept” package Makefiles and compile them. Debian was installed, the toolchain was compiled, the locales and paths were set. I gave it a quick test compiling Pem (and a load of Perl dependencies) and it seemed to work.

The Virtual Appliance is currently unimaginatively called “Debian Squeeze with OpenWRT-XBurst Development Tools 2011-08-27” and comes as a single .OVA file. See details below:

1. Install VirtualBox.
2. Download Virtual Appliance .OVA file (links below)
3. In VirtualBox click on “Machine” > “Import” and select the .OVA file.

I’ve added a brief section under the Building on … Debian Squeeze wiki page.

Hope someone finds this helpful.

2011-08-27 Release:

Virtual Appliance Download Page on 1fichier.com:  http://4pp1qh.1fichier.com/en/
.OVA file MD5 sum:  3ad6e2aa9379336c10746a3062538d32
user:  build
password:  gongshow
root password:  gongshow

2011-02-23 Release:

Virtual Appliance Download Page on 1fichier.com:  http://0tqstz.1fichier.com/en/
.OVA file MD5 sum:  f9ebe1b0cfe63ae1aa584ddff7b222ed
user:  build
password:  gongshow
root password:  gongshow


— Ernest Kugel


Filed under Ben Nanonote

Monitoring Amazon EC2 instances and other Cloud Resources with Hyperic HQ (and other monitoring platforms)

I’ve had to tackle this task recently and could not find a write-up. Nice folks from Hyperic, and others on Twitter, suggested OpenVPN or an SSH tunnel. I opted for the latter, and after setting up two tunnels and properly configuring the agent, I now have an Amazon EC2 Windows instance showing up as a platform in my Dashboard. Note that these instructions will work for other monitoring software as well (Zabbix comes to mind). Here’s how you can have yours too:

1. Install an SSH server on the to-be-monitored cloud instance. For Linux, OpenSSH is easy to install and set up, and usually already comes with most distributions. All you have to do is create a user and a password, or keys. On Windows, CopSSH will do the trick – you just have to add a new user and configure it through the CopSSH control panel. Make sure the SSH server runs and the login credentials work.

2. Install an SSH client on your Hyperic HQ server. For Linux, again, OpenSSH will do the trick and is most likely already there. For Windows, try Cygwin or PuTTY.

3. Designate a unique name for localhost in the hosts file of both the Hyperic server and the cloud instance. On Linux, it is /etc/hosts. On Windows, the location moves between versions but is usually C:\Windows\system32\drivers\etc\hosts . Call it cloudagent1. The line should look like this:     127.0.0.1     localhost cloudagent1

4. From the Hyperic server, initiate an SSH tunnel which forwards two ports: first from the cloud instance to the Hyperic server (usually on port 7443), and second from the Hyperic server to the cloud instance, to the port on which the Hyperic agent runs. If you already have a Hyperic agent on your Hyperic server, you MUST use a different port. As the local agent usually runs on port 2144, you may want to pick something like port 22144. With OpenSSH on Cygwin and Linux you can create the tunnels like this (assuming your username is “user” and your cloud instance is “cloud-instance.com”):

$ ssh user@cloud-instance.com -R 7443:cloudagent1:7443 -L 22144:cloudagent1:22144 -N -f

5. Configure the Hyperic agent on your cloud instance to use port 22144. The rest of the settings can be copied from your locally monitored agents. You can use “cloudagent1” (or whichever name you have assigned to the localhost) in the configuration.

Hope this helped!


Filed under #!

Slackware 13.37 and the ASUS PCE-N13 Wireless Adapter


The ASUS PCE-N13 is not especially pretty, but it’s cheap, fast, and officially supported!

If you are in the market for a wireless adapter for your Linux desktop, the best bang for the buck today seems to be the ASUS PCE-N13. Not only will ~$30 get you a/b/g/n support, 300Mbps transfer rates, 2 antennas and a PCIe bus, but it also says “Linux Support” right on the box, and not in some fine print in an obscure corner. It was the only card in my local shop to say so, although all of them work just fine. So this is a *moral* choice as well 😉

The card is indeed supported by the rt2860sta module. Unfortunately, with both Slackware 13.37 and Ubuntu 10.10, the kernel module fails to bind to the card because the various rt2800 and rt2x00 modules conflict with rt2860sta. The module loads, but all attempts to initialize the card result in error messages. To remedy this, simply prevent the conflicting modules from loading by adding them to /etc/modprobe.d/blacklist.conf like this:

# Blacklist rt2800 and rt2x00 modules
# This will allow the rt2860sta module to bind to the ASUS PCE-N13 card:
blacklist rt2800lib
blacklist rt2800pci
blacklist rt2x00lib
blacklist rt2x00pci


Filed under Slackware

The Quest For The Fastest Linux Filesystem

What’s this thing about?

This post has a few main points:

1. Speeding up a filesystem’s performance by setting it up on a tuned RAID0/5 array.

2. Picking the fastest filesystem.

3. The fastest format options for Ext3/4 or XFS filesystems.

4. Tuning an Ext3/4 filesystem’s journal and directory index for speed.

5. Filesystem mount options that increase performance, such as noatime and barrier=0.

6. Setting up LILO to boot from a RAID1 /boot partition.

The title is a bit of an oversimplification 😉 The article is intended to remain a work in progress as “we” learn and as new, faster tools become available. This article is not intended to cover the fastest hardware (yet). The goal is the “fastest” filesystem possible on whatever device you have. Basically, “we” want to set up and tweak whatever is possible to get our IO writes and reads to happen quicker. Which IO reads? Random or sequential? Long or short? The primary goal is a quick Linux root filesystem, which is slightly different from, let’s say, a database-only filesystem, or a /home partition for user files. Oh, and by the way, do not use this on your production machines, people. Seriously.



The first question is, how many devices would you like your filesystem to span? The simple and correct answer is – the more the faster. To use one filesystem across multiple devices, a single “virtual” device can be created from multiple partitions with RAID. (Recently developed filesystems, like BTRFS and ZFS, are capable of splitting themselves intelligently across partitions to optimize performance on their own, without RAID) Linux uses a software RAID tool which comes free with every major distribution – mdadm. Read about mdadm here, and read about using it here. There’s also a quick 10 step guide I wrote here which will give you an idea about the general procedure of setting up a RAID mdadm array.

Plan your array, and then think about it for a while before you execute – you can’t change the array’s geometry (which is the performance-sensitive part) after it’s created, and it’s a real pain to migrate a filesystem between arrays. Not to mention a Linux root filesystem.

Deciding on a performance oriented type of RAID ( RAID0 vs. RAID5 )

The rule of thumb is to use 3 or more drives in a RAID5 array to gain redundancy at the cost of a slight performance loss over a RAID0 array (10% CPU load at peak times on my 2.8 GHz AthlonX2 with a 3 disk RAID5 array). If you only have 2 drives, you cannot use RAID5. Whatever your situation is, RAID0 will always be the fastest, but less responsible, choice.

RAID0 provides no redundancy and will fail irrecoverably when one of the drives in the array fails. Some would say you should avoid putting your root filesystem on a non-redundant array, but we’ll do it anyways! RAID0 is, well, the *fastest* (I threw caution to the wind and I’m typing this from a RAID0 root partition, for what it’s worth). If you are going to be, or have been, using a RAID0 array, please comment about your experiences. Oh, and do back up often. At least weekly. To an *external* drive. If you only have one drive you can skip to the filesystem tuning part. If you are going to use RAID0/5, remember to leave room for a RAID1 array, or a regular partition, for /boot. Today, LILO cannot yet boot from a RAID0/5 array.

Deciding on a RAID stripe size ( 4 / 8 / 16 / 32 / 64 / 128 / 256 … )

You will need to decide, for both RAID0 and RAID5, on the size of the stripe you will use. See how such decisions affect performance here. I find the best results for my personal desktop to be 32kb chunks. 64 does not feel much different. I would not recommend going below 32 or above 128 for a general desktop’s root partition. I surf, play games, stream UPnP, run virtual machines, and use a small MySQL database. If I were doing video editing, for example, a significantly bigger stripe size would be faster. Such special-purpose filesystems should be set up for their own needs and not on the root filesystem, if possible. Comments?

RAID 5 – deciding on a parity algorithm ( Symmetric vs. Asymmetric )

For RAID5, the parity algorithm can be set to 4 different types. Symmetric-Left, Symmetric-Right, Asymmetric-Left, and Asymmetric-Right. They are explained here, but they appear to only affect performance to a small degree for desktop usage, as one thread summarized.

Creating a RAID0 array

Using the suggestions above, the command to create a 2-disk RAID0 array for a root partition on /dev/md0 using the partitions /dev/sda1 and /dev/sdb1 should look like this:

# mdadm --create /dev/md0 --metadata=0.90 --level=0 --chunk=32 --raid-devices=2 /dev/sd[ab]1

Note the --metadata option, which with 0.90 specifies the older mdadm metadata format. If you use anything other than 0.90, LILO will fail to boot.

The Fastest Filesystem – Setup and Tuning

Deciding on a Filesystem ( Ext3 vs. Ext4 vs. XFS vs. BTRFS )

The Ext4 filesystem does seem to outperform Ext3, XFS and BTRFS, and it can be optimized for striping on RAID arrays. I recommend Ext4 until BTRFS catches up in performance, becomes compatible with LILO/GRUB, and gets an FSCK tool.

Deciding on a Filesystem Block Size ( 1 vs. 2 vs. 4 )

It is impossible to overstate how important this part is. Luckily, if you don’t know what this is and just don’t touch it, most mkfs tools default to the fastest choice – 4kb. Why you would not want to use 1 or 2 is neatly shown in the benchmarking results of RAID performance on those block sizes. Even if you are not using RAID, you will find 4kb blocks to perform faster. Much like the RAID geometry, this is permanent and cannot be changed.

Creating a RAID-optimized Ext4 filesystem ( stride and stripe-width )

Use those guidelines to calculate these values:

stride = RAID chunk size / filesystem block size
stripe-width = stride * number of data-bearing drives ( all drives for RAID0, drives minus one for RAID5 )

Pass the stride and the stripe-width to mkfs.ext4, along with the block size in bytes, like this:

# mkfs.ext4 -b 4096 -E stride=8,stripe-width=16 /dev/md0

A handy tool to calculate those things for you can be found here.
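If you would rather not trust a web calculator, the same arithmetic is a couple of lines of plain shell. This is just a sanity check using the numbers from the example above (32kb chunks, 4kb blocks, 2-drive RAID0); adjust the variables for your own array:

```shell
# stride = RAID chunk size / filesystem block size
# stripe-width = stride * number of data-bearing drives
chunk_kb=32       # RAID chunk size in kb
block_kb=4        # filesystem block size in kb
data_drives=2     # RAID0: all drives carry data; RAID5: drives minus one
stride=$(( chunk_kb / block_kb ))
stripe_width=$(( stride * data_drives ))
echo "stride=$stride stripe-width=$stripe_width"
```

This prints stride=8 stripe-width=16, matching the mkfs.ext4 command above. For a 3-disk RAID5 array with the same chunk size, data_drives would be 2 (drives minus one), giving the same stripe-width of 16.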

Creating an optimized XFS filesystem ( sunit and swidth )

The XFS options for RAID optimization are sunit and swidth. A good explanation about those two options can be found in this post. A quick and dirty formula to calculate those parameters was taken from here:

sunit = RAID chunk in bytes / 512
swidth = sunit * number of data-bearing drives ( all drives for RAID0, drives minus one for RAID5 )

The sunit for a 32kb (32768-byte) chunk would be 32768 / 512 = 64

The command to create such a filesystem for a 32kb chunk size RAID0 array with 2 drives and a 4096 (4kb) block size will look something like this:

# mkfs.xfs -b size=4096 -d sunit=64,swidth=128 /dev/md0
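The sunit/swidth arithmetic can be sanity-checked in the shell the same way (again using the example’s 32kb chunk and 2-drive RAID0):

```shell
# sunit and swidth are expressed in 512-byte sectors.
chunk_bytes=$(( 32 * 1024 ))       # 32kb RAID chunk, in bytes
data_drives=2                      # RAID0: all drives carry data
sunit=$(( chunk_bytes / 512 ))
swidth=$(( sunit * data_drives ))
echo "sunit=$sunit swidth=$swidth"
```

This prints sunit=64 swidth=128, the values passed to mkfs.xfs above.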

Tuning the Ext3 / Ext4 Filesystem ( Journal )

There’s a good explanation about the 3 modes in which a filesystem’s journal can be used on the OpenSUSE Wiki. That same guide will rightly recommend avoiding writing actual data to the journal to improve performance. On a newly created but unmounted filesystem, disable the writing of actual data to the journal:

# tune2fs -O has_journal -o journal_data_writeback /dev/md0

Turning on Ext3 / Ext4 Directory Indexing:

Your filesystem will perform faster if the directories are indexed:

# tune2fs -O dir_index /dev/md0
# e2fsck -D /dev/md0

Filesystem Mounting Options ( noatime, nodiratime, barrier, data and errors options ):

Some options should be passed to the filesystem on mount to increase its performance:

noatime, nodiratime – Do not record access times of files and directories.

barrier=0 – Disable barrier sync (Only safe if you can assure uninterrupted power to the drives, such as a UPS battery)

errors=remount-ro – When we have filesystem errors, we should remount our root filesystem readonly (and generally panic).

data=writeback – For Ext3 / Ext4. If your journal is in writeback mode (as we previously advised), set this option.

My fstab looks like this:

/dev/md0         /                ext4        noatime,nodiratime,data=writeback,stripe=16,barrier=0,errors=remount-ro      1   1

And my manual mount command will look like this:

# mount /dev/md0 /mnt -o noatime,nodiratime,data=writeback,stripe=16,barrier=0,errors=remount-ro

Did I mention to NEVER do this on a production machine?

Installing your Linux

Install as usual, but do not format the root partition you’ve set up! If you are using RAID0/5, you have to set up a separate RAID1 or primary /boot partition. In my experience, leaving the boot partition unoptimized does not affect regular performance, but if you are keen on shaving a few milliseconds off your boot time you can go ahead and tune that filesystem yourself as well.

Making sure LILO boots

If you are using RAID0/5 for your root partition, you must set up a separate non-RAID or RAID1 partition as /boot. If you do set up your /boot partition on a RAID1 array, you have to make sure to point LILO to the right drive by editing /etc/lilo.conf :

boot = /dev/md1

and make sure LILO knows about the mirroring of the /boot partitions by adding the line:

raid-extra-boot = mbr-only

Then, LILO must be reinstalled to the Master Boot Record while the /boot partition is mounted on the root partition. From a system rescue CD, with a properly edited lilo.conf file this will look something like this:

# mount /dev/md0 /mnt
# mount /dev/md1 /mnt/boot
# /mnt/sbin/lilo -C /mnt/etc/lilo.conf

… and reboot.

Experience and thoughts:

I’ve been following my own advice for the last couple of weeks. The system is stable and best of all, *fast*. May those not be “famous last words”, but I’ll update this post as I go. The only thing we all really need is comments and input. If you use something else that works faster for you – let us know. If something downgraded your stability to the level of Win98, please let us know. More importantly – if you see any errors, you got it – let us know.


To do:

Test this interesting post about Aligning Partitions

Test BTRFS on 2 drives without RAID/LVM


Filed under #!