RAID for Everyone – Faster Linux with mdadm and XFS

I crashed my Linux. It took a lot of skill and root access, but I accidentally hosed my desktop, and backtracking would be more time consuming than running through a quick Slackware install. If you find yourself in this situation and have more than one drive in your machine, it makes sense to RAID the drives. RAID can either greatly increase drive performance (the drives being the bottleneck of any desktop) or mirror the drives for protection against disk failure. To read more about RAID, which keeps growing in popularity, try The Linux Software RAID HOWTO.

This quick how-to will try to cover the basics, but all the basics, needed to install any Linux desktop distribution on any machine with two or more drives. It begins with installing a Linux system on a RAID 1 partition, and continues with adding a RAID 0 home partition after the install. For the home partition, XFS will be used as the file-system and tweaked to illustrate some of its strengths with RAID. Finally, it covers replacing a failed drive in an array. Many bits of it will be relevant to other scenarios as well. Mostly, it attempts to demonstrate how simple it is to administer RAID arrays with mdadm.

Why software RAID (mdadm)? Chances are your motherboard already comes with an on-board RAID controller; they are present on motherboards as cheap as $60. I won’t be using mine, however, and this tutorial will not cover that route. I had the most miserable experience with my ATI on-board RAID, which is a proprietary chipset that worked out of the box only with SuSE and failed drives left, right and center. Even if you’re lucky enough to have a controller with decent Linux support, you will still have a hard time finding a decent interface for the firmware, the options will be lacking at best, and you will not find RAID 5 and 6 options on motherboards or low-end cards. You will also have no cheap way to recover data from a failed controller, short of buying the same hardware again. Proprietary software cards are not even worth mentioning. Since the CPU penalty for software RAID is fairly low on modern chips, and all Linux distributions support mdadm out of the box, that’s what I recommend.

Why Slackware? I’ll be using Slackware 13 because that’s what I like, and because the Slackware install CD gives the most partitioning freedom (read: a console with all the console tools) before the install. But this will work on anything Linux. Here it goes:

1.  What you need:

Get a Slackware CD/DVD ready, or any Linux installation CD from which you can access a console before the installation starts. Live CDs are great! Back up your data. No, really, do it now. We also assume you have at least two drives installed. They do not have to be identical in size, but the partition layout will be limited to what fits on the smallest of the drives.

2. BIOS:

Reboot with the Linux installation CD in the drive. In your BIOS, make sure the RAID controller is off (and that you can boot from CD/DVD).

3. Partition Drives:

Boot the CD/DVD and get to a console. Slackware just drops you into one on its own, a Debian DVD offers an alternative console on Alt+F2, and a live CD probably has a terminal program; pull one up. Log in as root, or do everything with sudo. Here comes the destructive part. Identify the drives you’ll use with

# fdisk -l

This examples uses /dev/sda and /dev/sdb. We also assume you have no active RAID arrays on the drives you manipulate. If you do, stop them:

# mdadm --stop /dev/md0

Create partition tables on the drives:

# cfdisk /dev/sda

Delete all partitions. Create a swap-type partition (type 82) and a Linux RAID autodetect partition (type FD). Write and exit. Repeat on the other drive(s) with the same sizes for both partitions (or use the sfdisk shortcut below). Note that you do not have to use the entire space right away. In this case, we will set up the root file-system on a mirrored partition, for redundancy and for ease with bootloaders, as most guides recommend. Only later will we attach a striped (for size and speed) home partition.
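Rather than repeating the layout by hand, you can clone the finished partition table from the first drive to the second with sfdisk, the same trick we’ll use later when replacing a failed drive (assuming /dev/sda already holds the layout you want and /dev/sdb may be overwritten):

# sfdisk -d /dev/sda | sfdisk /dev/sdb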

4. RAID the drives:

mdadm is our weapon of choice. It’s mighty but simple. Here’s a RAID1 (mirrored) device /dev/md0, using /dev/sda2 and /dev/sdb2 (assuming /dev/sda1 and sdb1 were used for swap):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

Now, RAID1 drives will take some time to rebuild (sync the mirror), depending on your partition size. I’ve seen 20GB partitions rebuilt in 15 minutes and 500GB partitions go for almost 2 hours over two 7,200 RPM SATA drives. You can tell what the status is by glancing at /proc/mdstat:

# cat /proc/mdstat
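As an aside, the kernel throttles rebuild speed so that normal I/O is not starved. If you want to see or raise the limits, they live under /proc as well (values are in KiB/s per device; the defaults vary by kernel, so treat this as a sketch):

# cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
# echo 50000 > /proc/sys/dev/raid/speed_limit_min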

Do you need to wait for the array to rebuild before proceeding? Many corners of the internet, including this one in the past, suggested you had to wait for an mdadm array to finish rebuilding before using it. However logical that sounds, in reality “the reconstruction process is transparent, so you can actually use the device even though the mirror is currently under reconstruction”, and we can move right along regardless of the type of RAID we chose:

5. Start Linux install and choose /dev/md0 as your root partition. Install the OS.

6. Setting up a RAID home partition, or any other partition, is not much more complicated. We’ll use RAID 0 for home, because of the volume it provides as well as the speed. We’ll be using /dev/sda3 and /dev/sdb3 for the striped array. So head over to the terminal:

# cfdisk /dev/sda

Create a third partition as you wish, set its type to Linux RAID autodetect, and repeat on /dev/sdb. Make sure the partition sizes match.

7. Set up RAID:

This is a simple striped setup across two partitions:

# mdadm --create /dev/md1 --level=0 --chunk=256 --raid-devices=2 /dev/sda3 /dev/sdb3

Here, we had to specify the chunk size, i.e. how much data is written to each disk before moving on to the next, in addition to the options we used with RAID 1. The optimal chunk size depends entirely on your primary usage.
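Depending on your distribution and the mdadm metadata version, the kernel’s RAID autodetection may already assemble both arrays at boot, but it doesn’t hurt to record them in mdadm’s configuration file as well. A sketch, assuming the Slackware location /etc/mdadm.conf (Debian-based systems use /etc/mdadm/mdadm.conf):

# mdadm --detail --scan >> /etc/mdadm.conf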

8. Setup a file-system:

You can use any file-system and just skip this section, but XFS has special tweaks for RAID, and it’s worth taking advantage of them for performance. XFS allows specifying the RAID dimensions of the partition to the file-system, which takes them into account when laying out reads and writes to match the array. Two parameters are used with XFS at creation and mount time: sunit, the size of each chunk in 512-byte sectors, and swidth, which is sunit times the number of data drives (all member drives for RAID 0, one fewer for RAID 5, two fewer for RAID 6). For our array, sunit = 256 KiB / 512 B = 512 and swidth = 512 * 2 drives = 1024. More about tuning XFS for RAID can be found in the XFS documentation. To create a matching XFS file-system:

# mkfs.xfs -d sunit=512,swidth=1024 /dev/md1

9. Move home to its new home.

To quickly move the contents of the old /home directory to the new RAID partition, simply rename the old home, create a new home, mount it, and copy everything over. We’ll put an entry in fstab to mount the file-system properly, with no access-time logging, to get a performance boost. All of this must be done as root with all other users logged out. (If home was already on a separate partition, you must unmount it and remount it somewhere else rather than renaming it):

# mv /home /home.old
# mkdir /home
# echo "/dev/md0     /home     xfs     noatime,sunit=512,swidth=1024    0    2" >> /etc/fstab
# mount /home
# cp -a /home.old/. /home
# rm -rf /home.old
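Once /home is mounted, you can double-check that XFS picked up the stripe geometry with xfs_info; note that it reports sunit and swidth in file-system blocks (typically 4 KiB), not 512-byte sectors:

# xfs_info /home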

10. That’s it for the setup. Now, let’s give our new RAID array a real test drive.

We can check the status of all our arrays with:

# cat /proc/mdstat

We can monitor RAID1 arrays (but not RAID0) with:

# mdadm --monitor --oneshot /dev/md0
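For continuous monitoring rather than a one-off check, mdadm can also run in the background and send mail when an array degrades. A sketch, assuming local mail delivery to root actually works on your machine:

# mdadm --monitor --daemonise --mail=root --delay=300 /dev/md0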

But the most rewarding bit will be performing some speed tests with hdparm. Let’s check the read speed of a single drive:

# hdparm -t /dev/sda
/dev/sda:
 Timing buffered disk reads:  366 MB in  3.01 seconds = 121.45 MB/sec

Compare this to the speed of our RAID 0 array:

# hdparm -t /dev/md1
/dev/md1:
 Timing buffered disk reads:  622 MB in  3.01 seconds = 206.71 MB/sec

Yup, that’s right folks: the read speed on a two-drive RAID 0 array is nearly double that of a single drive. That being expected, it is by no means less satisfying 😉
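Keep in mind that hdparm only measures raw sequential reads from the block device. For a rough feel of write throughput through the mounted file-system, a simple dd run works too (a sketch; the test file name is arbitrary, it needs 1GB of free space, and conv=fdatasync makes dd include the flush to disk in its timing):

# dd if=/dev/zero of=/home/raidtest bs=1M count=1024 conv=fdatasync
# rm /home/raidtest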

Bonus: recovering from a failed drive in a RAID1 array. This will be handy for your root partition. Needless to say, you will not be able to recover anything from your RAID0, because it has zero redundancy. With RAID1, however, the machine just keeps humming along after a drive gives up. How will you know you have a failed drive, then? If the drive failed partially (repeatedly failing on some seeks but not all), you will notice your performance degrade. You can test for performance degradation even before it becomes severe, with hdparm as explained above (see the one-liner below). If the drive failed totally, you might not notice at all, so it’s good to occasionally peek at /proc/mdstat to see that the array is up. In that case, the fix is easy: just pop in a new drive while the system is off. However, if you have a partially failed drive in a RAID1 array and you do not wish to wait for a reboot (very reasonable on a server that keeps working, if you could just avoid the horrible seek delays from the failing drive), you can drop it yourself with two commands.
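To compare the member drives side by side, just run the same hdparm test against each of them; a noticeably slower result on one drive is a strong hint:

# for d in /dev/sda /dev/sdb; do hdparm -t "$d"; done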

My /dev/md0 is a RAID1 array made of /dev/sda2 and /dev/sdb2. In my case, it was easy to see the drive access light throwing fits and the desktop freezing occasionally, indicating a problem with the drives. A quick run of hdparm revealed that /dev/sdb was the failing drive, as it showed much slower reads. It left the file-system on /dev/sdb2 barely accessible, which slowed my RAID1 array during writes (reads were still fast because they could be served from the good drive alone, but writes had to happen on both drives). So as soon as I got my desktop back from one of its freezes, I fired up a terminal, marked the drive as failed in the array, and removed it from the array:

# mdadm --manage /dev/md0 --fail /dev/sdb2
# mdadm --manage /dev/md0 --remove /dev/sdb2
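At this point the array keeps running in degraded mode on the remaining drive. If you want to see that spelled out, mdadm will report the array state and the missing member:

# mdadm --detail /dev/md0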

Past that point, it’s just a matter of powering off and replacing the drive at your earliest convenience. Once you’ve got a new drive, pop it in, boot the system up, clone the partition table, and add the new partition to the array:

# sfdisk -d /dev/sda | sfdisk /dev/sdb
# mdadm --manage /dev/md0 --add /dev/sdb2

… watch the array rebuild itself by looking at /proc/mdstat, and you’re done. Phew. 🙂
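If you’d rather not keep re-running cat, watch will refresh it for you, and mdadm can simply block until the recovery finishes; a quick sketch:

# watch -n 5 cat /proc/mdstat
# mdadm --wait /dev/md0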

I hope the minimal amount of code and steps demonstrates how easy it is for anyone with two hard drives to enjoy the benefits of RAID, making your Linux desktop even faster and safer without any significant investment of time or money.

Responses to “RAID for Everyone – Faster Linux with mdadm and XFS”

  1. Hi, I recently wrote two articles about RAID for end users that you might want to have a look at.

    The first one is about deciding which storage layout should be used according to your workload:

    http://www.vigneras.name/pierre/wp/2009/07/21/choosing-the-right-file-system-layout-under-linux/

    The other one is about a new layout for storage devices that I call PROUHD, which has been specifically designed for the end user. Of course, the article is about the general layout algorithm; the proper end-user tool has yet to be developed.

    http://www.vigneras.name/pierre/wp/2010/04/14/prouhd-raid-for-the-end-user/

    Both articles might be of some interest to your readers and to yourself.

    Regards.

  2. I found this somewhat helpful. One step to take after making a RAID device larger than 2TB: create a GUID partition table (GPT) for it. I used “gdisk” to accomplish this. Also, I didn’t need to specify parameters for XFS: past a certain size, it defines its own limits.

    http://www.rodsbooks.com/gdisk/

    • Ernest Kugel

      There’s a more updated post on this blog, “setting up the fastest filesystem”, which covers XFS and EXT4 in more detail. gdisk is neat; I haven’t tried it yet, but it looks great.
