When FCM was asked about RAID, it seemed like a good idea for me to finally implement RAID at home. Since I have a fair amount of access to different hardware, including lots of hard drives, the others at FCM agreed to let me write an article on RAID – despite having never created a RAID array before. I’m far from being a RAID expert, though I did talk to several people who’ve created RAID arrays before writing Part 1 (FCM 80).
As you’ll see, I managed to create the RAID 10 array (mirroring and striping) that I talked about in Part 1. But when I went to test the array by removing a drive, it degraded, and I was unable to restore it before being redirected to another screen and then to a grub prompt.
When I created my original RAID 10 array, I used 4 hard drives (each a 250GB SATA drive). The total usable size was 500GB: 2 drives are striped together into a 500GB set, which is then mirrored to the other 2 drives. For this article, I’m using screenshots from a RAID 10 array I created in VirtualBox.
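The arithmetic behind that is simple: RAID 10 usable capacity is the raw capacity divided by the number of mirror copies. A quick sanity check in shell, using the figures from my original four-drive setup:

```shell
# RAID 10 usable capacity = (drives / copies) * drive size.
drives=4        # total drives in the array
copies=2        # each block is stored twice (mirroring)
size_gb=250     # capacity of each drive in GB
usable=$(( drives / copies * size_gb ))
echo "Usable capacity: ${usable}GB"
```

With 4 drives and 2 copies of every block, that comes out to 500GB.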
When I started to set up the array, I was stuck because I kept booting Live CDs and launching the graphical installer. The problem with graphical installs is that they don’t seem to have a RAID option; even after I installed mdadm and other RAID tools, no RAID options appeared in the graphical drive configuration screen. Both the text and graphical installers let you manually partition your hard drive(s), but the text installer has extra tools that let you easily set up RAID arrays.
Once you get to the hard drive configuration stage, be sure to choose Manual install instead of Guided - use entire disk.
Because all of the drives are fresh, with no previous installation, we need to create a partition table on each individual drive. Choose each drive and hit Enter. Once you select a drive, you’ll be prompted ‘Create a new empty partition table on this device?’ Choose Yes. Note that you will have to repeat this process for every drive in the array that has never been initialized before. At this point, I set up a swap partition on each of the 4 drives (this may be part of the reason my array ultimately failed when I removed a drive).
In the remaining space on each drive, I created a ‘physical volume for RAID’ partition. To do this, select the FREE SPACE entry, then choose Create a new partition. Select Continue if you’re satisfied with the size. For the Type of partition, be sure to use Primary partition for the partition you’re going to use for RAID. On the screen that asks ‘Use as’, the default is Ext4 journaling file system; change this to ‘physical volume for RAID’.
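For anyone doing this from an already-running Linux system instead of the installer, the same partitioning can be sketched with parted. This is only a sketch, not the installer’s exact steps: /dev/sdb is a stand-in for one of the array drives, the swap partitions I mentioned are omitted, and these commands destroy existing data, so double-check the device name before running anything like this.

```shell
# Assumes /dev/sdb is one of the array drives -- adjust before running!
# Create a fresh MS-DOS partition table (the installer's
# "new empty partition table" step).
parted -s /dev/sdb mklabel msdos

# Use all of the space for one primary partition.
parted -s /dev/sdb mkpart primary 0% 100%

# Mark partition 1 for RAID use (the "physical volume for RAID" step).
parted -s /dev/sdb set 1 raid on
```

You would repeat this for each of the four drives in the array.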
When all drives are set up, navigate to the Configure Software RAID option just under Guided partitioning. You’re given one last chance to make changes to your configuration at this point. If you’re happy with your drive layout, choose Yes to the ‘Write the changes to the storage devices and configure RAID’ question.
The next step is to Create a MD device (multiple device). As I understand it, if you already have Linux installed and configured on a system, you can use mdadm to perform the same steps from here on. Finally, we’re given a choice of the type of RAID we want. In a 4-drive configuration, I had the choice of RAID 0, 1, 5, 6, or 10. I chose RAID 10. The next step is another point where I got really confused. As I understand RAID 10, I should have had 500GB (2 x 250GB) in my original configuration, but when I chose to use 2 drives as active and 2 drives as spare, I had only 250GB available. I had assumed the 2 spares would act as the mirrors in the array, which turned out to be incorrect: a spare is an idle standby drive, and the mirror drives count as active devices. I ended up choosing all 4 drives as active, and 0 as spares, to make the full 500GB RAID partition available.
(Thanks to Mion and koala_man from the #linux channel on the freenode IRC network for confirming that setting 4 drives as active and 0 as spares was the right choice.) Select Continue when you’re done selecting the active partitions. Note: in my screenshot, I have both swap and RAID partitions.
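On an already-installed system, the equivalent of this installer step is a single mdadm command. A sketch only, assuming the four RAID partitions ended up as /dev/sda2 through /dev/sdd2 (the partition numbers will differ depending on where your swap partitions landed):

```shell
# Create a RAID 10 array from four active partitions, no spares --
# the same choice made in the installer above.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

# Record the array in mdadm's config so it can be assembled at boot
# (path as used on Debian/Ubuntu systems).
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```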
At this point, we’re returned to the ‘Create a MD device’ menu we were at earlier. Select Finish to move on. You’ll see the ‘starting up the partitioner’ screen as Kubuntu detects our newly created MD RAID device.
Now the partition disks menu shows a RAID 10 device. In my virtual machine example, that device is 8.7GB (which makes sense, since 2 x 4.367GB = 8.734GB). Two drives are striped together, while the other 2 mirror the striped set.
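Once the array exists, you can confirm its level, size, and member disks from a shell. A sketch, assuming the device was created as /dev/md0:

```shell
# Quick overview of all software RAID devices and their sync status.
cat /proc/mdstat

# Detailed view: RAID level, array size, and the state of each member.
mdadm --detail /dev/md0
```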
We’re now editing partition #1 of RAID10 device 0. By default, the partition is set to ‘Use as: do not use’; change this to the Ext4 journaling file system, set the mount point to / (the root file system), then select Done setting up the partition.
We’re almost done setting up the RAID10 array. On the next screen select ‘Finish partitioning and write changes to disk’. We have one last chance to make changes before everything gets written to disk. Once you Write the changes to disk, your partitioning is done and your regular Linux install continues.
I was excited to see Kubuntu boot up after setting up the RAID10 array for the first time. Then I removed a physical drive and the following message appeared:
WARNING: There appears to be one or more degraded RAID devices * … Do you wish to start the degraded RAID?
I had just enough time to snap a picture of the screen before it seemed to choose automatically and fail on me. At that point, I shut down the machine and put back the drive I had just taken out; sadly, all I got was a grub prompt.
It looks like I might have had a chance to fix the array at the degraded-RAID screen, but it passed by before I could respond, so I never got to choose. I mentioned earlier that I’m not experienced with RAID (though having access to lots of drives affords this experience). So I’m calling on the experts out there to help this article along for next month: can this be fixed from the grub prompt?
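I can’t say whether my particular array was still recoverable, but for reference, these are the standard mdadm commands for starting a degraded array and re-adding a drive, run from a rescue or live environment rather than the grub prompt itself. A sketch only; /dev/md0 and /dev/sda2 are stand-ins for the real array and partition names:

```shell
# Assemble known arrays and start them even if a member is missing.
mdadm --assemble --scan --run

# After physically reconnecting the drive, add its partition back;
# mdadm then rebuilds the mirror onto it.
mdadm /dev/md0 --add /dev/sda2

# Watch the rebuild progress.
cat /proc/mdstat
```

Ubuntu of this era could also be told to boot from a degraded array by passing bootdegraded=true on the kernel command line, which may be relevant to the screen that timed out on me.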