@tpowellcio
Created May 24, 2016 14:29
Setup Software RAID 5

# Software RAID

  • Installed four 4 TB drives in hagrid using the motherboard SATA ports
    • Set the drives up as software RAID 5 and initialised the array
      • RAID-5 has become extremely popular among Internet and e-commerce companies because it gives administrators a safe level of fault tolerance without sacrificing the large amount of disk space a RAID-1 configuration requires or suffering the bottleneck inherent in RAID-4. RAID-5 is especially useful in production environments where data is replicated across multiple servers, shifting part of the need for disk redundancy away from any single machine. With hot-spares, a RAID-5 array can replace a failed drive with a spare without user intervention. Linux software RAID also supports hot-swap: the ability to remove a failed drive from a running system and replace it with a new working drive, so the replacement can happen without a reboot. Hot-swap is useful in two situations. First, you might not have enough room in your case for the extra disks the hot-spare feature needs, so when a disk fails you may want to replace it immediately to bring the array out of degraded mode and begin reconstruction. Second, even if you do have hot-spares in the system, it is useful to replace the failed disk with a new hot-spare in anticipation of future failures. (A short mdadm sketch of both operations is included under step 4 below.)
        1. First we will check all the detected HDDs. The four drives (/dev/sdb, /dev/sdc, /dev/sdd and /dev/sde) will be used to build the RAID 5 device (/dev/md0).
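
          As an optional sanity check (not required for the steps below), the detected drives and their sizes can be listed with lsblk; output will vary by system,

          # lsblk -d -o NAME,SIZE,MODEL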

        2. Now we will make a RAID partition on each of the four drives (sdb, sdc, sdd, sde) one by one, starting with "/dev/sdb",

          # parted /dev/sdb
          (parted) mklabel gpt
          (parted) unit TB
          (parted) mkpart primary 0.00TB 4.00TB
          (parted) print
          (parted) quit
          
        3. Do the same with the other three HDDs, i.e. /dev/sdc, /dev/sdd and /dev/sde, so that each drive ends up with a single RAID partition.
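
          A minimal scripted sketch of the same thing, using parted's non-interactive mode and letting each partition span the whole disk (0% to 100% is equivalent in effect to the explicit sizes above),

          # for d in sdc sdd sde; do parted -s /dev/$d mklabel gpt mkpart primary 0% 100%; done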

        4. Next we will create the software RAID 5 device "/dev/md0" from the four partitions,

           # mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
          

          Note : To check /dev/md0 status,

          # mdadm --detail /dev/md0
          

          Note : Wait until the rebuild status reaches 100%. To monitor it continuously without re-running the command,

          # watch -n 10 mdadm --detail /dev/md0
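
          The kernel also exposes a compact progress view in /proc/mdstat, which can be watched the same way,

          # watch -n 10 cat /proc/mdstat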
          

          Once the rebuild reaches 100%, the array state should show "clean" and all four disks should be listed as "active sync".
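
          Note : To have the array assembled automatically at boot, it is common to append its definition to mdadm's configuration file. On CentOS/RHEL this is typically /etc/mdadm.conf; the exact path can vary by distribution,

          # mdadm --detail --scan >> /etc/mdadm.conf

          Note : As mentioned earlier, hot-spares and hot-swap are handled with mdadm. A minimal sketch, assuming a hypothetical extra partition /dev/sdf1 prepared like the others; when added to a healthy array it becomes a spare,

          # mdadm --manage /dev/md0 --add /dev/sdf1

          To hot-swap a failed member (here /dev/sdb1 as an example), mark it failed, remove it, then add the replacement back in,

          # mdadm --manage /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
          # mdadm --manage /dev/md0 --add /dev/sdb1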

        5. Now there are two ways to use it: either put LVM on top of it, or format it directly and mount it on a mount point. Here we format it directly,

          # mkfs -t ext4 /dev/md0
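
          Optionally, ext4 can be told about the RAID geometry so writes align to the stripe. With mdadm's default 512K chunk and 4K ext4 blocks on a 4-disk RAID 5 (3 data disks), stride = 512/4 = 128 and stripe-width = 128 × 3 = 384; recent mkfs.ext4 versions usually detect this automatically,

          # mkfs.ext4 -E stride=128,stripe-width=384 /dev/md0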
          
        6. Then, to mount "/dev/md0", we will make a new mount point "/data/Raid5",

          # mkdir -p /data/Raid5
          
          # mount /dev/md0 /data/Raid5  (Temp Mount)
          
        7. To make it permanent, edit the "/etc/fstab" file and add the line below,

          /dev/md0 /data/Raid5 ext4 defaults 1 2
          

          Note : Don't make any other modifications; a mistake in /etc/fstab can drop the system into emergency mode on the next reboot.
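
          A slightly more robust variant is to mount by filesystem UUID, since md device numbering can occasionally change between boots. Get the UUID with blkid and use it in /etc/fstab in place of /dev/md0,

          # blkid -s UUID -o value /dev/md0

          UUID=<uuid-printed-above> /data/Raid5 ext4 defaults 1 2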

          To check that you edited it properly, execute "mount -a"; if it reports any error, correct the entry before rebooting.

          REF: Raid on CentOS, CentOS 6.4 software raid & LVM. REF: Implement LVM over RAID5
