One week ago, my Linux NAS decided to spam me with emails while I was sitting in a bar. Nothing is more disturbing, while having a nice chat over a beer, than your home server screaming about a failed RAID. I have to admit, I knew the RAID would fail soon, since one of the HDDs had already been marked bad by SMART for some weeks…
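If you want to keep an eye on a suspicious drive yourself, smartmontools can query the drive's SMART status directly. A minimal sketch, assuming smartmontools is installed and the drive sits at /dev/sda (adjust for your system):

```
# Quick overall health verdict (PASSED/FAILED):
smartctl -H /dev/sda

# Full attribute dump; watch Reallocated_Sector_Ct and Current_Pending_Sector:
smartctl -a /dev/sda
```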
Anyway, I shut down the server remotely and immediately ordered two new Seagate 4 TB NAS drives. They arrived soon, but then my struggle began. I really had no idea how to replace the HDDs in the RAID array and then grow the RAID to the new size of the HDDs, since my old drives were only 2 TB.
After some Google research I knew the steps to perform, and the procedure is fairly easy!
(Be aware that the identifiers on your system might be different! Perform these steps at your own risk and make a backup before you change anything on your system!)
- Shut down the server and replace the failed HDD with the first new HDD.
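Before powering down, it helps to double-check which device actually failed so you pull the right disk. A small sketch (device names are assumptions; check your own system):

```
# Show array details; the failed member is listed as "faulty":
mdadm --detail /dev/md0

# Map the kernel name (e.g. /dev/sda) to the drive's serial number,
# so you can match it against the label on the physical disk:
ls -l /dev/disk/by-id/
```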
- Now it's time to create a new partition on the drive. Since my partition will be greater than 2 TB, I need to use a GPT layout on the drive. This can be achieved with the parted command.
```
thomas@omv: parted /dev/sdb
GNU Parted 2.3
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
```
Now I have the GPT in place and can add a partition. I'll use the whole drive as one big ext4 partition:
```
(parted) mkpart ext4 0% 100%
(parted) print
Model: ATA ST4000VN000-1H41 (scsi)
Disk /dev/sdb: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  4001GB  4001GB  ext4
(parted) quit
```
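If you prefer to script this instead of using the interactive prompt, parted can take the same commands on the command line. A sketch under the same assumptions as above (double-check the device name first, since this also wipes the disk):

```
# Create a GPT label and one partition spanning the whole drive, non-interactively:
parted --script /dev/sdb mklabel gpt mkpart ext4 0% 100%

# Verify the result:
parted --script /dev/sdb print
```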
- Add the new HDD to the RAID array and wait for the re-sync. This can take a long time; in my case about 3.5 hours.
```
thomas@omv: mdadm /dev/md0 --add /dev/sdb1
thomas@omv: cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[3] sda1[2]
      1953382208 blocks super 1.2 [2/1] [_U]
      [=======>.............]  recovery = 35.2% (688880832/1953382208) finish=120.2min speed=175217K/sec

unused devices: <none>
```
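While waiting, you can watch the rebuild progress, and the kernel's rebuild speed limit can be raised if the machine is otherwise idle. A sketch; the speed value is just an example:

```
# Refresh the rebuild status every 10 seconds:
watch -n 10 cat /proc/mdstat

# Raise the minimum rebuild speed (in KiB/s) so background throttling
# doesn't slow the re-sync down on an idle box:
echo 100000 > /proc/sys/dev/raid/speed_limit_min
```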
- After the re-sync, mark the remaining old HDD as faulty and remove it from the array with the following commands:
```
thomas@omv: mdadm /dev/md0 -f /dev/sda1
mdadm: set /dev/sda1 faulty in /dev/md0
thomas@omv: mdadm /dev/md0 -r /dev/sda1
mdadm: hot removed /dev/sda1 from /dev/md0
```
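If you plan to reuse the removed drive elsewhere, it's a good idea to wipe its RAID metadata so it doesn't get picked up and re-assembled later. A small sketch; /dev/sda1 here refers to the old drive while it is still attached:

```
# Erase the md superblock on the old member so mdadm no longer
# recognizes it as part of an array:
mdadm --zero-superblock /dev/sda1
```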
- Now shut down the system again and replace the second old HDD with a new one.
- Repeat steps 2 and 3 for the new drive.
- Now it’s time to grow the array. Let’s have a look first:
```
thomas@omv: cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[2] sdb1[3]
      1953382208 blocks super 1.2 [2/2] [UU]

unused devices: <none>
```
Ok, looks great. Notice line 4 of the output, where the block count is displayed: 1953382208 blocks of 1 KiB each equal roughly 2 TB, or 1.82 TiB. Now let's grow the array with mdadm:
```
thomas@omv: mdadm --grow /dev/md0 --size=max
mdadm: component size of /dev/md0 has been set to 3906886471K
unfreeze
thomas@omv: cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[2] sdb1[3]
      3906886471 blocks super 1.2 [2/2] [UU]
      [============>........]  resync = 58.2% (2273807926/3906886471) finish=190.3min speed=128079K/sec

unused devices: <none>
```
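As a quick sanity check on those numbers (mdstat counts 1 KiB blocks), you can let the shell do the math:

```
# Old size: 1953382208 KiB -> bytes (~2.0 TB, 1.82 TiB):
echo $((1953382208 * 1024))   # 2000263380992

# New size: 3906886471 KiB -> bytes (~4.0 TB, 3.64 TiB):
echo $((3906886471 * 1024))   # 4000651746304
```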
- So, now the RAID device is bigger, but the partition on it is still 2 TB. So we have to grow this as well:
```
thomas@omv: parted /dev/md0
GNU Parted 3.2
Using /dev/md0
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Linux Software RAID Array (md)
Disk /dev/md0: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  2000GB  2000GB  ext4
(parted) resizepart 1 100%
(parted) print
Model: Linux Software RAID Array (md)
Disk /dev/md0: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  4001GB  4001GB  ext4
(parted) quit
```
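One thing the parted session doesn't do: growing the partition does not grow the ext4 filesystem inside it, which will still report the old size. A sketch to finish the job; the partition name /dev/md0p1 is an assumption, check lsblk for yours:

```
# Grow the ext4 filesystem to fill the enlarged partition.
# resize2fs can do this online, while the filesystem is mounted:
resize2fs /dev/md0p1

# Verify the new capacity:
df -h
```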
- And we are done!
I repeated some steps in a virtual machine for this blog post, so if you find some mismatches in the sizes etc., it's most likely from that. Feel free to drop a comment for errata requests! 🙂