WARNING: This post has been marked as obsolete and may be incorrect. It is kept for archival purposes only.
Ok, FreeBSD still lacks a decent RAID5 implementation in its core system (some people use the third-party geom_raid5 module that you can find in FreeNAS), but with ZFS promoted to production status in FreeBSD 8 we can now use that instead.
ZFS supports various RAID levels. We will use RAID5 in this example; I'll explain how to use RAID6 later in the article.
Ok, for my example I will use 6 x 2TB hard drives freshly installed in my system (listed as ad10 ad12 ad14 ad16 ad18 ad20 in dmesg) to build a RAID5 set, giving 5 x 2TB of usable space and able to survive a single disk failure without loss of data. Remember, you need a minimum of 3 disks to do RAID5, and you get N-1 capacity (N-2 for RAID6).
First, we need to load ZFS into the system... add the following into your /boot/loader.conf:
vfs.zfs.prefetch_disable="1"
zfs_load="YES"
This will cause ZFS to load into the kernel on every boot. Prefetch is already disabled by default on machines with less than 4GB of RAM, but it's safe to set it explicitly anyway. I've found this produces far more stable results on live systems, so go with it 😉
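If you want to check the prefetch setting actually took effect after a reboot, the value should be visible through sysctl once the ZFS module is loaded (just a quick sanity check, not a required step):
sysctl vfs.zfs.prefetch_disable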
Next, add the following into your /etc/rc.conf file:
zfs_enable="YES"
This tells the rc system to re-mount your ZFS filesystems and apply their settings on every boot.
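If you don't fancy rebooting just to pick up the loader.conf change, you can also load the module by hand right now (assuming it isn't already loaded):
kldload zfs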
Now we will add all 6 disks into a RAID5 set called 'datastore'. Run the following as root:
zpool create datastore raidz ad10 ad12 ad14 ad16 ad18 ad20
'raidz' is ZFS's name for RAID5; to do RAID6 you would use 'raidz2' instead. You can confirm the command was successful with zpool status as follows:
  pool: datastore
 state: ONLINE
 scrub: none
config:

        NAME         STATE     READ WRITE CKSUM
        datastore    ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            ad10     ONLINE       0     0     0
            ad12     ONLINE       0     0     0
            ad14     ONLINE       0     0     0
            ad16     ONLINE       0     0     0
            ad18     ONLINE       0     0     0
            ad20     ONLINE       0     0     0

errors: No known data errors
This shows the RAID set is online and healthy. When there are problems, it will drop to DEGRADED state. If you have too many disk failures, it will show FAULTED and the entire array is lost (in our example we would need to lose 2 disks to cause this, or 3 in a RAID6 setup).
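As a side note, if you just want a quick health check without the full layout, zpool status -x only reports pools that have problems (it prints a one-line "all pools are healthy" message otherwise):
zpool status -x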
Now we will set the pool to recover automatically when a disk is replaced. Run the following as root:
zpool set autoreplace=on datastore
This will cause the array to re-add a new disk automatically when you replace a failed one in the same physical location (e.g. if ad16 fails and you swap in a new disk, ZFS will bring it into the pool and start rebuilding onto it).
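If the automatic replace doesn't kick in for whatever reason (or you prefer to do it by hand), you can point ZFS at the new disk yourself; ad16 here is just the example disk from above:
zpool replace datastore ad16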
You will now notice that you have a /datastore folder with the entire pool's storage available to it. You can confirm this with zfs list as follows:
NAME        USED  AVAIL  REFER  MOUNTPOINT
datastore  2.63T  6.26T  29.9K  /datastore
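The pool root is perfectly usable on its own, but if you want separate mount points or different settings per area you can carve it into child filesystems. A quick sketch (the 'backups' name and the compression setting are just examples, not part of the original setup):
zfs create datastore/backups
zfs set compression=on datastore/backups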
You now have a working RAID5 (or RAID6) software RAID setup in FreeBSD.
Generally, to set up RAID6 instead of RAID5 you replace the word raidz with raidz2. RAID5 allows for a single disk failure without data loss; RAID6 allows for a double disk failure without data loss.
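For example, building the same six disks as a RAID6-style set from scratch would look like this (giving 4 x 2TB usable instead of 5):
zpool create datastore raidz2 ad10 ad12 ad14 ad16 ad18 ad20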
After a disk failure, replace the failed drive and run zpool status to check that the new disk has been picked up and that ZFS is rebuilding ("resilvering") onto it; if the pool hasn't started on its own, use zpool replace to kick it off. Rebuilding takes time (it rebuilds based on used data, so the fuller your array the longer the rebuild time!). Once the resilver process has completed, your array will return to ONLINE status and be fully protected against disk failures once again; running zpool scrub datastore afterwards is a good way to verify everything on the pool checks out.
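Putting that together, a typical recovery might look something like this, with ad16 standing in for whichever disk failed:
zpool status datastore
zpool replace datastore ad16   # only needed if autoreplace didn't pick the new disk up
zpool status datastore         # watch the resilver progress
zpool scrub datastore          # optional verification once the resilver has finished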
As this process can take (literally) hours to complete, some people prefer a RAID6 setup to allow for a second disk failure during those hours. This is a decision you should make based on the importance of the data you will store on the array!