>16tb filesystems on linux
poweredge at antibozo.net
Thu Aug 26 21:14:06 CDT 2010
On 2010-08-26 18:30, Nick Stephens wrote:
> I actually gave that a shot myself but didn't think it was available yet
> due to getting the same error message. Now that I think about it
> though, it could be a different issue I'm encountering.
> [root at localhost ~]# mkfs.ext4dev -T news -m0 -L backup -E
> stride=16,stripe-width=208 /dev/sda1
> mke2fs 1.41.12 (17-May-2010)
> mkfs.ext4dev: Size of device /dev/sda1 too big to be expressed in 32 bits
> using a blocksize of 4096.
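For context, that error is the 32-bit block-number limit in this
generation of e2fsprogs: with 4 KiB blocks, the largest filesystem
mke2fs 1.41.x can create works out to 16 TiB.

```shell
# mke2fs 1.41.x tracks block numbers in 32 bits, so the size cap
# with a 4 KiB blocksize is 2^32 blocks * 4096 bytes:
echo "$(( 2**32 * 4096 / 2**40 )) TiB"   # prints "16 TiB"
```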
Another reason to use LVM: you've put a partition table on your giant
block device. Did you align the start of the first partition with your
RAID stripe size? If not, then many of your filesystem blocks will span
two disks, meaning reading one of those blocks requires two disks to seek
instead of one. If you make the whole block device an LVM physical
volume instead, you won't have to worry about that (unless you have a
stripe size > 64 kB, and in that case, you can override the default PV
metadata size to make it a multiple of your RAID stripe size).
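A rough sketch of that alignment, working from the geometry implied by
your mkfs line (stride=16 and stripe-width=208 at 4 KiB blocks means
64 KiB chunks and 13 data disks; the device name here is made up):

```shell
# 64 KiB chunk * 13 data disks = 832 KiB full RAID stripe:
CHUNK_KB=64
DATA_DISKS=13
STRIPE_KB=$(( CHUNK_KB * DATA_DISKS ))
echo "full stripe: ${STRIPE_KB} KiB"   # prints "full stripe: 832 KiB"

# Recent LVM2 can align the start of the PV data area directly
# (hypothetical device; on older LVM2, oversize --metadatasize instead):
# pvcreate --dataalignment ${STRIPE_KB}k /dev/sdb
# pvs -o pv_name,pe_start /dev/sdb   # verify where the data area begins
```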
> The MD1000 is populated with (15) 2TB 7200rpm SAS drives in a RAID-5
> with 1 hotspare (leaving 13 data disks). I know that conventional
> wisdom says that raid5 is a poor choice when you are looking for
> performance, but localized benchmarking has proven that in our scenario
> the total-size gains acquired with the striping outweigh the redundancy
> provided with RAID-10 (since we are unable to get significant
> performance increases).
Consider creating two 7-disk RAID5s instead of a single 14-disk RAID5.
This will double your redundancy, and you can still stripe over all 14
disks using LVM.
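Something like this (device names are hypothetical, and I'm assuming
you keep the 64 KiB chunk):

```shell
# Each 7-disk RAID5 has 6 data disks, so the per-array full stripe is:
echo "$(( 64 * 6 )) KiB per-array full stripe"   # prints "384 KiB ..."

# Stripe one LV across both arrays (lvcreate -I has historically
# required a power of two, so match the 64 KiB chunk rather than
# the 384 KiB full stripe):
# pvcreate /dev/sdb /dev/sdc
# vgcreate backup_vg /dev/sdb /dev/sdc
# lvcreate -n backup_lv -i 2 -I 64 -l 100%FREE backup_vg
```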
In addition, if you use slots 0-6 for one RAID5 and 7-13 for the other,
you can dual-connect the MD1000 and have one SAS channel dedicated to
each array.
Or, as others have suggested, consider RAID6.