Hi
> > > I would suggest striping the disks at an OS level... I tried it the
> What stripe size do you guys use? I'm gathering myself up to do this to
> the two 9GB drives on our Squid server, and I'm not sure whether the
> default 1MB size would be the optimal configuration.
umm - no, as I see it.
We stripe using Linux with the default block size, though here is some
mail which talks about the whole thing (I didn't apply this, since I saw
it after the disks were striped, and we aren't being throttled by disk I/O
anyway).
A 1MB stripe is only going to be very inefficient if it means the system
reads 1MB for every 8KB block you need... I would guess it doesn't, and
that it simply means the data is laid out in 1MB stripes, which is
basically irrelevant.
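For reference, this is roughly the sort of configuration involved. It's only
a sketch assuming the newer raidtools /etc/raidtab format; the device names
and the 32KB chunk size are placeholders, not our actual setup:

  # /etc/raidtab -- two-disk RAID-0 (striped) array (example only)
  raiddev /dev/md0
      raid-level            0
      nr-raid-disks         2
      persistent-superblock 1
      chunk-size            32        # stripe unit (chunk size) in KB
      device                /dev/sda1
      raid-disk             0
      device                /dev/sdb1
      raid-disk             1

  # then build the array:
  mkraid /dev/md0

The chunk-size line is where the stripe unit gets set; the rest just says
which partitions make up the array.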
---------------------------
From: "Leonard N. Zubkoff" <lnz@dandelion.com>
> RAID-0 is stable. I would recommend testing an ext2 filesystem with
> a 4kB block size, rather than the default 1kB, and with RAID chunk sizes
> of 4kB to 32kB.
I definitely concur that the 4KB block size is critical to getting maximum
sequential I/O performance, even for a single SCSI disk. You should also be
aware of the -R option to mke2fs 1.10. It is designed to inform mke2fs of the
stripe width so that the file system metadata (block and inode bitmaps) does
not all end up on a single disk, thereby creating an unbalanced I/O load. Ted
put this in in response to my noticing the problem of unbalanced I/O and
complaining to him about it. You also want to use mke2fs 1.10 because for 4KB
file systems it will default to 32768 blocks per group rather than the 8192
blocks per group earlier versions allowed. For example, to build a 4KB file
system with a raid chunk size of 64KB, you want to use the command
mke2fs -b 4096 -R stride=16
The stride=16 parameter informs mke2fs that the stripe width (chunk size) is
64KB = 4KB * 16.
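To spell out the arithmetic (my own examples, not from Leonard's mail, and
/dev/md0 is just a placeholder device): stride is simply the chunk size
divided by the filesystem block size.

  # stride = chunk size / block size
  mke2fs -b 4096 -R stride=16 /dev/md0    # 64KB chunks / 4KB blocks = 16
  mke2fs -b 4096 -R stride=8  /dev/md0    # 32KB chunks / 4KB blocks = 8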
I've been doing some experiments on I/O performance using two or three striped
Quantum Atlas II Wide 2GB drives. If there is interest in the results, I'll be
happy to share them here. I'm testing both with 2.0.30 and using a modified
fs/buffer.c from 2.0.29. There have been so many proposed patches for 2.0.31
that I don't know where to begin, so I want to see first if there's much
difference between these two versions.
Received on Mon Sep 15 1997 - 11:54:10 MDT