[BBLISA] Fileserver opinion
Daniel Feenberg
feenberg at nber.org
Wed Aug 11 17:32:46 EDT 2010
On Wed, 11 Aug 2010, Ian Stokes-Rees wrote:
>
> Diligent readers will recall the thread a few weeks ago on slow disk
> performance with a PATA XRaid system from Apple (HFS, RAID5). Having
> evaluated the situation, we're looking to get a new file server that
> combines some fast disk with some bulk storage. We have a busy web
> server that is mostly occupied with serving static content (read only
> access), some dynamic content (Django portal with mod_python/httpd), and
> then scientific compute users who do lots of writes (including a 100
> core cluster).
>
> We have about a $10k budget (ideally $8k). The current plan looks
> roughly like this:
>
> AMD quad socket MB
> 1x12-core AMD CPU
> 8 GB RAM
> 2x160 GB 7200 RPM SATA drives for system software
> 11x300 GB 15000 RPM SAS2 fast storage (RAID10 + 1 hot spare, 1.5 TB volume)
> 5x2 TB 7200 RPM SATA drives (RAID10 + 1 hot spare, 4 TB volume)
>
> A 3U chassis will be filled, and the 4U chassis will have some empty bays.
>
> We can also upgrade processors and RAM as funds become available and the
> need arises.
>
> This will support a compute cluster (~100 cores), 10-20 users (typically
> 3-4 active), and a busy web server.
>
> Besides the obvious question of whether this setup is sensible/cost
> efficient (mixing two kinds of storage, etc.), the main unknowns we have
> are:
I don't have any advice, but I have a few questions:
0) What is the drive interface card (or cards)? Does it handle the RAID in
hardware, or is that done in software?
1) Is it obvious that one needs 12 cores to fill a single gigabit Ethernet
link? Or are there more Ethernet links? If more, how do you balance the
load across them? (Not a rhetorical question - we have systems with
multiple Ethernet interfaces and don't have any idea how to use them
effectively.)
2) With data striped over 5 drives, is there still a performance
difference between 7,200 rpm and 15,000 rpm drives? Do you have the
ability to experiment before putting out the cash?
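For what it's worth, the raw rotational-latency gap between the two drive
speeds is easy to put a number on. This is only a sketch - it assumes the
average access waits half a platter revolution and ignores seek time,
caching, and striping, which is exactly why measuring the real workload
matters:

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency: half a revolution, in milliseconds."""
    revs_per_sec = rpm / 60.0
    return (0.5 / revs_per_sec) * 1000.0

for rpm in (7200, 15000):
    print(f"{rpm} rpm: {avg_rotational_latency_ms(rpm):.2f} ms")
# 7200 rpm:  4.17 ms
# 15000 rpm: 2.00 ms
```

So the 15,000 rpm drives roughly halve per-access rotational latency, but
whether that survives striping over 5 spindles is an empirical question.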
3) How long do you think it will take to rebuild a volume after a drive
failure? Do you need such large volumes? If the volumes are smaller, the
rebuild times are shorter. I have no experience with 15,000 rpm SAS
drives, but is the system really usable during a rebuild? Now, if the
drives have a 500,000-hour mean time to failure and you have 10 of them,
that is still more than 5 years mean time to a rebuild, but somehow I
don't believe 15,000 rpm drives are really that durable in real life.
Will the system be down a day a year for RAID rebuilding?
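The arithmetic behind that estimate, plus an optimistic rebuild-time
guess, looks like this. It assumes drive failures are independent and
that a rebuild is limited by one drive's sequential rate - the 100 MB/s
figure is my own assumption for a 7,200 rpm SATA drive, and real rebuilds
under load will run slower:

```python
HOURS_PER_YEAR = 24 * 365  # 8760

def mean_years_between_failures(mtbf_hours, n_drives):
    """With n independent drives, expected time to the array's first
    failure scales down by n."""
    return mtbf_hours / n_drives / HOURS_PER_YEAR

def rebuild_hours(capacity_gb, seq_mb_per_s):
    """Optimistic rebuild time: one full sequential pass over a drive."""
    return capacity_gb * 1000.0 / seq_mb_per_s / 3600.0

print(f"{mean_years_between_failures(500_000, 10):.1f} years between failures")
print(f"{rebuild_hours(2000, 100):.1f} hours to rebuild a 2 TB drive")
# 5.7 years between failures
# 5.6 hours to rebuild a 2 TB drive
```

Even in this best case, a few rebuilds over the server's lifetime is the
expectation, not a surprise - and degraded performance during a rebuild
lasts for hours.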