On Wed, Aug 11, 2010 at 3:06 PM, Bill Bogstad <bogstad@pobox.com> wrote:
> On Wed, Aug 11, 2010 at 1:55 PM, Ian Stokes-Rees
> <ijstokes@crystal.harvard.edu> wrote:
>>
>> Diligent readers will recall the thread a few weeks ago on slow disk
>> performance with a PATA XRaid system from Apple (HFS, RAID5). Having
>> evaluated the situation, we're looking to get a new file server that
>> combines some fast disk with some bulk storage. We have a busy web
>> server that is mostly occupied with serving static content (read-only
>> access), some dynamic content (Django portal with mod_python/httpd),
>> and then scientific compute users who do lots of writes (including a
>> 100-core cluster).
>>...
>> [LOTS of details about hardware ideas, etc.]
>>...
>>
>> 4. How can we estimate our IOPS and throughput requirements?
>
</div>I think this is THE most important question. All the other answers<br>
are completely dependent on this one. You need to attach specific<br>
numbers (with error bars) to current usage as well as estimate future<br>
changes. "busy web server", "some dynamic content", "lots of writes"<br>
are based on experience/context.<br>
<br>
I wish I could give you specific suggestions on tools to gather this<br>
information, but that's going to be very dependent on your situation.<br>
One generic thing, I would suggest is to analyze the log files<br>
for your web server. You want to get an idea on what the "working<br>
set" size is for the web site. If the number is small enough you<br>
should consider memory caching or possibly SSDs in the web server<br>
itself rather then doing something on the file server.<br>
<br>
Good Luck,<br>
> Bill Bogstad

I agree with Bill here. Knowing what your workloads require is the
first question to answer when trying to spec a solution like this. The
only real way to get a handle on it is to look at historical data, if
you have it: either RRD graphs from a tool like Cacti, Munin, or
Zenoss, or output from a data collector like sar.
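
To make Bill's log-analysis suggestion concrete, here is a rough
sketch of a working-set estimate, assuming an Apache combined-format
access log and treating the response-size field as an approximation
of each file's size (adjust the field indexes for your own LogFormat):

    #!/usr/bin/env python
    # Rough working-set estimate from an Apache combined-format log.
    # Assumes field 7 (0-indexed: 6) is the request path and field 10
    # (0-indexed: 9) is the response size in bytes.
    import sys

    sizes = {}
    for line in open(sys.argv[1]):
        f = line.split()
        try:
            path, nbytes = f[6], f[9]
        except IndexError:
            continue
        if nbytes != '-':
            # Keep the largest response seen per path; partial (206)
            # responses under-report the true file size.
            sizes[path] = max(sizes.get(path, 0), int(nbytes))

    total = sum(sizes.values())
    print("%d unique paths, ~%.1f MB working set"
          % (len(sizes), total / 1e6))

If that total fits comfortably in RAM, caching on the web server
itself starts to look very attractive, as Bill says.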

If you don't have historical data to look at, you can use iostat,
from the sysstat package, to see how much data each device has read
and written since the last reboot, and use those totals to at least
estimate your read/write ratio. You can also run iostat or sar for a
while to gather some shorter-term min, max, and average figures.
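
If sysstat isn't installed, the same since-boot totals are available
directly from /proc/diskstats on any Linux box. A minimal sketch,
assuming the kernel's usual 512-byte unit for the sectors-read and
sectors-written fields:

    #!/usr/bin/env python
    # Read/write totals since boot, from /proc/diskstats (Linux).
    # After major/minor/name, fields 6 and 10 (1-indexed) are sectors
    # read and written; the kernel counts them in 512-byte units.
    for line in open('/proc/diskstats'):
        f = line.split()
        name, rsect, wsect = f[2], int(f[5]), int(f[9])
        if rsect or wsect:
            rd, wr = rsect * 512, wsect * 512
            ratio = float(rd) / wr if wr else 0.0
            print("%-8s %8d MB read  %8d MB written  r/w %.2f"
                  % (name, rd // 2**20, wr // 2**20, ratio))

(If you do have sysstat, iostat -dk prints the same since-boot totals
in its first report.)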

Figuring out how many IOPS your workloads require is a tougher nut to
crack, especially if your existing environment has severe bottlenecks,
which could be in the network, storage, CPU, etc. Historical data is
helpful here too: at minimum you can find the min, max, and average
values over a given time frame, and hopefully identify your existing
bottlenecks clearly enough to know that you're attacking the right
problem.
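
iostat -x 5 will print r/s and w/s per device directly if sysstat is
available. Failing that, here's a crude sampler along the same lines
as the sketch above (Linux-only, takes one device name such as "sda"
as its argument) that you can leave running during your busiest hours:

    #!/usr/bin/env python
    # Crude IOPS sampler: polls /proc/diskstats every second and
    # prints completed reads+writes per interval for one device.
    import sys, time

    def io_count(dev):
        for line in open('/proc/diskstats'):
            f = line.split()
            if f[2] == dev:
                return int(f[3]) + int(f[7])  # reads + writes completed
        raise SystemExit("device %s not found" % dev)

    dev, prev = sys.argv[1], None
    while True:
        cur = io_count(dev)
        if prev is not None:
            print("%s: %d IOPS" % (dev, cur - prev))
        prev = cur
        time.sleep(1)

The peaks, not the averages, are what the new box has to absorb.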

--
David