[BBLISA] Looking for possible options to NetApp storage
John Stoffel
john at stoffel.org
Fri Feb 4 16:53:12 EST 2011
>>>>> "Daniel" == Daniel Feenberg <feenberg at nber.org> writes:
Daniel> On Tue, 1 Feb 2011, Edward Ned Harvey wrote:
>>> From: bblisa-bounces at bblisa.org [mailto:bblisa-bounces at bblisa.org] On
>>> Behalf Of Scott Ehrlich
>>>
>>> Now, say you have available shelving or drive bays. NetApp
>>> shelves/storage additions cost a lot of money. Say you want to add
>>> 40 - 200+ TB. Say you have a need to add more (multi terabyte) disk
>>> storage, be it from NetApp or someone else.
I'm late to this discussion, but I've been looking at Isilon lately
(just bought by EMC, which is probably a negative...) just to keep my
options open. They really do look to have a great system with an easy
way to expand performance and capacity. Price-wise... they're
probably not that cheap.
Now for a semi-rant about any and all disk storage systems:
The big issues with any system like this are in the details. Not so
much in terms of performance or capacity (though NetApp's 16TB limit
on aggregates, and the consequent 12TB limit for a volume, sucks!)
but in terms of manageability and backups. So here are some of my pet
peeves and detail problems.
1. Find disk space hogs
My users generate lots of data, and they don't always notice when a
runaway job creates a 300GB+ file. Or larger. So I've got scripts to
find the top-N-sized files and nag the owners. Works well, but it's
*slow*. I wish ZFS/WAFL/ext4/btrfs/XFS/etc. had support for logging
the 1024 largest files in a filesystem, so it would be simple to find
them. It's not like it would be a large burden to keep up to date
either.
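
For what it's worth, the brute-force version of that top-N scan is
only a few lines. A minimal Python sketch, purely for illustration
(the default N of 1024 and the output format are placeholders, not my
actual script):

    #!/usr/bin/env python
    # Minimal sketch of a brute-force top-N scan: stat every file
    # under a root and keep the N biggest in a min-heap.  This is
    # exactly the slow full-tree walk complained about above.
    import heapq
    import os
    import sys

    def largest_files(root, n=1024):
        """Return the n largest (size, path) pairs, biggest first."""
        heap = []  # min-heap of the n biggest (size, path) seen so far
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    size = os.lstat(path).st_size
                except OSError:
                    continue  # vanished mid-walk or permission denied
                if len(heap) < n:
                    heapq.heappush(heap, (size, path))
                elif size > heap[0][0]:
                    heapq.heapreplace(heap, (size, path))
        return sorted(heap, reverse=True)

    if __name__ == '__main__':
        root = sys.argv[1] if len(sys.argv) > 1 else '.'
        for size, path in largest_files(root, n=20):
            print('%14d  %s' % (size, path))

The min-heap keeps memory bounded at N entries no matter how many
files the walk visits; the lstat() per file is what makes it slow.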
The other tool I run is 'philesight', though I've customized it a bit
for my site and haven't pushed my changes back to the author. Feel
free to nudge me. It's found at http://zevv.nl/play/code/philesight/
and is a very nice way to graphically show where disk space is used.
I track filesystems at four sites across something like 50TB of disk.
As you can imagine, the script takes quite a while to generate its
data. But it's a critical way for my users and me to target our
space-reclamation efforts.
Having better support from the filesystem for these features would be
ideal!
2. Backups. Yes, I know about SnapMirror from the NetApp side and
ZFS send from the Solaris/ZFS camp, but they don't meet my needs,
which are to write to tape and send data offsite under various
retention criteria. Then it all goes to hell in a handbasket when
$WORK gets sued and we have to stop recycling media due to discovery
needs. This is a *huge* pain, and one which people don't address
well.
Single-file or directory restores from snapshots are awesome; can't
live without them. Being able to send snapshots across the WAN to the
DR (Disaster Recovery) site is also a godsend. Until users create 1TB
of junk data, then delete it all along with some other data. Suddenly
I have 2TB of changes to ship across the WAN and my DR window gets
blown to hell. I haven't found any good solution for this.
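
To make the mechanism concrete: on the ZFS side, the nightly DR ship
is just an incremental send between successive snapshots. A rough
sketch in Python, with made-up dataset and host names (a real job
needs locking, retries, and snapshot cleanup):

    #!/usr/bin/env python
    # Rough sketch of a nightly "ship the snapshot delta to DR" job,
    # ZFS flavor.  DATASET and DR_HOST are made-up examples.  Note
    # the problem described above: the incremental stream is as big
    # as the churn between snapshots, and nothing here throttles it.
    import subprocess
    import time

    DATASET = 'tank/home'       # hypothetical dataset
    DR_HOST = 'dr.example.com'  # hypothetical DR-site host

    today = time.strftime('nightly-%Y%m%d')
    yesterday = time.strftime('nightly-%Y%m%d',
                              time.localtime(time.time() - 86400))

    # Take today's snapshot.
    subprocess.check_call(['zfs', 'snapshot',
                           '%s@%s' % (DATASET, today)])

    # Ship only the delta: zfs send -i old new | ssh dr zfs recv -F
    send = subprocess.Popen(
        ['zfs', 'send', '-i',
         '%s@%s' % (DATASET, yesterday),
         '%s@%s' % (DATASET, today)],
        stdout=subprocess.PIPE)
    subprocess.check_call(['ssh', DR_HOST, 'zfs', 'recv', '-F',
                           DATASET],
                          stdin=send.stdout)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError('zfs send failed')

The stream between the two snapshots is however big the churn was, so
one user writing and then deleting a terabyte balloons that night's
transfer with no way to cap it.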
So the ability (with both Isilon and NetApp, as far as I know) to use
NDMP to write backups to tape is a key feature. And having it work
well and fast with your chosen backup software is also key. And it's
another area where you can spend a ton of money getting things right.
Because NO ONE cares about backups. All they care about is RESTORES.
Data is never important until it's gone. I can't count the number of
times I've had users do a cleanup because we're at 99% full, then
come back the following week asking for a restore of data they got
rid of by accident. Never mind that they hadn't touched it in six
months or longer. Today it was needed. Sigh...
Anyway, there are lots of issues to think about when you're building
storage systems, and it's a rat's nest to go down. People hate to
spend the money until they lose some data, and then suddenly the
spigots open and money flows like water to get it back. Funny, ain't
it?
John