[BBLISA] System Backup thoughts and questions...
K. M. Peterson
KMP at KMPeterson.COM
Thu Jan 8 18:11:06 EST 2009
To look at this slightly differently...
You say "simple backups". By that measure, the simplest backup is
replication; I've run rsync against filesystems with 1000000+ files
and it's an excellent solution. Covers you to some degree for things
like hardware errors.
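For illustration, a plain replication run looks something like this
(/backup here is a stand-in for wherever your external drive mounts):

    # Mirror the source tree onto the backup drive.  -a preserves
    # permissions, times, and symlinks; --delete makes it a true
    # mirror, so removals propagate too.
    rsync -a --delete /scsi/web/ /backup/scsi/web/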
Backups tend to get un-simple when you have more requirements: say
something got modified last Tuesday, and your Thursday replication
overwrote that file with the new version. That's not good.
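If that's a worry, rsync itself can soften it: it can shunt anything
it's about to overwrite or delete into a side directory instead of
discarding it. A sketch (the --backup-dir path is made up):

    # Files rsync would overwrite or delete land in a dated
    # directory instead of vanishing outright.
    rsync -a --delete --backup \
        --backup-dir=/backup/changed-$(date +%Y%m%d) \
        /scsi/web/ /backup/scsi/web/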
There's a discussion in Linux Server Hacks (O'Reilly) of a "snapshot-
like" backup that's pretty interesting. I've avoided Amanda, but I
implemented Bacula last year; it's a pretty good system, but the
overhead is significant if one of the simpler solutions will work for
you.
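If I'm remembering the book's hack right, it boils down to rotating
hard-linked copies, so unchanged files cost no extra disk. Roughly
(directory names invented, details from memory):

    # Each snapshot is a tree of hard links; unchanged files share
    # disk blocks with the previous snapshot.
    rm -rf /backup/snap.2
    mv /backup/snap.1 /backup/snap.2
    cp -al /backup/snap.0 /backup/snap.1
    # rsync replaces changed files rather than rewriting them in
    # place, so the older snapshots keep the old versions.
    rsync -a --delete /scsi/web/ /backup/snap.0/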
Tar, pax, and dump all offer options to do incremental backups.
Again: simpler, and workable if it's just a couple of systems. If
not, something that gives you better control is going to pay you
back.
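With GNU tar, for instance, an incremental scheme is one option away;
something like this (filenames made up):

    # Full backup; the .snar file records what has been saved.
    tar --listed-incremental=/backup/web.snar \
        -czf /backup/web-full.tar.gz /scsi/web
    # Later runs against the same .snar file pick up only files
    # changed since the previous run.
    tar --listed-incremental=/backup/web.snar \
        -czf /backup/web-incr-$(date +%Y%m%d).tar.gz /scsi/web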
Hope that helps!
_KMP
On 8 Jan 09, at 16:06, Richard 'Doc' Kinne wrote:
> Hi Folks:
>
> I'm looking at backups - simple backups right now.
>
> We have a strategy where an old computer is set up with a large
> external, removable hard drive. Directories - large directories -
> that we have on our other production servers are mounted on this
> small computer via NFS. A cron job then does a simple "cp" from the
> NFS-mounted production drive partitions to the large, external,
> removable hard drive.
>
> I thought it was an elegant solution, myself, except for one small,
> niggling detail.
>
> It doesn't work.
>
> The process doesn't copy all the files. Oh, we're not having a
> problem with file locks, no. When you do a "du -sh <directory>"
> comparison between the /scsi/web directory on the backup drive and
> the production /scsi/web directory, the differences measure in the
> GB. For example, my production /scsi partition has 62GB on it. The
> most recent backup has 42GB on it!
>
> What our research found is that the cp command apparently has a
> limit of copying 250,000 inodes. I have image directories on the
> web server that have 114,000 files, so this is the limit I think
> I'm running into.
>
> While I'm looking at solutions like Bacula and Amanda, I'm
> wondering if rsyncing the files might work. Or will I run into the
> same limitation?
>
> Any thoughts?
> ---
> Richard 'Doc' Kinne, [KQR]
> American Association of Variable Star Observers
> <rkinne @ aavso.org>
--
K. M. Peterson voice: +1 617 731 6177
Boston, Massachusetts, USA fax: +1 413 638 6486
Full contact information at http://kmpeterson.com/contact.html