[BBLISA] Backing up sparse files ... VM's and TrueCrypt ... etc
Edward Ned Harvey
bblisa3 at nedharvey.com
Sun Feb 21 00:12:47 EST 2010
> Sure. Incremental 'dump' sends the whole file, but upon sending it is
> compressed. Being lots of zeros, the portion of the stream containing
> the sparse file should still compress well.
>
> Also, assuming the tape is on a tape server, remember to do the
> compression on the host rather than the tape server. Something like:
>
> dump | gzip |(netconn) dd
>
> Sending just the changed blocks is quite hard, as most filesystems
> don't
> know what blocks have changed. But ideally, the backup system doesn't
> care how the file is represented in blocks, but merely sees streams
> going on and off tape at its preferred blocksize.
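The quoted point about zeros compressing well is easy to confirm. A small sketch (the 50 MiB size and the zlib compression level are arbitrary choices, standing in for the zero-filled region of a sparse file):

```python
import zlib

# 50 MiB of zeros, standing in for the unused region of a sparse file.
zeros = b"\x00" * (50 * 1024 * 1024)
packed = zlib.compress(zeros, 6)

# The run of zeros shrinks by roughly three orders of magnitude.
print(len(zeros), len(packed))
```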
So ... again ... yes, compressing a bunch of zeros brings all the zeros down
to near-zero size. But that's not the goal. Allow me to demonstrate:
- 50G sparse-file VMware virtual disk, containing a Windows XP
installation, 22G used.
- Back it up once. 22G goes across the network. It takes 30 minutes.
- Boot into XP, change a 1K file, shut down. Counting random registry
changes, system event logs, and other incidental writes, imagine that a
total of twenty 1K blocks have changed.
- Now do an incremental backup. Sure, you may need to scan the file
looking for which blocks changed, but you can do that as fast as you can
read the whole file once, assuming you kept some sort of checksums from the
previous run. Then send just 20K across the net. This should complete
at least 5x faster than before ... which means at most 6 minutes.
- If you do this with tar or dump ... even with compression ...
22G still goes across the net. Another 30-minute backup.
Is it clear now?
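The incremental step above can be sketched in a few lines. This is a toy illustration, not any particular backup tool: the 4K block size is an arbitrary choice, and a tiny in-memory buffer stands in for the 50G disk image. Keep a checksum per block from the last run; on the next run, read everything once but ship only the blocks whose checksums changed.

```python
import hashlib

BLOCK = 4096  # block size is an arbitrary choice for this sketch

def block_sums(data: bytes) -> list:
    """Checksum every fixed-size block of an image (kept from the last run)."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(old_sums, new_data):
    """Read everything once, but return only (offset, block) pairs that differ."""
    delta = []
    for i in range(0, len(new_data), BLOCK):
        block = new_data[i:i + BLOCK]
        idx = i // BLOCK
        if idx >= len(old_sums) or hashlib.sha256(block).hexdigest() != old_sums[idx]:
            delta.append((i, block))
    return delta

# Full backup: read the image once, keep the per-block checksums.
image = bytearray(20 * BLOCK)        # stand-in for the virtual disk
sums = block_sums(bytes(image))

# The guest changes a couple of blocks...
image[3 * BLOCK] = 0xFF
image[7 * BLOCK] = 0xFF

# Incremental: one full local read, but only the changed blocks hit the wire.
delta = changed_blocks(sums, bytes(image))
print([off // BLOCK for off, _ in delta])   # -> [3, 7]
```

The scan still costs one full read of the image, which is why it runs at local disk speed rather than network speed; only the delta (two blocks here, 20K in the example above) actually crosses the net.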