bug #22183: Feature request - speed improvements at cost of integrity for large systems

Submitted by:  Guru Evi <guruevi>
Submitted on:  Wed 30 Jan 2008 09:59:49 PM UTC  
 
Category: None
Severity: 3 - Normal
Item Group: None
Status: Invalid
Privacy: Public
Assigned to: None
Open/Closed: Closed

Wed 30 Jan 2008 10:12:26 PM UTC, comment #1:

Hi Guru,

rdiff-backup uses the Rsync algorithm for comparing files, not hashing.

The unstable version of rdiff-backup has a feature to calculate sha1 hashes, but those are only used as a consistency check during the restore operation (not to check if a file has changed).
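(For readers unfamiliar with it, the rsync algorithm matches blocks using a cheap rolling checksum rather than hashing whole files. Below is a minimal Python sketch of that weak rolling checksum, purely for illustration -- it is not rdiff-backup's actual code, which delegates this work to librsync.)

    # Illustrative sketch of an rsync-style weak rolling checksum.
    # Not rdiff-backup's real code; rdiff-backup relies on librsync.
    M = 1 << 16  # checksum modulus

    def weak_checksum(block):
        """Initial weak checksum of one block of bytes."""
        a = sum(block) % M
        b = sum((len(block) - i) * byte for i, byte in enumerate(block)) % M
        return a, b

    def roll(a, b, out_byte, in_byte, block_len):
        """Slide the window one byte in O(1) instead of rehashing."""
        a = (a - out_byte + in_byte) % M
        b = (b - block_len * out_byte + a) % M
        return a, b

    # Rolling one byte forward matches recomputing from scratch:
    data = b"abcdefgh"
    a, b = weak_checksum(data[0:4])
    assert roll(a, b, data[0], data[4], 4) == weak_checksum(data[1:5])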

A metadata file that large probably contains .... a lot of metadata. If you are on a Mac, this would be all of the resource forks as well (thumbnails for each file).

You can open the metadata file to see what it is full of -- it is simply a gzip compressed text file.
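(For example, a quick way to peek at the first lines from Python -- the path below is a hypothetical example, since the real file sits in the rdiff-backup-data directory and embeds a session timestamp in its name:)

    import gzip

    # Hypothetical example name; adjust to the file in your
    # rdiff-backup-data directory.
    path = "rdiff-backup-data/mirror_metadata.2008-01-30T21:59:49Z.snapshot.gz"

    with gzip.open(path, "rt", errors="replace") as f:
        for _, line in zip(range(40), f):  # print the first 40 lines
            print(line, end="")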

For a backup program, I do not think "speed improvements at the cost of integrity" ever make sense. If that is what you need, then perhaps rdiff-backup is not for you. rdiff-backup tries very, very hard to maintain absolute integrity of the mirror.

Since this is not a bug, I am closing the report. If you want to add a feature request, please put it on the Wiki: http://wiki.rdiff-backup.org

Thanks,
Andrew

Andrew Ferguson <owsla>
Project Administrator
Wed 30 Jan 2008 09:59:49 PM UTC, original submission:

Hi,

I would just like to know whether it is possible to implement a switch that would significantly improve the speed of rdiff-backup by not comparing files one by one via SHA1 (I believe that is the approach you take for comparing files).

I have rather large binary files (on the order of hundreds of megabytes) as well as very many small ones (thousands of 16 kB files from a medical scanner) on a very large file system (9 terabytes), and a run with just 6 GB of changed files takes about 72 hours for rdiff-backup and occupies a whole CPU (not really what we like). The metadata file, which I believe contains all the SHA1 hashes, is a whopping 3 gigabytes. The files themselves, however, never change; they are only ever created or removed.

A switch that traded SHA1 hashing for something far less expensive (MD4, say, or something else) would save a lot, since depending on your machine MD4 is about 10-200% faster.
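(As a rough way to test that claim on a given machine, here is a small illustrative timing harness -- it is not part of rdiff-backup, and MD4 is only available when the local OpenSSL build still provides it:)

    import hashlib
    import time

    data = b"x" * (64 * 1024 * 1024)  # 64 MB of dummy data

    for name in ("sha1", "md5", "md4"):
        try:
            h = hashlib.new(name)
        except ValueError:
            print(name, "is not available in this build")
            continue
        start = time.perf_counter()
        h.update(data)
        h.hexdigest()
        rate = len(data) / (time.perf_counter() - start) / 1e6
        print("%s: %.0f MB/s" % (name, rate))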

If you have additional information or tips on how to speed up rdiff-backup (maybe add them to the documentation), I would like to know. I am going to switch from SSH to straight NFS (currently pushing the ACLs for the backup user). I already tried turning off compression, but that did not seem to help a whole lot (you probably do the compression after a complete backup?).

Guru Evi <guruevi>

 


 



Follow 2 latest changes.

Date                             Changed By  Updated Field  Previous Value => Replaced By
Wed 30 Jan 2008 10:12:26 PM UTC  owsla       Status         None => Invalid
Wed 30 Jan 2008 10:12:26 PM UTC  owsla       Open/Closed    Open => Closed
