storeBackup - Support: sr #108317, deduplication of backupDir...

 
 

sr #108317: deduplication of backupDir (somewhat theoretical)

Submitter:  None
Submitted:  Sat 08 Jun 2013 11:36:31 AM UTC
   
 
Category:  None
Priority:  5 - Normal
Severity:  3 - Normal
Status:  Done
Privacy:  Public
Assigned to:  None
Originator Email:  -email is unavailable-
Open/Closed:  Open
Operating System:  GNU/Linux


Sat 13 Jul 2013 08:13:11 AM UTC, comment #1: 

Sorry, I didn't read this question (for whatever reason).

1.
The reason identical files are not recognized in your first point is simply that there is no central instance in storeBackup to ask. A backup is inconsistent until it is ready. The absence of a central daemon is a drawback and an advantage at the same time. It is a design principle of storeBackup that no central process is needed; this makes it possible to use it on any (at least Unix / Linux) (network) device or filesystem.

2.
The second point is your choice. Mostly, building an index over all files in the backup will not give a real benefit. Only if you restored lots of data from old backups that no longer exists in the sourceDir might storeBackup's default configuration be a "problem". Reading the md5 sums from many backups may degrade performance too much, so if you hit the case I described, simply change 0:... to all:... temporarily.
The reason for this behavior is that there is no "database", just a flat file with the meta information of all backed-up files. This allows everybody to read and use this information without problems (transparent formats).
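For illustration only, the temporary change mentioned above could look roughly like this in the storeBackup configuration file (the series name "myseries" is an invented example; check the documentation for the exact otherBackupSeries syntax):

    # search only the newest backup of the series for identical files
    otherBackupSeries = 0:myseries

    # temporarily search all backups of the series instead
    otherBackupSeries = all:myseries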

3.
storeBackup in general doesn't care in any way whether the data in the backup is deduplicated. (Exceptions: storeBackupCheckBackup.pl checks for already checked links to avoid unnecessary calculations; linkToDirs.pl does something similar.)
storeBackup.pl makes a backup and tries to deduplicate. After that task, it is totally irrelevant to storeBackup whether identical files are hard linked or just copies in the backup. Deduplication is just a matter of the space you use, not a matter of functionality. Therefore linkToDirs.pl can (and may) change the number of hard links to a file: more hard links because it finds identical files resulting from what you stated in 1; fewer hard links because you copy your backup to a file system supporting fewer hard links than the old one. (In this case you need one or more additional files to store the contents.)
Btw, the number of hard links always changes when you make a new backup or delete an old one.
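As a rough illustration (all paths and series names here are invented), ordinary file system tools show whether two copies in the backup are hard linked:

    # ls -li prints the inode number first and the hard link count after
    # the permission bits; the same inode number means the same data blocks
    ls -li backupDir/myseries/2013.06.08_11.36.31/etc/hosts \
           backupDir/myseries/2013.07.13_08.13.11/etc/hosts

    # list every path in the backup hard linked to a given file
    find backupDir -samefile backupDir/myseries/2013.06.08_11.36.31/etc/hosts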

You can, of course, destroy the consistency of a backup by hard linking a file to different contents. This can only be recognized with storeBackupCheckBackup.pl.


The design principle behind storeBackup is to use just plain and simple files (exceptions are special files) and the hard link functionality of the file system. All other information is stored in the .md5CheckSum or .md5BlockCheckSum files, which are themselves simple, line-oriented files. Therefore, you can use normal file system commands (ls, mv, cp, rm, ...) on the backups as long as you do not destroy the consistency of one backup. (You may have to adjust the configuration files after moving backups around.) The exception to this rule is the option lateLinks: as long as storeBackupUpdateBackup.pl has not run, there are relationships between the affected backups stored in the backups (relative paths) which must be fulfilled.
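For example (a sketch only; the exact file name, compression and field layout may differ between storeBackup versions, and the paths here are invented), the stored checksums can be inspected with ordinary text tools:

    # compute the md5 sum of a file in the source directory ...
    md5sum sourceDir/etc/hosts

    # ... and look it up in the backup's meta information file
    grep <md5 sum from above> backupDir/myseries/2013.06.08_11.36.31/.md5CheckSum
    # (use bzgrep instead of grep if the file is stored compressed)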

Hope this answers your question,
Heinz-Josef

Heinz-Josef Claes <hjclaes>
Group administrator
Sat 08 Jun 2013 11:36:31 AM UTC, original submission:  

Hi,

this is no problem at all, I am just curious:

You can "provoke" a situation where (at least temporarily) some identical files are stored redundantly in backupDir.

1) http://savannah.nongnu.org/support/index.php?108316
[..] You run two different backups into the same backupDir at the same time (different series) but have identical files in the backup. [..] After some time, the "lonely" hard link will be removed by the deletion of old backups.

2) I suppose that during backup the default behavior is to search for the same files in the LAST backup of a series (I know it can be configured with the [otherBackupSeries] parameter). So if you have a file in backup1, NOT in backup2, and again in backup3, the "new" instance in backup3 is not linked to backup1. (absolutely theoretical)

Now the question: Could you imagine an obvious risk in "deduplicating" (creating a hard link) from backup1.file to backup3.file "manually" (analysing .md5CheckSums or using a tool like fslint or fdupes)? I suppose that this would be transparent and harmless as far as storeBackup's "database" is concerned ...
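A rough sketch of what such a manual re-link could look like (the paths are invented, and this assumes both backups are finished and no lateLinks are pending):

    f1=backupDir/myseries/backup1/etc/hosts    # older copy of the file
    f3=backupDir/myseries/backup3/etc/hosts    # redundant, identical copy

    # only replace the newer copy if the contents really are byte-identical
    cmp -s "$f1" "$f3" && ln -f "$f1" "$f3"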


Thanks lopiuh

Anonymous

 

Depends on the following items: None found

Items that depend on this one: None found

 

Carbon-Copy List
  • -email is unavailable- added by hjclaes (Posted a comment)

    Follows 1 latest change.

    Date        Changed by  Updated Field  Previous Value  =>  Replaced by
    2013-09-30  hjclaes     Status         None            =>  Done
