storeBackup - Support

sr #108204: missing files in previous backup not detected in new backup

Submitter:  Hajo Koehler <uf9>
Submitted:  Tue 11 Dec 2012 03:45:24 PM UTC
   
 
Category:  None
Priority:  5 - Normal
Severity:  3 - Normal
Status:  Done
Privacy:  Public
Assigned to:  None
Open/Closed:  Open
Operating System:  None


Sat 26 Jan 2013 04:05:49 PM UTC, comment #11: 

Hi,

you're right. There is a bug with the hard link limit. I fixed it and ran a lot of tests (there are many variants in the code in this area: block size, compression (yes, no, check), lateLinks, lateCompress, changes from backup to backup, ...).

I'll send you the corrected version for testing.

Regards, Heinz-Josef

Heinz-Josef Claes <hjclaes>
Group administrator
Thu 24 Jan 2013 01:54:41 PM UTC, comment #10: 

Hi Heinz-Josef,

thanks for your quick reply!

I think this is a problem with too many hard links (and maybe compression). In a few backup versions these empty blocks exist (with 32000 hard links each).

In backups with missing empty blocks there is one uncompressed zero file ("0000000005" instead of "0000000005.bz2") and no hard links to it.
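
For illustration, the hard-link count of a stored block can be checked with stat (the path below is only an example, not our exact layout; on ext3, for instance, the per-inode limit is 32000 hard links):
-------------------------
stat -c %h volumes/<$id>_snapshot/0000000005.bz2
-------------------------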

To fix the backup I copied these small missing files:
-------------------------
#!/bin/bash
# re-create every missing empty block: find all block entries whose md5 sum
# is the empty-block checksum and copy a prepared empty block into place
for v in $(bzcat .md5BlockCheckSums.bz2 | grep f1c9645dbc14efddc7d8a322685f26eb | awk '{print $3}')
do
  if [ -f "$v" ]
  then
    echo "file exists: $v"
  else
    cp -i empty.bz2 "$v"
  fi
done
-------------------------

storeBackupRecover.pl is running now..

If you have a new version to test, it would be nice if you could send it to me at:
-email is unavailable-

Thanks a lot!
esco

esco <esco>
Wed 23 Jan 2013 06:42:08 PM UTC, comment #9: 

Hi esco,

If you look in the ChangeLog of version 3.2.1, you see:

--------------
backup of a blocked file (or device) didn't store all md5sums
for all blocks in the local .md5CheckSum file if two or more
block in one blocked file were identical
this means it is possible to restore the data with cat or bzcat,
but not with storeBackupRecover.pl !
--------------

I think it's a consequence of that bug and of the "bug" I found, namely that missing blocks are not written newly. At that time, after finding the bug, I corrected those bugs (manually) and haven't had new ones since then.

If you want, please write me an email; then I can send you the version which might be able to correctly back up a blocked file that was "touched" in the source - but only if you don't use lateLinks for that particular backup. I wrote "might" because I haven't tested all possibilities yet (compression, different file sizes, different compression algorithms, changes from previous backups, ..?).
Running this version will not destroy anything, it just creates a new (hopefully) corrected backup.


Regards, Heinz-Josef

Heinz-Josef Claes <hjclaes>
Group administrator
Wed 23 Jan 2013 10:25:39 AM UTC, comment #8: 

P.S. to the last entry

#bzcat .md5BlockCheckSums.bz2 |wc -l
15361

In this case there are only 6837 block files in the backup for 15360 block entries. But it looks like only the empty blocks are missing ...

esco

esco <esco>
Wed 23 Jan 2013 10:16:18 AM UTC, comment #7: 

Hello again,

more details:

#bzcat .md5BlockCheckSums.bz2 |awk '{ print $1}'|sort|uniq|wc -l
6838
(the first line is "#")

#ls <$id>_snapshot/|wc -l
6837

#bzcat .md5BlockCheckSums.bz2 |grep -v f1c9645dbc14efddc7d8a322685f26eb|wc -l
6837

(f1c9645dbc14efddc7d8a322685f26eb should be the md5 sum of a 10M block of zeros)
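
One way to double-check that assumption is to hash a 10M block of zeros (assuming GNU dd and the 10M block size set with --checkDevicesBS0); it should print the checksum above:

#dd if=/dev/zero bs=10M count=1 2>/dev/null | md5sum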

Is it possible that zero blocks of block devices are ignored?

esco

esco <esco>
Wed 23 Jan 2013 09:42:15 AM UTC, comment #6: 

Hello Heinz-Josef,

---
The answer is 'yes' if you touch data/test.txt before the second backup. This means the file is scanned block-wise and the block files (like your 00..0060) which cannot be hard linked are copied or compressed newly. Here you really found a bug. I think I was just able to fix it - but this needs more testing.
---

Is this something new that only happens with 3.3?

Because we have a lot of corrupt backups with missing files (parts of block devices).

Files listed in ".md5BlockCheckSums.bz2" are missing (up to 40%) and no error log entry is generated (like "could not create file/link for xy" in .storeBackup.log).

Details:
Backups are taken of LVM snapshots with the following options:
--linkToRecent ${lv_name}_last --logInBackupDir --checkDevices0 ${lv_path}_snapshot --checkDevicesDir0 volumes --checkDevicesBS0 10M --sourceDir /root/empty --backupDir $backup_dir --series ${lv_name} --checkDevicesCompr0 yes
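
Roughly, such a run can be scripted like this (a simplified sketch, not our exact script; the LV name, volume group, snapshot size and backup path are placeholders - only the storeBackup.pl options are the ones above):

#!/bin/bash
# create an LVM snapshot, back it up block-wise with storeBackup, remove the snapshot
lv_name=data                       # placeholder LV name
lv_path=/dev/vg0/${lv_name}        # placeholder volume group
backup_dir=/backup                 # placeholder backup target

lvcreate --snapshot --size 5G --name "${lv_name}_snapshot" "$lv_path"

storeBackup.pl --linkToRecent "${lv_name}_last" --logInBackupDir \
  --checkDevices0 "${lv_path}_snapshot" --checkDevicesDir0 volumes \
  --checkDevicesBS0 10M --checkDevicesCompr0 yes \
  --sourceDir /root/empty --backupDir "$backup_dir" --series "$lv_name"

lvremove -f "${lv_path}_snapshot"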

esco

esco <esco>
Wed 19 Dec 2012 09:05:16 AM UTC, comment #5: 

Hi Hajo,

checking for bit rot during the same run as the backup itself is even more tricky. You never know if the backed-up data was really written to disk. To check this, you have to re-read the data. If you do it immediately, you will just read from the file system cache. To avoid this, you would have to flush those caches after each file written (including the hardware cache of the disk, or switch it off completely). This would result in really disastrous performance. The other possibility is to flush the caches at the end of the backup and start checking then - but that is the same as starting storeBackupCheck*.pl afterwards ;-) .
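
Just to illustrate what such a check would have to do (a sketch of the principle only, not something storeBackup.pl does; Linux-specific and needs root):

sync                                 # flush dirty data to disk
echo 3 > /proc/sys/vm/drop_caches    # drop the page cache
md5sum <some file in the backup>     # only now does a re-read verify what is really on disk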

I'll mark this thread as done (but feel free to comment again), because the way to go is clear.

Best Regards,
Heinz-Josef

Heinz-Josef Claes <hjclaes>
Group administrator
Mon 17 Dec 2012 01:41:49 PM UTC, comment #4: 

Hi Heinz-Josef,

thank you for the detailed explanation. 

My incorrect assumption was that a successful backup leads to a
consistent backup. I understand that without this drawback the fast
backup speed would not be possible.

I think you are right. Situations could arise where some paranoia
checks would fail and not be able to repair things well enough. For my
current issue it might have helped, but it does not cover everything
that can happen.

Maybe the better way is to use storeBackupCheckBackup or
storeBackupCheckSource for post-backup checks, depending on which part
of the backup (destination, source, transport) you don't trust.

Especially for storeBackupCheckBackup this has the advantage that it
can run in the background after the backup has finished.

So I will run storeBackupCheckBackup from time to time to check for
consistency - and whenever something looks strange.
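
For example, it could be scheduled as a weekly cron job (just a
sketch; the paths and the schedule are placeholders):

# /etc/cron.d/storebackup-check: verify the backup series every Sunday night
30 3 * * 0  root  storeBackupCheckBackup.pl -c /backup/default >> /var/log/storeBackupCheck.log 2>&1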




Yes, I agree with your thoughts about handling a broken backup.

I think a howto with some helper scripts to repair a broken backup,
making it at least consistent for further backups, would be nice, as
you described.


Best regards,
Hajo

Hajo Koehler <uf9>
Fri 14 Dec 2012 06:10:30 AM UTC, comment #3: 

Hi Hajo, thanks for your thoughts.

Naturally, if you manipulate the data, storeBackup.pl will not recognize this. And there is no chance that storeBackup.pl could recognize manipulations with a "paranoid option" in all cases - think of lateLinks (which includes replication and isolated backups), because there simply is no data available to compare with.

Creating a real "paranoia" option would mean combining storeBackupCheckBackup.pl, storeBackupCheckSource.pl and storeBackup.pl, which would be very complicated and painfully slow. I also think the result would be unpredictable, because if there is some kind of bit rot, it is impossible or at least very error-prone to detect whether it was in the source file, the backup file or the md5 sum. Bit rot can also happen in any other place of the file system, e.g. in a file name.

Generally, I think it's more or less impossible to handle all possible cases automatically - also because in case of a hardware failure, other methods than correcting some bytes or files are necessary. This may be copying (rescuing) the whole stuff to another (a new) backup disk, running badblocks, a file system check, ... whatever. So trying to correct things automatically may make things even worse.

------

But the question remains how to handle broken backups. Nearly two months ago, I got a request (via email) from someone who had hardware problems with his backup disk and thought he had lost about 3000 files because of that. He was able to make a copy of that disk and asked for help to make his backup at least consistent again, so he could also hard link new backups against the old ones and rescue the predominant part of his data. Because of the drawbacks of combining storeBackup.pl with storeBackupCheck*.pl described above, we came to the following conclusion: I changed storeBackupCheckBackup.pl to (optionally) generate three different files with lists of "broken" files:
bugs-files.missing.txt
bugs-md5sums.missing.txt
bugs-md5sums.wrong.txt
This means there is an easy-to-handle list for each of the three possible (detectable) types of errors caused by bit rot (or maybe by bugs in storeBackup.pl or storeBackupUpdateBackup.pl). With these files, he was able to bring his backup back to a consistent state.
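
As an illustration of how such a list can be processed by hand (a sketch; it assumes the list simply contains one path per line and that a second copy of the data exists somewhere):

#!/bin/bash
# walk through the files reported as missing and decide case by case
# whether to restore them from another copy or to drop them from the backup
while read -r f
do
  echo "missing in backup: $f"
  # e.g. restore from a second backup disk, if one exists:
  # cp -a "/mnt/backup2/$f" "/mnt/backup/$f"
done < bugs-files.missing.txt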

The next step I plan for the next release of storeBackup is to write a special program which takes such a list as a base to delete files or to calculate new md5sums in the backup. The administrator has to make sure that data is copied, file system checks are made (etc.) beforehand, and to choose between deletion and new md5 calculation. (I also plan to write such a list from storeBackupCheckSource.pl.) This can also be used to simply delete parts of the backup later, for whatever reasons.

I think the advantage of this approach is that it is combinable with lateLinks and, last but not least, that it does not make unpredictable changes to the backup. In the discussions with that user of storeBackup we came to the conclusion not to try to solve hardware issues fully automatically, because it is more or less impossible (theoretically and especially practically) to solve those issues that way.

The best protection against (unresolvable) bit rot is to back up to two different hard disks / locations or to use replication. And naturally to check consistency.

What do you think about these thoughts?

Regards.

Heinz-Josef Claes <hjclaes>
Group administrator
Wed 12 Dec 2012 03:12:45 PM UTC, comment #2: 


> Do you think it makes sense to add a new switch to always scan all
> blocked files? This would at least help in your situation (if you do
> not use eg. replication)


Yes. Additionally, I experimented a little and found a similar
behavior when a simple file is corrupted:

--------------------------< schnipp >--------------------------
echo 'asdf' > data/test.txt
storeBackup.pl -s data -b backup
echo '' >> backup/default/$firstbackup/test.txt
storeBackup.pl -s data -b backup
--------------------------< schnapp >--------------------------

The second backup is broken too.

> What's your opinion?


I think a "paranoid" option that ensures a consistent backup not by
assuming old consistent backups would be a nice feature.

Inconsitencies should be reported.

One could run a backup with this option occasionally. For example
sundays or nightly when speed is not so important.

> It makes sense to run storeBackupCheckBackup ...


> 2. make a new backup without linking to the old one (same
> configuration). Then copy/link the new one with linkToDirs.pl to
> your old backups. (This may be time consuming.)


For my real issue I ran storeBackupCheckBackup for all backups, moved
the partially broken backups to another directory, and the next backup
was bug-free.


Hajo Koehler <uf9>
Tue 11 Dec 2012 05:54:04 PM UTC, comment #1: 

Shouldn't the new backup be consistent?

That's a good question in this case.

----------
The answer is 'yes' if you touch data/test.txt before the second backup. This means the file is scanned block-wise and the block files (like your 00..0060) which cannot be hard linked are copied or compressed newly. Here you really found a bug. I think I was just able to fix it - but this needs more testing.
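
In practice that means (a minimal sketch, reusing the reproduction from the original submission):

touch data/test.txt    # force a block-wise re-scan in the next run
storeBackup.pl -s data -b backup --checkBlocksSuffix '.txt' --checkBlocksMinSize 10k --checkBlocksBS 10k --exceptSuffix '.txt'

Blocks missing in the previous backup should then be written again (this is the code path where the bug mentioned above was found).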
----------
The answer is 'no' if ctime, mtime and size didn't change (as in your example): there is an optimization in storeBackup - it sees the file hasn't changed and therefore the directory (e.g. backup/.../test.txt in your example) is simply created in the new backup and its contents are hard linked to the old one.
Only setting hard links is a great performance improvement if some of the big files do not change.
----------
The answer is always 'no' if you use lateLinks. And for some nice features like replication lateLinks is necessary.
----------

Do you think it makes sense to add a new switch to always scan all blocked files? This would at least help in your situation (if you do not use e.g. replication).

What's your opinion?

It makes sense to run storeBackupCheckBackup ...

-----
-----

Ok, how can you link to your old backups and get a bug-free new backup? I see two possibilities:

1. Send me an email and I will send you the corrected version (without guarantee, I still have to test it, but it seems to be able to handle your problem).

2. Make a new backup without linking to the old one (same configuration). Then copy/link the new one with linkToDirs.pl to your old backups. (This may be time consuming.)


Heinz-Josef Claes <hjclaes>
Group administrator
Tue 11 Dec 2012 03:45:24 PM UTC, original submission:  

Due to some errors on the storage, some files were deleted in a backup.

When I did the next backup cycle, these files were still missing in
the new backup too.

Shouldn't the new backup be consistent?



To reproduce:

# what to backup
mkdir data

# make "big" file
perl -e 'print "asdfasdf" x 100000' > data/test.txt

# where to store backup
mkdir backup

# first backup
storeBackup.pl -s data -b backup --checkBlocksSuffix '.txt' --checkBlocksMinSize 10k --checkBlocksBS 10k --exceptSuffix '.txt'

# make backup broken
firstbackup=`ls -1 backup/default | tail -1`
rm backup/default/$firstbackup/test.txt/0000000060

sleep 2

# next backup
storeBackup.pl -s data -b backup --checkBlocksSuffix '.txt' --checkBlocksMinSize 10k --checkBlocksBS 10k --exceptSuffix '.txt'

# check backups
storeBackupCheckBackup.pl -c backup/default




storeBackupCheckBackup.pl reports both backups as broken.


Hajo Koehler <uf9>

 


 


 



     

    4 latest changes:

    Date        Changed by  Updated field  Previous value => Replaced by
    2013-09-30  hjclaes     Status         In Progress => Done
    2013-01-23  hjclaes     Status         Done => In Progress
    2012-12-19  hjclaes     Status         In Progress => Done
    2012-12-11  hjclaes     Status         None => In Progress
