bug #40084: StoreBackupUpdateBackup removes lock file even if remote process is running

Submitter:  None
Submitted:  Sun 22 Sep 2013 06:05:24 AM UTC

Category:  None
Severity:  3 - Normal
Item Group:  None
Status:  Fixed
Privacy:  Public
Assigned to:  None
Open/Closed:  Open


Mon 30 Sep 2013 12:40:52 PM UTC, comment #5: 

The documentation has been updated in 3.4.2.

Heinz-Josef Claes <hjclaes>
Group administrator
Mon 23 Sep 2013 07:00:39 AM UTC, comment #4: 

Hi, let me just add:

You wrote about lots of files and a long backup duration (or a long updateBackup duration). Maybe some of these hints help:

- as far as I know, ext4 is by far the fastest file system for creating hard links (I don't know about btrfs, but I don't think it's the right one for backups as of today)

- make an entry in /etc/fstab mounting /tmp as a RAM disk, e.g.:
none            /tmp    tmpfs           size=2048M      0 0
and use the option saveRAM (with storeBackup.pl); see the example after this list. The implementation of hashes in Perl is pretty poor: it needs a lot of memory and is slow. If you use saveRAM, Berkeley DB files in the RAM disk (--tmpdir) are used instead. The result is that you need less memory and that it's faster. Because you need less memory, more memory is available for caching, which should speed things up additionally.

- Here:
http://code.google.com/p/compressible/wiki/StoreBackupTips
you will find some hints about "Memory usage and OS tuning".
I never tested these, but maybe they help.
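
For illustration, with the fstab entry above the call could then look like this (just a sketch: the config file path is made up, and the exact option spelling may differ between versions - please check storeBackup.pl --help):

storeBackup.pl -f /etc/storeBackup.conf --saveRAM --tmpdir /tmp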

Hope this helps!?
Heinz-Josef

Heinz-Josef Claes <hjclaes>
Group administrator
Sun 22 Sep 2013 11:56:00 AM UTC, comment #3: 

I also agree  ;-)

I added your remark to the man pages and in somewhat longer form to the documentation.

Heinz-Josef

Heinz-Josef Claes <hjclaes>
Group administrator
Sun 22 Sep 2013 10:53:09 AM UTC, comment #2: 

I agree, the 'right' thing to do is probably just a single atomic database commit per 'lockfile' to lock the ability to do backups across all servers.  Most likely one of the tools you mention does something along those lines, although I'd probably just rewrite the checkLockFile implementation to add DB operations instead as a one-off.
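
Something like the following is what I have in mind - only a rough sketch (the table layout, lock name, and database path are invented here; SQLite stands in just for brevity, a real cross-server setup would point the DSN at a network database such as PostgreSQL or MySQL):

use strict;
use warnings;
use DBI;
use Sys::Hostname;

# one shared table acts as the cross-server lock; the INSERT below is a
# single atomic statement, so it succeeds for exactly one process and
# fails for everyone else as long as the row exists
my $dbh = DBI->connect('dbi:SQLite:dbname=/backup/locks.db', '', '',
                       { RaiseError => 1, AutoCommit => 1 });
$dbh->do('CREATE TABLE IF NOT EXISTS locks ' .
         '(name TEXT PRIMARY KEY, host TEXT, pid INTEGER)');

my $gotLock = eval {
    $dbh->do('INSERT INTO locks (name, host, pid) VALUES (?, ?, ?)',
             undef, 'storeBackup', hostname(), $$);
};
die "cannot start, another instance holds the lock\n" unless $gotLock;

# ... run storeBackup / storeBackupUpdateBackup here ...

# release the lock when done
$dbh->do('DELETE FROM locks WHERE name = ?', undef, 'storeBackup');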

My reason for filing a bug was not so much to get help figuring out this scheduling problem, but rather to question the lockfile's behavior when running across multiple servers, which is the norm.  Noting that there are corner cases, and that other schedulers or databases are required to resolve them, is, in my mind, better than failing on a common way of doing things.  I would just turn off lateLinks, but then my backup takes >24 hours (~4 TB with many, many small files).

That said, adding a note to the man pages' lockfile section, indicating that the lockfile explicitly does not work across multiple servers and is not designed to separate storeBackup and storeBackupUpdateBackup (or any other storeBackup processes) living in separate PID spaces, would satisfy my concern about the unexpected behavior.

Anonymous
Sun 22 Sep 2013 07:14:51 AM UTC, comment #1: 

Hi,

first of all, you can run storeBackupDel during the daytime (when normally nothing else runs) to decouple deletion from the rest. But that's not an answer to your question.

If you use lock files across different computers, e.g. via NFS, you cannot be sure this works in all cases. A delay from the network / NFS can break the locking if you just have bad luck. (Or NFS may block entirely because of network issues.)
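
(Just to illustrate the race: a separate "test, then create" is not atomic between two machines. Creating the lock file with O_EXCL makes at least the creation itself a single atomic step - this is a generic sketch, not storeBackup's checkLockFile code - but even that does not help when NFS blocks or a reply gets lost:)

use strict;
use warnings;
use Fcntl qw(O_WRONLY O_CREAT O_EXCL);

my $lockFile = '/nfs/backup/storeBackup.lock';   # example path

# O_CREAT|O_EXCL: testing and creating happen in one atomic step, so
# only one process can succeed, even if several try at the same moment
if (sysopen(my $fh, $lockFile, O_WRONLY | O_CREAT | O_EXCL)) {
    print $fh "$$\n";           # record our pid for diagnostics
    close($fh);
    # ... do the work, then unlink($lockFile) ...
} else {
    die "cannot start, lock file $lockFile already exists: $!\n";
}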

Sometimes I get requests about this, suggesting to implement this or that solution. But all the requirements are somehow different, e.g. wait until A and B are finished on their boxes, then start C on another box, maybe with other restrictions as well.

If you really want to make this work, you need a job scheduler (or an enterprise job scheduler). As far as I know, a widely used one is uc4 (not free software). Searching a little bit, I found the following tools:
artznet.sourceforge.net/
http://www.taskforest.com/
http://www.acelet.com/super/SuperScheduler/index.html
http://jcrontab.sourceforge.net/
http://www.sauronsoftware.it/projects/cron4j/
http://www.sos-berlin.com/modules/cjaycontent/
https://github.com/airbnb/chronos
https://github.com/resque/resque-scheduler
Probably, there are also schedulers packaged in the distribution you are using.

I have no idea whether / which one of the listed tools fulfills your needs - I simply do not have any experience with them. But I think this is the right way to go. Maybe you should start by looking for a scheduler which is simple to use and not overloaded with functionality.

Another reason for not "reinventing the wheel" in storeBackup: the reasons for storeBackup being started, waiting, or not started at all may be out of storeBackup's control (e.g. the status of LVM snapshots, database status, network connection, ...). So storeBackup just offers a simple interface for simple needs, or an interface to be used by job scheduling tools.

If you have a look at such tools, it would be nice if you published your results here. Maybe you could also help me write some notes for the documentation.

Hope this helps?

Heinz-Josef Claes <hjclaes>
Group administrator
Sun 22 Sep 2013 06:05:24 AM UTC, original submission:  

StoreBackupUpdateBackup comes with a warning that it should not be run in parallel with StoreBackup.  To that end I use lock files and run the two at different times.  Mostly this is OK because of the timing separation.

However, on occasion the backup takes longer than usual.  In these instances, the update part starts running and happily deletes the lock file of the remote process.  (In fact, the reason it's taking so long this time is that this locking behavior kept messing up my backups until the remote storage ran out of space, and now there's a ton of old backups to delete at once that never had the update run on them... basically it spiraled out of control.)

I have made both servers respect the lock file, even if they cannot find a local process, by inserting this into the 'else' case of checkLockFile:
# lock file exists, but no matching local process was found: assume the
# pid belongs to a process on the other server and refuse to start
$prLog->print('-kind' => 'E',
  '-str' => ["cannot start, old (REMOTE) instance with pid " .
     "<$pid> may be running"],
  '-exit' => 1);

I don't know if this is a bug per se, but it was unexpected behavior to me.  I expected that if a lock file existed, then storeBackup would not proceed.

I know that in power-loss or similar situations, this may cause a problem, but I'd rather have it do the right thing when everything is fine, and only mess up in extraordinary situations.

Anonymous

 


 


     

Follows 1 latest change.

Date        Changed by  Updated Field  Previous Value => Replaced by
2013-09-30  hjclaes     Status         None => Fixed
