sr #108861: Set up backup server that jails users in their respective directories, yet links and gives full recovery access?

Submitter:  no realname <cryptofriend>
Submitted:  Tue 04 Aug 2015 11:41:33 AM UTC
   
 
Category: None
Priority: 5 - Normal
Severity: 3 - Normal
Status: None
Privacy: Public
Assigned to: None
Open/Closed: Open
Operating System: GNU/Linux

Thu 20 Aug 2015 03:45:38 PM UTC, comment #12: 

Hello

> 1. The linkToDirs tool does not seem to cover my use case.

There is a way to use linkToDirs.pl in the way you want. Have a look at the following example:

hjc@fuhjc:~$ mkcd /tmp/a
hjc@fuhjc:/tmp/a$ ls
hjc@fuhjc:/tmp/a$ mkdir s
hjc@fuhjc:/tmp/a$ cd s
hjc@fuhjc:/tmp/a/s$ echo mist > s
hjc@fuhjc:/tmp/a/s$ echo mist > s2
hjc@fuhjc:/tmp/a/s$ cd ..
hjc@fuhjc:/tmp/a$ mkdir t
hjc@fuhjc:/tmp/a$ /opt/storeBackup/bin/linkToDirs.pl -w s s -t t
BEGIN     2015.08.20 17:30:29  5320 copy/link <s> to <s> to <t>
VERSION   2015.08.20 17:30:29  5320 linkToDirs.pl, 3.5.1 pre
INFO      2015.08.20 17:30:29  5320 start reading linkWith dirs </tmp/a/s>
INFO      2015.08.20 17:30:29  5320 start copying </tmp/a/s>
INFO      2015.08.20 17:30:29  5320 setting atime, mtime of directories ...
STATISTIC 2015.08.20 17:30:29  5320 read 2 items; created 1 dirs, 2 hard links, 0 copied; calced 2 md5 sums,
END       2015.08.20 17:30:29  5320 copy/link <s> to <s> to <t>
hjc@fuhjc:/tmp/a$ ls -li *
s:
insgesamt 8
68618 -rw-rw-r-- 3 hjc hjc 5 Aug 20 17:28 s
64063 -rw-rw-r-- 1 hjc hjc 5 Aug 20 17:28 s2

t:
insgesamt 0
62222 drwxrwxr-x 2 hjc hjc 80 Aug 20 17:28 s
hjc@fuhjc:/tmp/a$ ls -li s t/s/
s:
insgesamt 8
68618 -rw-rw-r-- 3 hjc hjc 5 Aug 20 17:28 s
64063 -rw-rw-r-- 1 hjc hjc 5 Aug 20 17:28 s2

t/s/:
insgesamt 8
68618 -rw-rw-r-- 3 hjc hjc 5 Aug 20 17:28 s
68618 -rw-rw-r-- 3 hjc hjc 5 Aug 20 17:28 s2
hjc@fuhjc:/tmp/a$

Now you have to delete the original directory (s).

linkToDirs.pl does not directly support what you want to do, because if the conversion of duplicate files into hard links is interrupted, you could lose data. Doing the job in the way shown above carries no risk of data loss. (But it is slower.)
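
A condensed sketch of that workaround, applied to the example above. The final rm/mv steps are my reading of "delete the original directory", shown only for illustration:

cd /tmp/a
/opt/storeBackup/bin/linkToDirs.pl -w s s -t t   # copy s into t while hard-linking duplicates against s
# after verifying t/s, replace the original with the deduplicated copy
rm -rf s
mv t/s s
rmdir t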


Heinz-Josef Claes <hjclaes>
Group administrator
Wed 19 Aug 2015 08:41:16 PM UTC, comment #11: 

Oh, and I almost forgot: doesn't the fact that the backup server eventually creates inter-series links in your example rely on the backup server providing client2 with the metadata of client1's backup once it has been merged and updated? So even if access to the files is separated, the metadata is shared, isn't it?

no realname <cryptofriend>
Wed 19 Aug 2015 08:35:21 PM UTC, comment #10: 

Hello Look,


I'll go over your comments in order.

1. The linkToDirs tool does not seem to cover my use case. It lets you copy from A to B while linking against C. What I want is basically to (incrementally) copy from A to A while linking against A. In effect: do not copy anything, just recursively go through a directory (precisely: the backup root), checksum the files and link them. This is exactly what the hardlink tool I linked below does. I wouldn't even know how to make use of linkToDirs in my current setup.


2. Well, I did consider this, too, but rejected it for two reasons:
        1) Most importantly: if you want to do root backups, your backup server will have to have root on all machines. And although, admittedly, a machine that offers no services other than SSH (for administration, not for the active backups) and does not run a browser is arguably the safest machine in most networks, I shudder at the thought of a single point of failure that grants root to all other machines when compromised.
        2) It just does not match my setup. The backup server is an old computer with (relatively) high energy consumption. Therefore, in my setup, the backup server is the machine to be woken via WOL by the clients. Since I plan to do backups only about once a week (just about everything I do is either sent by mail, uploaded to a server, pushed to a git repo...), it should not be running most of the time.

        And yet, an active server that has read-only root on all machines might be acceptable if every file were gpg-signed. One could create a separate, passwordless key on the client, used exclusively for backup signing, and exclude it from the backup. However, storeBackupRecoverBackup would have to implement gpg checking (or you would write a pre-restore script). Then, in my case, I would tell a Raspberry Pi to wake up the backup server at a scheduled time, have the server wake up the clients, and then do a read-only backup as root. This is maybe something to put some more thought into (a rough sketch of the signing idea follows at the end of this comment)...

3. I agree. It all depends on...

4. To be honest, I had to sketch your example to get my head around it (maybe interesting for those following the thread: http://fs1.directupload.net/images/150819/5sn232cc.png). I see your point: successful links will propagate to later backups, so time eventually eliminates the duplicates. What I noticed just now, though: when you do inter-series hard linking plus chrooting (and thus non-root access for recovery), you have to make sure that all clients can read their own files, even if they are linked to a file in a different backup with different permissions. For now, I solve this by externally chgrp-ing all files to "backupusers". I might just change the respective xxx_storebackup users' primary group to "backupusers"; that might do the trick. Yet I guess this is something you would also have to take care of yourself, even if you were doing server-initiated backups and left all the inter-backup linking to storeBackup. It has to be covered somehow, and that somehow depends on the server's users and groups, which storeBackup isn't aware of. How do you handle (isolated) access to a user's backup? You do that, right? You must have run into this issue, too: only one inode vs. n potential clients with legitimate read access.

And finally: this would not be an issue at all if you could run all backups as the same server-side user and just select the chroot dir based on the SSH key fingerprint. Unfortunately, OpenSSH's Match directive does not allow matching by key fingerprint.
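
Coming back to the gpg-signing idea from point 2, a purely illustrative sketch - the key name and paths are made up, storeBackup itself knows nothing about this, and older GnuPG versions may additionally need --pinentry-mode loopback:

# one-time: create a passwordless key used only for backup signing (GnuPG >= 2.1)
gpg --batch --passphrase '' --quick-generate-key backup-signer@client1 default default never
# before each backup: detach-sign the files that will be backed up
find /data -type f ! -name '*.sig' -exec gpg --batch --yes -u backup-signer@client1 --detach-sign {} \;
# before restoring, a pre-restore script could then run: gpg --verify file.sig file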

Best

cryptofriend

no realname <cryptofriend>
Wed 19 Aug 2015 07:28:56 AM UTC, comment #9: 

Hi cryptofriend,

as there is a lot to cover, I will try to give it some structure.

1. storeBackup will probably not add support for such things in the future, simply because it already has them. You might want to take a look at the documentation, for example at tools like linkToDirs: http://www.nongnu.org/storebackup/de/node41.html

2. A more elegant way that came to my mind might be (depending on the actual setup) doing the backup in reverse. Why is it always the client that has to do the backup? Why not let the server mount the clients' disks and pull the backup? Instead of the clients pushing their files to the server, the server could pull them from all the clients. This might be especially helpful with "nightly" backup solutions: since the server does all the timing, it can wake up the clients via WOL in the middle of the night and start pulling its backups.

3. Back to the plan I first mentioned. You pointed out that doing the first backup for a client would be a problem. Well, you are partly right about that; it's not that easy to manage with this setup. But for the first backup in a series I would simply make the master backup writable. Since I'm not adding a new backup series every two weeks (not even every two years), it would be quite easy to keep the environment somewhat controlled for that short period.

4. Non-deduplicated files within isolated mode. Well, not every file will be deduplicated in the first place, but they will be in the long run. Let's say you have two backup clients, you have already done the master backup and set up isolated mode. Now you add file A to both clients and file B only to client1, then do a backup of both clients (let's call it backup1). After you have merged the isolated backups and run storeBackupUpdateBackup, backup1 contains file A twice (once for each client) and file B once, for client1 - no hard links for these files. Now you set up isolated mode again, add file B to client2 as well and do your backups (backup2). Again you merge them and run storeBackupUpdateBackup. Now backup2 client1 file A is a hard link to backup1 client1 file A, and backup2 client1 file B is a hard link to backup1 client1 file B. But here comes the catch: backup2 client2 file A will also be a hard link to backup1 client1 file A, and backup2 client2 file B will also be hard-linked to backup1 client1 file B. So backup1 client2 file A will be sitting there alone, with no hard links pointing to it. In time, when you delete backup1, only one version of file A will remain in the backup.

Yes, this example is simplified, I know, but it works somewhat like this. I discovered this with my current setup (and a nice hint from Heinz-Josef*), where I don't use isolated mode but sometimes have two or more clients doing their backups at once. Since neither backup is finished, neither client can know what the other one will add, so it's unavoidable that some files are duplicated. But as I remove old backups regularly (storeBackupDel runs every night, right after storeBackupUpdateBackup), this is not much of an issue.

So, hopefully I didn't forget about a topic and this might help you.

*) By the way, @hjclaes: I never came back to report on my solution for the Windows clients. That is because I managed to replace them with Linux Mint, which simplifies not only the backup situation ;)

Greetings

Look

Anonymous
Tue 18 Aug 2015 11:21:39 AM UTC, comment #8: 

Hey,

I guess I found a solution, or rather a workaround, in the meantime. There is a tool called "hardlink" (https://packages.debian.org/jessie/hardlink). It takes a directory as an argument, looks for duplicate files and links them. It is by far not as sophisticated as storeBackup, but it actually does the trick.

The current plan is to back up regularly in normal (not isolated) mode, then run storeBackupUpdateBackup on the server, and then, via cron, run chgrp -R backupusers $BACKUPROOT && chmod g+rw $BACKUPROOT && hardlink $BACKUPROOT.
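
For illustration, the whole chain as a single hypothetical root crontab entry (the schedule and the /var/backups path are placeholders; as in the plan above, only the chgrp is recursive):

30 3 * * 0  storeBackupUpdateBackup.pl -b /var/backups && chgrp -R backupusers /var/backups && chmod g+rw /var/backups && hardlink /var/backups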

This gives me storeBackup's deduplication on the series level, and still, without storeBackup's knowledge or intervention, I can hard-link duplicate files that may have been created. I can browse through all my files as (local) root and copy single files. Once I need to restore a file or folder with the original permissions, storeBackupRecovery will allow me to do so.

I am still curious to find out whether somebody has a more elegant solution, or whether storeBackup will even actively support such an endeavor in the future.

no realname <cryptofriend>
Sun 16 Aug 2015 05:27:16 PM UTC, comment #7: 

Hello Lookbehind,

and thank you for your post. As you might have guessed, my plan was apparently very similar to yours. Sure, I do not use CIFS/NFS, but SSHFS will do the job just fine.

I know what the isolated mode was designed for. I just hoped that it might be implemented along the lines of: "Oh, no previous backup metadata found? Seems like the user needs a full backup, which can nevertheless be merged into an (empty) master backup."

That is because the problem with making an initial full backup is - if I understood Heinz-Josef correctly - that a full, non-isolated backup (which is what the initial backup would be) needs access to ALL OTHER BACKUPS, because it has to find out which files not to upload but to leave to the server for lateLinking. Any initial backup that has access to the other backups breaks our constraints. Any backup that does not have access to the other users' series will not be able to link against them.

The crucial question to me is whether storeBackup ever deletes files. Or, to be more specific, does it ever replace duplicates with hard links once it finds out that, despite its best efforts not to upload duplicates from the client side, they do exist? If I am not mistaken, storeBackupUpdateBackup only draws its jobs from the metadata files that the client created, and those are, in our case, necessarily incomplete. I suppose I'd need something like storeBackupTrimBackups (or so), whose job would be to run over the backup root and find files with identical checksums but different inodes, then delete the latter and instead create a hard link to the former.
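
Purely as an illustration of what such a trim pass would have to find, duplicates can be listed by checksum with GNU coreutils (this also lists copies that are already hard-linked, so a real tool would additionally have to compare inodes before re-linking anything):

find "$BACKUPROOT" -type f -exec md5sum {} + | sort | uniq -w32 --all-repeated=separate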

Let's go through your scenario step by step: on client1 you create a full master backup to the r/w share with lateLinks=yes. You run storeBackupUpdateBackup on the server. Then client2 does the same. As it does not have access to the backup root, it will duplicate files (which would be perfectly acceptable for me in the first place). You run storeBackupUpdateBackup on the server and it sets the hard links, but only within the client2 backup (tried it). Now you can run isolated backups. Client1 copies the metadata of the client1 master backup and pushes the increments to the server. It cannot work with the metadata of the client2 master backup, since that would break the isolation. So now you run storeBackupMergeIsolatedBackup on the server and you end up with a yet-to-be-linked backup. You run storeBackupUpdateBackup, but it will - I guess - only link against the client-specific series.

Did you keep that in mind in your scenario? Is it even entirely correct? Could you elaborate on how exactly you plan to overcome this master-backup-with-no-access-to-other-backups problem?

Greetings to (and from) Germany

no realname <cryptofriend>
Fri 14 Aug 2015 12:39:33 PM UTC, comment #6: 

Hi cryptofriend,

I think your isolated backup fails because you didn't make a normal full backup in the first place. The intention behind isolated backups was to be able to make backups while on the road, without having the big master backup available. When returning home, one would simply include the isolated backups in the big master backup and go on with that one.

So you would have to create a normal backup first, then generate the configs for isolated mode, do your isolated backup and merge that into the normal backup afterwards.

I have a somewhat similar issue with my current backup setup and have already thought about rearranging it all around this isolated mode. I haven't done that yet, though, because I don't have the necessary hardware yet.
Unlike you, I'm not that concerned about encryption. Not on the disk, as it would only help if someone gained physical access to the disk, which would imply some other issues far more frightening than losing my backups. Also not on the network, because I'm going to use a dedicated VLAN for backup purposes.
But similar to you, I'm concerned about users having access to data in the backup that they should not be able to see. Not because I don't trust the users themselves, but who knows what kind of malicious software does evil things to the backups? With write access to the backup folder, it might even delete all backups at once.

So here is my plan for how this might work out for me. Maybe it could help you as well.
My backup server will have a main folder for all backups of all series, something like /backups/SeriesA/, /backups/SeriesB/, ... This will hold the actual backups. It will be accessible through a CIFS share with read-only permissions, and via CIFS access control I will make sure that every user only has access to the backup series that belongs to him. This is for "small" restoring purposes, like retrieving that important movie one accidentally removed ;)
For bigger restoring purposes, like restoring an entire system, I can always create a temporary read-only NFS share for that series.
There will be a second folder, /new-backups/ (or something like that), that will be accessible via NFS with write permissions. This is where new backups will be stored as isolated-mode backups. Via cron, storeBackup will merge them into the big master backup every night, clean out the /new-backups/ dir right afterwards and create the new config for isolated backups.
This way no user could delete ALL backups at once, but at most the new ones of that particular day (one could shorten the window by running that cron job more than once a day, or even having it triggered by the backup process itself). The important part of the plan is that creating a backup and restoring one are done through different shares, and the two shares do not get access to the same folder. Only the merge process on the server does. And since the server has access to all series, it is also able to deduplicate everything.
Also, because writing and reading backups are done through different shares, one could easily limit access to certain files/folders in the big backup. I guess this could even be done with SSH, because all you need is some sort of authentication (a rough sketch of the two shares follows below).
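
A rough sketch of what those two shares might look like - share names, paths, users and the network range are invented for illustration:

# /etc/samba/smb.conf - read-only restore access, one share per series
[SeriesA]
   path = /backups/SeriesA
   read only = yes
   valid users = userA

# /etc/exports - writable NFS share for new isolated backups only
/new-backups  192.168.1.0/24(rw,sync,root_squash)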

The downside of this plan: if the same file is downloaded to multiple machines on the same day and then backed up, it will end up multiple times in that day's backup. But since I delete my old backups on a regular basis (--keepWeekDays and so on), this will hopefully not be a huge problem in the long term.

But as I said, this is just the plan. I haven't implemented it yet, so I can't tell you about any possible issues I haven't thought of.


Greetings from Germany

Lookbehind

Anonymous
Tue 11 Aug 2015 06:17:57 PM UTC, comment #5: 

Just had a quick glance at the code regarding the error:

my ($metaDataBackup, @isolatBackups);
{
    local *DIR;
    my $isolDir = "$isolateBackupDir/$series";
    opendir(DIR, $isolDir) or
        $prLog->print('-kind' => 'E',
                      '-str' => ["cannot opendir <$isolDir>, exiting"],
                      '-exit' => 1);
    my ($entry, @entries);
    while ($entry = readdir DIR)
    {
        next if (-l $entry and not -d $entry);   # only directories
        push @entries, $entry
            if $entry =~
            /\A(\d{4})\.(\d{2})\.(\d{2})_(\d{2})\.(\d{2})\.(\d{2})\Z/o;
    }
    closedir(DIR);

    $prLog->print('-kind' => 'E',
                  '-str' =>
                  ["didn't find a backup in $isolDir, exiting"],
                  '-exit' => 1)
        if @entries < 2; <-- shouldn't this be 1?

    ($metaDataBackup, @isolatBackups) = sort (@entries);

#print "metaDataBackup = $metaDataBackup\n";
#print "isolatBackups = @isolatBackups\n";
}

I see why calling storeBackupMergeIsolatedBackup.pl fails in my case. I have only one unmerged backup in my folder. Is this intended behavior? And if so, why?
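
One guess, based purely on the code above (the sort puts the first, i.e. oldest, entry into $metaDataBackup and treats the rest as @isolatBackups), with made-up dates:

# expected layout of the isolated series directory in the intended workflow:
#   2015.08.10_22.00.00/   <- metadata copied from the master backup ($metaDataBackup)
#   2015.08.11_13.01.05/   <- one or more new isolated backups to merge (@isolatBackups)
# with only a single entry present there would be nothing to merge against,
# which may be why the code insists on at least two entries.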

Thanks in advance!

no realname <cryptofriend>
Tue 11 Aug 2015 11:17:31 AM UTC, comment #4: 

Hello,

so, I've been experimenting with isolated mode and ignorePerms and I seem to be getting closer. Still, this is quite tricky. I would like to share my exact configuration so that hopefully I can eliminate the vagueness of my previous posts.

## configuration on the backupserver:

# directory structure

  • needs to be root owned in order for ssh-chroot to work


root@backupserver:/home# ls -l
insgesamt 12
drwxr-xr-x 4 root      root      4096 Aug 11 11:14 8460p_storebackup
drwxr-xr-x 4 $USER     $USER 4096 Aug 11 11:10 $USER
drwxr-xr-x 3 root      root      4096 Aug 11 11:18 x200t_storebackup

root@backupserver:/home# cd 8460p_storebackup/
root@backupserver:/home/8460p_storebackup# ls -la
insgesamt 20
drwxr-xr-x 4 root              root              4096 Aug 11 11:14 .
drwxr-xr-x 5 root              root              4096 Aug 11 11:10 ..
drwxr-xr-x 3 8460p_storebackup 8460p_storebackup 4096 Aug 11 09:50 8460p_isolated
drwxr-xr-x 8 8460p_storebackup 8460p_storebackup 4096 Aug 11 12:14 8460p_storebackup
-rw-r--r-- 1 root              root               411 Aug 11 10:17 isolate-.storebackup.conf

root@backupserver:/home/x200t_storebackup# ls -la
insgesamt 20
drwxr-xr-x 4 root              root              4096 Aug 11 12:26 .
drwxr-xr-x 5 root              root              4096 Aug 11 11:10 ..
-rw-r--r-- 1 root              root               431 Aug 11 11:18 isolate-.storebackup.conf
drwxr-xr-x 2 x200t_storebackup x200t_storebackup 4096 Aug 11 12:26 x200t_isolated
drwxr-xr-x 2 x200t_storebackup x200t_storebackup 4096 Aug 11 12:26 x200t_storebackup

  • I'll try to run the backup in isolated mode with the following config, generated on the client and transferred to the server. It is root-owned, because root can access the backup root. I'll exclude everything but boot and some test files which I'll create in /


root@backupserver:/home/x200t_storebackup# grep -v '^#\|^;\|^$' /home/x200t_storebackup/isolate-.storebackup.conf
sourceDir=/
backupDir="/home/x200t_storebackup/x200t_isolated"
mergeBackupDir="/var/backups"
series=x200t_storebackup
otherBackupSeries=0:8460p_storebackup 0:x200t_storebackup
exceptDirs=mnt etc opt root srv var proc usr run dev sys tmp home
cpIsGnu=yes
linkSymlinks=yes
followLinks=0
ignorePerms=yes
lateLinks=yes
lateCompress=yes
comprRule=0
debug=1
deleteNotFinishedDirs=yes
keepMinNumber=99999
logInBackupDir=yes

  • the second config is identical except for the series and (isolated) backup dirs


root@backupserver:/home/x200t_storebackup# diff /home/x200t_storebackup/isolate-.storebackup.conf /home/8460p_storebackup/isolate-.storebackup.conf
2c2
< backupDir="/home/x200t_storebackup/x200t_isolated"
---
> backupDir="/home/8460p_storebackup/8460p_isolated"
4c4
< series=x200t_storebackup
---
> series=8460p_storebackup

  • problem so far: We do not have any "true" backup root, since both users are chrooted. Therefore, use bind mounts:


root@backupserver:~# grep store /etc/fstab
/home/8460p_storebackup/8460p_storebackup /var/backups/8460p_storebackup none bind 0 0
/home/x200t_storebackup/x200t_storebackup /var/backups/x200t_storebackup none bind 0 0


  • now we do!


root@backupserver:~# ls -la /var/backups/
insgesamt 16
drwxr-xr-x  4 root              root              4096 Aug 11 11:16 .
drwxr-xr-x 13 root              root              4096 Aug  9 01:21 ..
drwxr-xr-x  2 8460p_storebackup 8460p_storebackup 4096 Aug 11 12:37 8460p_storebackup
drwxr-xr-x  2 x200t_storebackup x200t_storebackup 4096 Aug 11 12:26 x200t_storebackup

# users and groups

  • we'll be chrooting backupusers based on their gid


root@backupserver:/home# id x200t_storebackup
uid=1002(x200t_storebackup) gid=1003(x200t_storebackup) Gruppen=1003(x200t_storebackup),1001(backupusers)

root@backupserver:/home# id 8460p_storebackup
uid=1001(8460p_storebackup) gid=1002(8460p_storebackup) Gruppen=1002(8460p_storebackup),1001(backupusers)

# ssh config

root@backupserver:~# grep -A 10 backupusers /etc/ssh/sshd_config
Match Group backupusers
ChrootDirectory %h
ForceCommand internal-sftp
AllowTcpForwarding no
PermitTunnel no
X11Forwarding no

## on the client

  • we configure in fstab


user@8460p $ tail -n 1 /etc/fstab                                                                                                                                                           
8460p_storebackup@backupserver:/ /mnt/backup fuse.sshfs noauto,gid=0,uid=0,nomap=error,_netdev,IdentityFile=/root/.ssh/8460p_storebackup_rsa,reconnect 0 0

  • and make sure it's mounted


user@8460p $ mount | tail -n 1                                                                                                                                                              
8460p_storebackup@backupserver:/ on /mnt/backup type fuse.sshfs (rw,relatime,user_id=0,group_id=0,_netdev)

  • see what's inside


[root@8460p ~]# ls -la /mnt/backup/
total 16
drwxr-xr-x 1 root root 4096 Aug 11 12:31 .
drwxr-xr-x 1 root root   12 Aug  2 22:34 ..
drwxr-xr-x 1 root root 4096 Aug 11 09:50 8460p_isolated
drwxr-xr-x 1 root root 4096 Aug 11 12:37 8460p_storebackup
-rw-r--r-- 1 root root  418 Aug 11 12:31 isolate-.storebackup.conf

  • notice that both the chroot and the permission mapping have worked: while these files are owned by 1001:1002 on the server, they are accessible to root only on the client


  • Time to start the first backup. First, a list of what's going to be backed up. We will create a file, write "test" to it and hand it over to the system's primary user (uid/gid 1000).


[root@8460p /]# echo "test" > testfile
[root@8460p /]# chown 1000:1000 testfile
[root@8460p /]# ls -lan
total 25
drwxr-xr-x   1    0    0  150 Aug 11 12:50 .
drwxr-xr-x   1    0    0  150 Aug 11 12:50 ..
lrwxrwxrwx   1    0    0    7 Feb 15 22:57 bin -> usr/bin
drwxr-xr-x   4    0    0 1024 Aug  6 11:47 boot
drwxr-xr-x  19    0    0 3240 Aug 11 08:39 dev
drwxr-xr-x   1    0    0 3532 Aug 11 09:30 etc
drwxr-xr-x   1    0    0   62 Aug  4 10:33 home
lrwxrwxrwx   1    0    0    7 Feb 15 22:57 lib -> usr/lib
lrwxrwxrwx   1    0    0    7 Feb 15 22:57 lib64 -> usr/lib
drwxr-xr-x   1    0    0   12 Aug  2 22:34 mnt
drwxr-xr-x   1    0    0    0 Mar 24 08:41 opt
dr-xr-xr-x 234    0    0    0 Aug 10 20:49 proc
-rw-r--r--   1    0    0 1276 Jan  1  2015 README
drwxr-x---   1    0    0  292 Aug 11 12:50 root
drwxr-xr-x  25    0    0  640 Aug 11 11:08 run
lrwxrwxrwx   1    0    0    7 Feb 15 22:57 sbin -> usr/bin
drwxr-xr-x   1    0    0   14 Jan  8  2015 srv
dr-xr-xr-x  13    0    0    0 Aug 11 12:49 sys
-rw-r--r--   1 1000 1000    5 Aug 11 12:52 testfile
drwxrwxrwt  12    0    0  900 Aug 11 12:12 tmp
drwxr-xr-x   1    0    0   80 May 13 21:34 usr
drwxr-xr-x   1    0    0  138 May 13 21:34 var

  • now the backup, run directly in isolated mode. In the local config, I omitted the other backup series (x200t_storebackup) because it does not exist yet.


[root@8460p ~]# grep -v '^#\|^;\|^$' isolate-.storebackup.conf
sourceDir=/
backupDir="/mnt/backup/8460p_isolated"
mergeBackupDir="/mnt/backup"
series=8460p_storebackup
otherBackupSeries=0:8460p_storebackup
exceptDirs=mnt etc opt root srv var proc usr run dev sys tmp home
cpIsGnu=yes
linkSymlinks=yes
followLinks=0
ignorePerms=yes
lateLinks=yes
lateCompress=yes
comprRule=0
debug=1
deleteNotFinishedDirs=yes
keepMinNumber=99999
logInBackupDir=yes


[root@8460p ~]# storeBackup.pl -f isolate-.storebackup.conf
...
debug output
...

STATISTIC 2015.08.11 13:01:08 18202  [sec] |      user|    system
STATISTIC 2015.08.11 13:01:08 18202 -------+----------+----------
STATISTIC 2015.08.11 13:01:08 18202 process|      0.61|      0.20
STATISTIC 2015.08.11 13:01:08 18202 childs |      0.52|      0.14
STATISTIC 2015.08.11 13:01:08 18202 -------+----------+----------
STATISTIC 2015.08.11 13:01:08 18202 sum    |      1.13|      0.34 => 1.47 (1s)
STATISTIC 2015.08.11 13:01:08 18202                    directories = 21
STATISTIC 2015.08.11 13:01:08 18202                          files = 351
STATISTIC 2015.08.11 13:01:08 18202                 symbolic links = 4
STATISTIC 2015.08.11 13:01:08 18202                     late links = 0
STATISTIC 2015.08.11 13:01:08 18202                    named pipes = 0
STATISTIC 2015.08.11 13:01:08 18202                        sockets = 0
STATISTIC 2015.08.11 13:01:08 18202                  block devices = 0
STATISTIC 2015.08.11 13:01:08 18202              character devices = 0
STATISTIC 2015.08.11 13:01:08 18202      new internal linked files = 0
STATISTIC 2015.08.11 13:01:08 18202               old linked files = 0
STATISTIC 2015.08.11 13:01:08 18202                unchanged files = 0
STATISTIC 2015.08.11 13:01:08 18202                   copied files = 347
STATISTIC 2015.08.11 13:01:08 18202               compressed files = 0
STATISTIC 2015.08.11 13:01:08 18202                  blocked files = 0
STATISTIC 2015.08.11 13:01:08 18202    excluded files because rule = 0 (0.0 )
STATISTIC 2015.08.11 13:01:08 18202    included files because rule = 0 (0.0 )
STATISTIC 2015.08.11 13:01:08 18202         max size of copy queue = 342
STATISTIC 2015.08.11 13:01:08 18202  max size of compression queue = 0
STATISTIC 2015.08.11 13:01:08 18202            calculated md5 sums = 694
STATISTIC 2015.08.11 13:01:08 18202                    forks total = 12
STATISTIC 2015.08.11 13:01:08 18202                      forks md5 = 7
STATISTIC 2015.08.11 13:01:08 18202                     forks copy = 5
STATISTIC 2015.08.11 13:01:08 18202                    forks bzip2 = 0
STATISTIC 2015.08.11 13:01:08 18202                  sum of source =  49M (51012077)
STATISTIC 2015.08.11 13:01:08 18202              sum of target all =  49M (51012077)
STATISTIC 2015.08.11 13:01:08 18202              sum of target all = 100.00%
STATISTIC 2015.08.11 13:01:08 18202              sum of target new =  49M (51012077)
STATISTIC 2015.08.11 13:01:08 18202              sum of target new = 100.00%
STATISTIC 2015.08.11 13:01:08 18202             sum of md5ed files =  49M (51012077)
STATISTIC 2015.08.11 13:01:08 18202             sum of md5ed files = 100.00%
STATISTIC 2015.08.11 13:01:08 18202     sum internal linked (copy) = 0.0  (0)
STATISTIC 2015.08.11 13:01:08 18202    sum internal linked (compr) = 0.0  (0)
STATISTIC 2015.08.11 13:01:08 18202          sum old linked (copy) = 0.0  (0)
STATISTIC 2015.08.11 13:01:08 18202         sum old linked (compr) = 0.0  (0)
STATISTIC 2015.08.11 13:01:08 18202           sum unchanged (copy) = 0.0  (0)
STATISTIC 2015.08.11 13:01:08 18202          sum unchanged (compr) = 0.0  (0)
STATISTIC 2015.08.11 13:01:08 18202                 sum new (copy) =  49M (51012077)
STATISTIC 2015.08.11 13:01:08 18202                sum new (compr) = 0.0  (0)
STATISTIC 2015.08.11 13:01:08 18202     sum new (compr), orig size = 0.0  (0)
STATISTIC 2015.08.11 13:01:08 18202                 sum new / orig = 100.00%
STATISTIC 2015.08.11 13:01:08 18202       size of md5CheckSum file =  11k (11495)
STATISTIC 2015.08.11 13:01:08 18202     size of temporary db files = 0.0  (0)
STATISTIC 2015.08.11 13:01:08 18202            deleted old backups = 0
STATISTIC 2015.08.11 13:01:08 18202            deleted directories = 0
STATISTIC 2015.08.11 13:01:08 18202                  deleted files = 0
STATISTIC 2015.08.11 13:01:08 18202           (only) removed links = 0
STATISTIC 2015.08.11 13:01:08 18202 freed space in old directories = 0.0  (0)
STATISTIC 2015.08.11 13:01:08 18202       add. used space in files =  49M (51023572)
STATISTIC 2015.08.11 13:01:08 18202                backup duration = 3s
STATISTIC 2015.08.11 13:01:08 18202 over all files/sec (real time) = 117.00
STATISTIC 2015.08.11 13:01:08 18202  over all files/sec (CPU time) = 238.78
STATISTIC 2015.08.11 13:01:08 18202                      CPU usage = 49.00%
INFO      2015.08.11 13:01:08 18202 removing lock file </tmp/storeBackup.lock>
END       2015.08.11 13:01:08 18202 backing up directory </> to </mnt/backup/8460p_isolated/8460p_storebackup/2015.08.11_13.01.05>

  • now check. First, on the local mount as root:


[root@8460p 2015.08.11_13.01.05]# pwd
/mnt/backup/8460p_isolated/8460p_storebackup/2015.08.11_13.01.05
[root@8460p 2015.08.11_13.01.05]# ls -la
total 76
drwxr-xr-x 1 root root  4096 Aug 11 13:01 .
drwx------ 1 root root  4096 Aug 11 13:01 ..
drwx------ 1 root root  4096 Aug 11 13:01 boot
-rw-r--r-- 1 root root 11495 Aug 11 13:01 .md5CheckSums.bz2
-rw------- 1 root root   888 Aug 11 13:01 .md5CheckSums.info
-rw------- 1 root root  1276 Aug 11 13:01 README
drwx------ 1 root root  4096 Aug 11 13:01 .storeBackupLinks
-rw-r--r-- 1 root root 35380 Aug 11 13:01 .storeBackup.log
-rw------- 1 root root     5 Aug 11 13:01 testfile

  • The testfile is present (and locally root-owned), boot is present, too. The final destination directory is, of course, empty:


[root@8460p 8460p_storebackup]# pwd
/mnt/backup/8460p_storebackup
[root@8460p 8460p_storebackup]# ls -la
total 8
drwxr-xr-x 1 root root 4096 Aug 11 12:56 .
drwxr-xr-x 1 root root 4096 Aug 11 12:31 ..

  • Now we check as root on the server side. All files belong to 8460p_storebackup, as they should.


root@backupserver:/home/8460p_storebackup/8460p_isolated/8460p_storebackup/2015.08.11_13.01.05# ls -la
insgesamt 76
drwxr-xr-x 4 8460p_storebackup 8460p_storebackup  4096 Aug 11 13:01 .
drwx------ 3 8460p_storebackup 8460p_storebackup  4096 Aug 11 13:01 ..
drwx------ 3 8460p_storebackup 8460p_storebackup  4096 Aug 11 13:01 boot
-rw-r--r-- 1 8460p_storebackup 8460p_storebackup 11495 Aug 11 13:01 .md5CheckSums.bz2
-rw------- 1 8460p_storebackup 8460p_storebackup   888 Aug 11 13:01 .md5CheckSums.info
-rw------- 1 8460p_storebackup 8460p_storebackup  1276 Aug 11 13:01 README
drwx------ 2 8460p_storebackup 8460p_storebackup  4096 Aug 11 13:01 .storeBackupLinks
-rw-r--r-- 1 8460p_storebackup 8460p_storebackup 35380 Aug 11 13:01 .storeBackup.log
-rw------- 1 8460p_storebackup 8460p_storebackup     5 Aug 11 13:01 testfile

  • Or, to put it in numbers


root@backupserver:/home/8460p_storebackup/8460p_isolated/8460p_storebackup/2015.08.11_13.01.05# ls -lan
insgesamt 76
drwxr-xr-x 4 1001 1002  4096 Aug 11 13:01 .
drwx------ 3 1001 1002  4096 Aug 11 13:01 ..
drwx------ 3 1001 1002  4096 Aug 11 13:01 boot
-rw-r--r-- 1 1001 1002 11495 Aug 11 13:01 .md5CheckSums.bz2
-rw------- 1 1001 1002   888 Aug 11 13:01 .md5CheckSums.info
-rw------- 1 1001 1002  1276 Aug 11 13:01 README
drwx------ 2 1001 1002  4096 Aug 11 13:01 .storeBackupLinks
-rw-r--r-- 1 1001 1002 35380 Aug 11 13:01 .storeBackup.log
-rw------- 1 1001 1002     5 Aug 11 13:01 testfile

  • now, on the server side, we use storeBackupMergeIsolatedBackup.pl along with the previously mentioned isolation configs:


root@backupserver:/home/8460p_storebackup/8460p_isolated/8460p_storebackup# cd 2015.08.11_13.01.05/
root@backupserver:/home/8460p_storebackup/8460p_isolated/8460p_storebackup/2015.08.11_13.01.05# storeBackupMergeIsolatedBackup.pl -f /home/8460p_storebackup/isolate-.storebackup.conf
ERROR   2015.08.11 13:13:43  1402 didn't find a backup in /home/8460p_storebackup/8460p_isolated/8460p_storebackup, exiting

  • But this fails. The backup is present, though; it's right there in the path...
no realname <cryptofriend>
Sun 09 Aug 2015 06:03:53 PM UTC, comment #3: 

Hello Heinz-Josef

thank you for your response and thank you for making answering as easy as iterating through your points. :-)

1. The server I was talking about is my personal server under my physical control. Yet it offers services on the internet, through virtual machines, to a very limited user base. Still, when using this server as a backup destination, I feel better when the backup is encrypted. I am not even remotely considering feeding the big providers my (meta)data. :-)

2. I totally agree. I don't remember if I read that in your documentation or somewhere else but: Nobody wants backup. Everyone wants restore. Therefore it is best to make restore easy. If you have to "learn" how to restore, as other backup suites advise you to, you realize that the conveniences they offer come at a price.

3. This is why I set up a dedicated storeBackup server. Its hard disk is dm-crypted. I thought to myself that I'd be OK storing my data - unencrypted at runtime - on a local machine that does not offer anything except SSH with pubkey authentication on the LAN.

4. I actually read that thread before I posted. I am not sure encryption on a per-file basis is relevant anymore when using dm-crypt. If I wanted to separate users from one another (by encryption), then I'd have to use different keys, resulting in different files and thus rendering inter-backup hard linking impossible. Makes me think I'm better off with dm-crypt, aren't I?

SSH:
Am I mistaken in assuming that the uid/gid issues are independent of the file transfer protocol? As you say, in the end they're just numbers. Provided I do not want to grant backup clients root, the problem arises with the series folder on the server. It just happens to be owned by 1001:1002 in my case. 1001 is the backup user on the server; 1002 is the backupusers group by which I ssh-chroot users. Now, if I do not do any remapping on the client, the result is that even when doing a backup as (local) root, the server will write the files as 1001:1002. The consequence: whoever is the LOCAL user with uid 1001 is suddenly able to tinker with the root files in the backup. The same applies to anyone in the LOCAL group 1002.

Now if I do remapping to my local user, I end up in the same situation: Any program running with user rights gains write access to my full root backup.

As a remedy, I experimented with root-mounting the sshfs share, but there were other issues which I do not recall exactly. If I give it another try, I will make sure to test this again.

The ignorePerms option might be a thing... and it is apparently related to another issue that, I realized only after my second post, might impact my entire backup strategy: the main reason I longed for deduplication and hard linking is that when I download a movie or some mp3s from my media server to my local machine, I do not want to have to keep track of whether these files have been backed up before. I just want the backup software to recognise that the file is already there (because it's in the server's backup). Now I realized that the media directory on the server has the sgid bit set. That means: once I download a file from the server, its permissions have changed, and it is thus not subject to hard linking anymore.
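
A small illustration of the sgid effect described above (directory, group and file names are made up; it assumes a group "media" exists and you are a member of it):

mkdir shared && chgrp media shared && chmod g+s shared
cp song.mp3 shared/                 # the copy inherits the directory's group "media"
ls -l song.mp3 shared/song.mp3      # same content, but different group (and possibly mode)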

And here comes the question: does the use of --ignorePerms also enable storeBackup to hard-link files that have the same content but different permissions? This, by the way, seems to be the way the BackupPC software (also Perl, also free software) realizes hard linking of files with different permissions. Can --ignorePerms do the same?

USER IDs:
Due to the lack of both encryption and (sane) authentication, NFS is not an option. I guess, though, that the same limitations apply: either you do a user remapping or you don't, and neither of those choices seems "clean" for now. What seems crucial to me is that you say each backup process has to have access to older backups, and also to older backups of other clients. This must be the reason the inter-backup linking failed. I assumed it would be sufficient for storeBackupUpdateBackup to have access to all backup series. But, of course, if you want to minimize network payloads, you make the clients check which files really need to be transferred.

Therefore, I will definitely look into the isolated mode. Maybe the combination of isolated mode and ignorePerms will get me closer to where I want to be. :-)

Thank you again!

no realname <cryptofriend>
Sun 09 Aug 2015 07:05:07 AM UTC, comment #2: 

Hi,

lots of stuff. I'll try to answer and not forget your most important points ;-)


ENCRYPTION:

1.
I don't know which kind of internet-connected server you are using for storing your backup - whether it's (really) under your control or not. In general, my personal opinion is to be very careful about the data. If the data is copied (stolen) by "someone" today, it may be decrypted in 10 years or so, when processing power is much cheaper or bugs in the encryption software have become known. Or maybe the seed was manipulated, whatever. So I think it's best not to deliver even encrypted data to those guys interested in everything.

2.
The first and most important aspect of storeBackup is (from documentation):
"The most important aspect of a backup tool is easy restoring from a transparent (native) storage format."
With this topic in mind, built-in encryption is problematic - especially if there are bugs in storeBackup. Therefore, I think it's best to use something outside of storeBackup; something which is proven by specialists and not just a hack for storeBackup.
Btw: the same problem of a non-transparent backup format arises when using deduplication. storeBackup avoids the typical complexities (indices, databases) by simply using hard links, also for "blocked files".

3.
A good choice for encryption is something proven, with (probably) good support in the future (my opinion). Therefore, I'm personally using dm-crypt, which is part of the mainline (Torvalds) kernel. This encrypts everything, including file names, directory structure and permissions.

4.
If you want to encrypt on a per-file basis, please have a look at:
http://savannah.nongnu.org/support/?108718


SSH:
As you mentioned, if you use SSH, you have to deal with its limitations. There's an option --ignorePerms; maybe this helps to avoid needing root permissions on the backup server!?
You may also use NFSv4. You can tunnel it via a VPN. It should also be possible to tunnel it over SSH - as far as I know, NFSv4 only uses one port. But I have no experience with using NFSv4 through an SSH tunnel.
Did you ever think of using replication to copy a local backup to another location?


USER IDs:
If you are using NFS, user (and group) IDs are just numbers. It doesn't matter whether the user names or user IDs on an NFS client also exist on the NFS server. (This is not the case if you use the mapping delivered with NFSv4.)
If your NFS clients have incompatible user or group IDs, you should only give each client access to its own backup.
I think there's only one problem at the moment: you want to run backups with deduplication from different boxes. At least when running the backup, each backup process needs root permissions on the older backups. This means root has access to all the other files. If you do not trust the root users on the different boxes you back up, you have a real issue here.
There's a solution to this, and I will implement it in an optimized way. It's easy - take a look at "isolated mode" and then you should know what I mean.


HARD LINKING:

> Another one is that, contrary to my previous assumption, the inter-backup hard linking did not seem to have worked.


Please repeat your test with the current version if necessary. If the problem still exists, post your exact configuration. For me, linking between different series in the same backupDir works. Maybe you found some combination with problems!?

Heinz-Josef Claes <hjclaes>
Group administrator
Thu 06 Aug 2015 11:27:10 AM UTC, comment #1: 

So...

I've put some more thought into this and I'm really beginning to question whether storebackup is the right tool for my needs.

To begin with, I reconsidered the illustration found at http://www.nongnu.org/storebackup/en/node20.html

To me it seems this layout works as expected iff the users/groups AND their respective uids/gids are the same on all three (logical) machines, that is, backup client1, backup client2 and the linker/compressor, which, in this illustration, may be assumed to run on one of the clients. So if you're using two similarly set up machines and back up to an external hard drive, this may apply.

But just consider for a moment that you have two real-life users on each backup client. On client1 it's primaryUser:1000 and secondaryUser:1001, and on the other machine it's the other way around: secondaryUser:1000 and primaryUser:1001. First of all, the linking will fail because the ownership and permissions are part of the inode. This is not a limitation of storeBackup, but it is important to keep in mind when aiming for high hard-link ratios.

So what happens is: when you attach the assumed external hard drive to client2, files that belong to group 1004 (which may have been something like "workgroup" on client1) are now readable by whatever group gid 1004 represents on client2.

You could argue that physical access means root access anyway, and so client2 could access all files on the hard drive under any circumstances. This is part of the reason I wanted to use a backup server: so I could limit access to individual users.

In the meantime I made up my mind about how to properly set this up and this is what I've come up with (I haven't actually tried all of it yet):

On the server:

  • setup SSH on backupserver
  • chroot dedicated backup users in their root:root-owned home directories
  • create a subdirectory of the same name, owned by backupuser:backupuser
  • bind mount all of the latter directories to /var/backups on the server


On the client:

  • create an fstab entry for sshfs that is root-mountable only and does not do any user mapping
  • create an fstab entry that is user-mountable and does a user mapping from backupuser->localuser (a sketch of both entries follows after the procedures below)


Procedure as root on the client (preferably done via cron)

  • mount root-only-mountable sshfs
  • do a full root backup using lateLinks
  • umount sshfs


Procedure as root on the server (preferably done via cron)

  • run storeBackupUpdateBackup -b /var/backups once the backup is finished


Now that the permissions within the user backup directory on the server...

Procedure as user on the client:

  • mount user-mountable sshfs (with id-mapping of backupuser to localuser) and access the backup
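
A hypothetical pair of fstab entries for these two mounts (host, paths and key files are placeholders; option sets are kept minimal):

# root-only mount, no uid/gid mapping - used by the cron backup job
backupuser@backupserver:/  /mnt/backup         fuse.sshfs  noauto,IdentityFile=/root/.ssh/backup_rsa,reconnect  0 0
# user mount, remote backupuser mapped to the local user - used for browsing and restoring
backupuser@backupserver:/  /home/user/.backup  fuse.sshfs  noauto,users,idmap=user,IdentityFile=/home/user/.ssh/backup_rsa,reconnect  0 0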


There are a few drawbacks. The first one is that when you root-mount on the client and your (remote) backupuser gid happens to match some random OTHER gid on your system, then, despite the share being root-mounted, any local user who is a member of that group has access to your entire backup. Again, you could argue that this very user could simply read the files on the local filesystem instead of the remote one. But since this user has access to all previous backups as well, he might even read deleted files. I do not really know how to solve this in a clean way.

Another one is that, contrary to my previous assumption, the inter-backup hard linking does not seem to have worked. I thought it had during my first tests, but now I just created a file called "test", backed it up, updated the remote backup, created the (same) "test" file on another machine, backed it up, updated the backup - and yet the hard link count was "1" for each file respectively. I did make sure to check as root on the backup server, as sshfs does not report the actual link count of the remote system.

So yeah, any thought, any input on this is welcome.

no realname <cryptofriend>
Tue 04 Aug 2015 11:41:33 AM UTC, original submission:  

Hello,

I know this isn't common, but I would first of all like to share my motivation for digging into the use cases of storebackup.

I value encryption very much and have been using duplicity for quite some time. Yet at some point - luckily when I did not need it - duplicity was not able to restore my backup. STFW did not help, the error seemed to be known though. So I was left with tons of incrementally intertwined, encrypted and compressed container files that I could very well decrypt and decompress - but I had no means to systematically search them. So I dropped the backup, created a new one (again, duplicity) and was left with the unpleasant feeling that errors like these might strike again right when I'd be in desperate need of a backup.

So once every year, I decide that my backup solution is both bloated and dangerous. Since I use two computers and one server, and have another machine to administer, guess what happens when I download a movie from the server to both of my machines and then make a backup of ALL devices - right, I have tripled my disk usage (and, by the way, totally deprived myself of any way to selectively delete this one file from any of the opaque, incremental backups). So indeed, every year I end up reading most of storeBackup's excellent documentation, muttering "exactly!" to myself over and over. Then I try to fit in my use case - and regularly give up. But not this year. :-)

The advantage of duplicity's local encryption is that I can store my backups on a server that - even if only through virtual machines - offers services on the internet. I would not feel comfortable doing a full root backup to a machine that is potentially subject to compromise.

Therefore, I decided to use an old computer with a LUKS-encrypted hard drive as the storeBackup server. All of the devices (my two computers, one server and the other machine) should connect to this server and dump their backups, leaving the linking and compressing to the remote machine.

I have three essential constraints:

  • I wish not to use any unencrypted file transfer protocol
  • I wish to separate users from one another, that is: give each user read and write access to their own backup, but do not allow peeks into other users' files.
  • recovery must be simple


The use case illustration found here seems to come close: http://www.nongnu.org/storebackup/en/node82.html

First of all, it uses NFS. I therefore replaced it with SSHFS, which, to my surprise, yielded transfer rates of up to 70 MB/s (FTP should peak at around 110 MB/s). I then created new users on the server, named $MACHINE_storebackup respectively, added them to the group 'backupusers' and chrooted them to their root-owned home directories using the OpenSSH Match Group directive. The local storeBackup config takes THIS root-owned home directory as the backup root and creates a series underneath it (resulting in $MACHINE_storebackup owned root:root -> $MACHINE_storebackup/$MACHINE_storebackup owned $MACHINE_storebackup:$MACHINE_storebackup).
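
For readers who want to reproduce this, a rough sketch of the server-side commands implied by the description above (user and group names are the ones used in this thread; the exact flags are my own choice, not taken from the thread):

groupadd backupusers
useradd -m -G backupusers -s /usr/sbin/nologin x200t_storebackup
chown root:root /home/x200t_storebackup              # the chroot target must be root-owned
mkdir /home/x200t_storebackup/x200t_storebackup      # writable series directory inside the jail
chown x200t_storebackup:x200t_storebackup /home/x200t_storebackup/x200t_storebackup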

In order to mount the filesystem on the client, I experimented with various fstab entries. See for example:

$MACHINE_storebackup@backupserver:/ /home/$LOCALUSER/.backup fuse.sshfs noauto,x-systemd.automount,idmap=file,allow_other,nomap=error,gidfile=/home/$LOCALUSER/.ssh/gidfile,uidfile=/home/$LOCALUSER/.ssh/uidfile,_netdev,users,IdentityFile=/home/$LOCALUSER/.ssh/$LOCALUSER-backup_rsa,reconnect 0 0

On the server side, I bind-mounted the respective user-owned backup directories to /var/backup/$MACHINE_storebackup.

With this setup, I can create backups on each machine without granting any of the machines access to another machine's backup, and yet I can use storeBackupUpdateBackup -b /var/backups on the server side to create hard links and compress the files.

However, and this has been bugging me, recovery of the data is not that straightforward. You can see that I was tinkering with gid/uid-mappings of fuse in fstab. I have not yet found a consistent mapping (if it exists).

Consider this: I mount the backup as $LOCALUSER, yet allow_other users to use it. This is apparently a precondition; otherwise not even the local root user is able to view the access rights of a folder. I am able to push my files to the server, but once storeBackupUpdateBackup has done its job, the uids/gids have changed. This could be tolerated, though, if you configure the mapping to match the expected outcome AFTER storeBackupUpdateBackup has run.

Yet, and here's the catch: storeBackupUpdateBackup (properly) resets the ownership of any previously root-owned file back to root. So now, when accessing the sshfs mount with my unprivileged $MACHINE_storebackup user, it will fail to read a file whenever that file is not world-readable. I haven't tested this yet, but the same should apply to two local uid-1000 primary users: when the server does the linking, I would consider it desired behavior to link the uid-1000 files of user one against the uid-1000 files of user two. Yet one of those two users must have a server-side uid different from 1000. So after the linking, he cannot access his own files anymore unless they happen to be world-readable by coincidence.

So I've thought about how to deal with this.

First idea: grant the unprivileged user root inside its jail. Drawback: as far as I know, if there is a way to escape a chroot, it involves root privileges inside the chroot. So I probably do not want to hand out this privilege right away.

Second idea: set up virtual machines on the server using physical storage instead of virtual disk image files. That way, any user could have root in his VM and yet the host could do the linking. This seems like overkill, though, and is not very clean either, since I would invariably grant users the right to do whatever they see fit with their VM.

Do you have an idea of how to take advantage of storeBackup's features, yet offer a clean way to separate users from one another?

PS: I see both German and English posts in this section but I thought it might be most useful for others in English. If German is preferred, that is my mother tongue.

no realname <cryptofriend>

 

