Wed 22 Feb 2006 07:57:27 AM UTC, original submission:
Currently, if you look in enumerator.cc:revision_enumerator::step(), you will find that when we process a merge revision, we go through both csets and, for every add_file or patch entry, queue a full-file or file-delta transmission.
netsync.cc:queue_this_file filters this slightly, in that we will not actually send two different ways to obtain a single file: if the above algorithm queues two items that would reconstruct the same file, the second one processed is thrown away.
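For illustration, here is a rough, self-contained model of that queue-then-filter behaviour. Everything in it (file_item, queue_this_target, the particular cset shapes) is a hypothetical stand-in for sketching purposes, not the real enumerator/netsync interfaces:

    // Rough model of the current behaviour: every add_file queues a full-file
    // send and every patch queues a delta send, once per cset; the only
    // filtering is "skip a target hash we have already queued", which keeps
    // whichever item happened to be processed first.
    #include <iostream>
    #include <set>
    #include <string>
    #include <vector>

    struct file_item              // hypothetical: one queued transmission
    {
      bool is_delta;              // false => full file data
      std::string old_id;         // delta base (empty for full data)
      std::string new_id;         // target hash the peer needs
    };

    // Hypothetical stand-in for the filtering that queue_this_file amounts to.
    void queue_this_target(file_item const & item,
                           std::set<std::string> & queued_targets,
                           std::vector<file_item> & outbound)
    {
      if (queued_targets.count(item.new_id))
        return;                              // second route to the same file: dropped
      queued_targets.insert(item.new_id);
      outbound.push_back(item);              // first route kept, even if it is a full file
    }

    int main()
    {
      // One merge revision, two csets, shaped like the real case described
      // further down: many adds/patches against mainline, a handful of
      // patches against the on-branch parent.
      std::vector<file_item> cset_vs_mainline, cset_vs_branch_parent;
      for (int i = 0; i < 10; ++i)
        cset_vs_mainline.push_back({false, "", "added" + std::to_string(i)});
      for (int i = 0; i < 139; ++i)
        cset_vs_mainline.push_back({true, "mainline" + std::to_string(i),
                                    "merged" + std::to_string(i)});
      for (int i = 0; i < 5; ++i)
        cset_vs_branch_parent.push_back({true, "branch" + std::to_string(i),
                                         "merged" + std::to_string(i)});

      std::set<std::string> queued_targets;
      std::vector<file_item> outbound;
      for (file_item const & i : cset_vs_mainline)
        queue_this_target(i, queued_targets, outbound);
      for (file_item const & i : cset_vs_branch_parent)
        queue_this_target(i, queued_targets, outbound);

      // Prints 149: everything touched on the mainline side is sent, even
      // though only the 5 branch-side deltas are actually needed.
      std::cout << outbound.size() << " items queued\n";
      return 0;
    }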
However, this still results in massive sending of redundant information. The only time we need to send anything at all is when both csets in a merge mention a given target hash: if only one cset introduces that hash, the other parent already contained that file version, so the peer has it already. And when both csets do mention it, we should prefer to send a delta over a full file, if possible.
The case where this behavior was discovered was when I merged from mainline to a branch that had not been propagated to recently, but had only very small changes on it. The cset against mainline had 10 files added, and 139 modified. The cset against the on-branch parent had 5 files modified. Obviously we could send only the 5 file deltas to make the new revision reconstructable... but in fact we sent almost 500 kilobytes (!), almost all of it useless.
Merge cases like this are a pretty huge waste of bandwidth.
Proposed solution: when sending the files for a revision, look at both csets. If a given hash is not listed as new in both csets, throw it out. Now, for everything left over: if one side says "add" and the other says "delta", send a delta. If both sides say "add", send the full data. If both sides say "delta", we could be clever and look at both deltas and send the smaller one... or we could just pick one arbitrarily; either way.
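A compilable sketch of that selection rule follows. It assumes a simplified "target hash -> how this cset produces it" view of a cset, and every name in it (new_file, cset_targets, send_decision, plan_merge_sends) is made up for the sketch rather than taken from the tree:

    // Sketch of the proposed rule for one merge revision: keep only hashes
    // that are new in both csets, prefer a delta over full data, and fall
    // back to full data only when both sides added the file.
    #include <iostream>
    #include <map>
    #include <string>

    struct new_file
    {
      bool is_delta;              // false => add_file, true => patch
      std::string base_id;        // delta base; empty for an add
    };

    using cset_targets = std::map<std::string, new_file>;   // target hash -> how it appears

    struct send_decision
    {
      bool send_delta;
      std::string base_id;        // meaningful only if send_delta
    };

    // Decide what to transmit for a merge with csets 'left' and 'right'.
    std::map<std::string, send_decision>
    plan_merge_sends(cset_targets const & left, cset_targets const & right)
    {
      std::map<std::string, send_decision> plan;
      for (auto const & l : left)
        {
          auto r = right.find(l.first);
          if (r == right.end())
            continue;                          // not new in both csets: peer already has it
          new_file const & a = l.second;
          new_file const & b = r->second;
          if (a.is_delta)
            plan[l.first] = {true, a.base_id}; // at least one side offers a delta: use it
          else if (b.is_delta)
            plan[l.first] = {true, b.base_id};
          else
            plan[l.first] = {false, ""};       // added on both sides: send full data
        }
      return plan;
    }

    int main()
    {
      cset_targets left  = {{"h1", {false, ""}},      // add on the left
                            {"h2", {true, "p2"}},     // delta on the left
                            {"h3", {true, "p3"}}};    // left-only: dropped
      cset_targets right = {{"h1", {true, "q1"}},     // delta on the right
                            {"h2", {false, ""}}};     // add on the right
      for (auto const & p : plan_merge_sends(left, right))
        std::cout << p.first << ": "
                  << (p.second.send_delta ? "delta from " + p.second.base_id
                                          : std::string("full data"))
                  << "\n";
      return 0;
    }

The "pick the smaller delta" refinement would slot into the both-sides-delta branch; the sketch simply takes the left side's delta.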