Wed 01 Feb 2006 11:39:31 PM UTC, comment #11:
Final version of the patch: I successfully hashed a 13GB file filled with /dev/urandom content and compared the hashes with eMule 0.47a and Shareaza 2.2.1.0. Both the ED2K hashes (eMule and Shareaza) and the bitprint hashes matched.
|
Wed 01 Feb 2006 08:35:51 AM UTC, comment #10:
Updated patch; fixed the Stack_overflow in the TigerTree calculation (a sketch of a stack-safe combine follows at the end of this comment).
With this patch I could calculate all hashes for a 4.2GB file:
urn:bitprint:WUOZEZC5FDS2VKTQYHGIJJHTLDU2UC65.KLQIJ2BO4LQIBJWGQ4XHEIG7PCUHE2RQOAXDFVA
ed2k://|file|mlnet4|4404019200|BACCD5062FA8F2E391E7DBE41D6A6047|/
sig2dat://|File: mlnet4|Length: 4404019200 Bytes|UUHash: =Z7AwGCjoVKBJqA5VMY9kTf9P+/8=|/
Hash: 67b0301828e854a049a80e55318f644dff4ffbff
Now I have to test if those hashes are correct...
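For reference, a hypothetical sketch (not the actual patch) of a stack-safe TigerTree combine: with 1024-byte leaves, a 4.2GB file has roughly four million of them, so any non-tail-recursive traversal of the leaf list can raise Stack_overflow, while combining leaves level by level over arrays keeps the recursion depth logarithmic. The name tiger_pair is made up here and stands for whatever function hashes two child digests into their parent; the sketch assumes at least one leaf.
let tiger_root tiger_pair leaves =
  (* combine one level at a time: pair adjacent nodes, promote an unpaired last node *)
  let rec level nodes =
    let n = Array.length nodes in
    if n <= 1 then nodes.(0)
    else
      level (Array.init ((n + 1) / 2) (fun i ->
        if 2 * i + 1 < n then tiger_pair nodes.(2 * i) nodes.(2 * i + 1)
        else nodes.(2 * i)))
  in
  level leaves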
|
Tue 31 Jan 2006 10:18:50 AM UTC, comment #9:
Attached you will find a patch; it's not complete though.
- implemented SIGINT and SIGTERM handlers; mld_hash can now be killed with CTRL+C (a minimal sketch of the handler setup follows at the end of this comment)
- producing an ed2k hash for a 4.2GB file succeeded (to be checked whether that hash matches eMule 0.47a):
dd if=/dev/urandom of=mlnet4 bs=$(expr 1024 \* 1024) count=$(expr 4196 + 4)
-rw-r--r-- 1 root root 4403494384 31. Jan 11:01 mlnet4
Partial 452/ 453 : 314D858AE85384F2A08B362DDFCF67E6
ed2k://|file|mlnet4|4403494384|162F88B99247E9DCF0CC1736F5E91298|/
- calculating a TigerTree fails on the 4.2GB file but works on a 1GB file:
Fatal error: exception Stack_overflow
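For the record, the signal handling could look roughly like this in OCaml (a minimal sketch, not the actual patch; exit_nicely is a hypothetical cleanup function):
let install_signal_handlers exit_nicely =
  (* make CTRL+C (SIGINT) and SIGTERM terminate cleanly instead of being ignored *)
  let handler _signum = exit_nicely () in
  Sys.set_signal Sys.sigint (Sys.Signal_handle handler);
  Sys.set_signal Sys.sigterm (Sys.Signal_handle handler)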
|
Tue 31 Jan 2006 10:13:09 AM UTC, comment #8:
Why does mld_hash.bitprint_file calculate TigerTree hashes twice?
First in:
let tiger = TigerTree.digest_subfile fd zero file_size in
and a second time via:
let tiger2 = tiger_of_array chunks in
which produces the same values for a 1GB file:
Calculating TigerTree
Calculating SHA1
urn:bitprint:FITASGTBCKHEPZAAFTISEDE767SEJUCA.AYPHKYBQUFCGJG3GG6GCBTAKRIWBOPSRX5D2W7Y
urn:bitprint:FITASGTBCKHEPZAAFTISEDE767SEJUCA.AYPHKYBQUFCGJG3GG6GCBTAKRIWBOPSRX5D2W7Y
|
Sat 24 Dec 2005 03:10:54 PM UTC, comment #7:
Hello,
I just ran into this issue again. It has changed a bit (the files may be larger now), but:
$ ./mld_hash -hash bp -check 2500000
[SNIP]
Partial 261 : D7DEF262A127CD79096A108E7A9FC138
Partial 262 : D7DEF262A127CD79096A108E7A9FC138
Partial 263 : 716F42D4E77FC73DBDF28094F76FBD1C
ed2k://|file|test.diskfile.2500000|2560000000|8B14ABADF1DA224AF04D1FFCB75E63CF|
Computing bitprint hash
Killed
I let the command run for about 24 hours.
Also note: this does not only happen when running the -check function, but also when hashing "real" files; their sizes were 4076548252 and 3924777725 bytes.
Note #2: when running the -check function, the "test.diskfile.*" is always twice as large as the specified size, e.g. in the case of the above command:
$ ls -al test.*
-rw-r--r-- 1 mldonkey mldonkey 5120000000 2005-12-21 20:25 /tmp/test.diskfile.2500000
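(For what it's worth, my own arithmetic, assuming the -check argument is a size in KiB: 2500000 * 1024 = 2560000000 bytes, which is exactly the size in the ed2k link above, while the file on disk is exactly twice that, 2 * 2560000000 = 5120000000 bytes, matching the ls output.)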
Maybe there is a problem with my compilation environment? I haven't found any trace of "long long" being used in the ./configure log. I'm using Debian Sarge now:
$ gcc --version
gcc (GCC) 3.3.5 (Debian 1:3.3.5-13)
$ /usr/bin/ocamlc.opt -v
The Objective Caml compiler, version 3.08.3
File: mld_hash.ml Status: Up-to-date
Working revision: 1.5
Repository revision: 1.5 /cvsroot/mldonkey/mldonkey/tools/mld_hash.ml,v
$ mlnet -version
MLNet 2.7.1.CVS: Multi-Network p2p client (Global Shares Gnutella G2 Fasttrack FileTP BitTorrent Donkey)
|
Mon 24 Oct 2005 09:39:45 PM UTC, comment #6:
A quick solution would be to use unsigned long (see the note after the output below)...
...
Partial 109 : D7DEF262A127CD79096A108E7A9FC138
Partial 110 : 8E4AF1776F86272799C3324F9AF93A6E
ed2k://|file|mlnet|1077936128|8EF4E4C0E746720ADA08A0DC0D0E4658|/
sig2dat://|File: /tmp/mlnet|Length: 1077936128 Bytes|UUHash: =kK7e2ZIs+JRup4WGNUk3JP9P+/8=|/
Hash: 90aeded9922cf8946ea7858635493724ff4ffbff
urn:bitprint:ALKVTLHZB4RJZXTZ2SX4HBJKOCKRWMO3.HS6IGQE5E23KZQT6Q3QDX7O6L4FDMLQMJQO3PUI
Partial 0 : MUACEID6UTVUKTRE2MTZKOPTZTMS6A2OF6B4ZNY
...
Partial 1027 : MUACEID6UTVUKTRE2MTZKOPTZTMS6A2OF6B4ZNY
urn:bitprint:ALKVTLHZB4RJZXTZ2SX4HBJKOCKRWMO3.HS6IGQE5E23KZQT6Q3QDX7O6L4FDMLQMJQO3PUI
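Note on the limits (my arithmetic, not part of the output above): a signed 32-bit offset tops out at 2^31 - 1 = 2147483647 bytes, which is why 2+ gig files break, and an unsigned 32-bit type only raises the limit to 2^32 - 1 = 4294967295 bytes, so the 4.5 gig test files mentioned elsewhere in this thread would still overflow it. A genuine 64-bit type (off64_t on the C side, int64 on the OCaml side) avoids the limit entirely.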
|
Mon 24 Oct 2005 07:56:52 PM UTC, comment #5:
Quoted from bug #14851:
For testing purpose, I ran these commands:
$ cd /tmp
$ dd if=/dev/zero of=mlnet bs=$(expr 1024 \* 1024) count=$(expr 1024 + 4)
$ /.../mld_hash -partial ./mlnet
[...]
Partial 109 : D7DEF262A127CD79096A108E7A9FC138
Partial 110 : 8E4AF1776F86272799C3324F9AF93A6E
ed2k://|file|mlnet|1077936128|8EF4E4C0E746720ADA08A0DC0D0E4658|/
sig2dat://|File: ./mlnet|Length: 1077936128 Bytes|UUHash: =kK7e2ZIs+JRup4WGNUk3JP9P+/8=|/
Hash: 90aeded9922cf8946ea7858635493724ff4ffbff
Then nothing more is printed.
|
Mon 24 Oct 2005 07:19:58 PM UTC, comment #4:
I checked my problem again with a file of size 1992265570 bytes and with:
File: mld_hash.ml Status: Up-to-date
Working revision: 1.5
Repository revision: 1.5 /cvsroot/mldonkey/mldonkey/tools/mld_hash.ml,v
After some time mld_hash stalls (I let it run for more than one hour) and consumes 100% of the available CPU; strace does not print anything (I let it run for more than 30 minutes), and the process has to be killed with SIGKILL.
The missing line(s) are the urn:bitprint ones.
|
Wed 24 Aug 2005 07:13:09 PM UTC, comment #3:
On mingw, off_t is long. This causes files > 2 gig to fail in os_lseek.
This updated patch uses off64_t for mingw.
Tested mld 2.6.3 on linux/cygwin/mingw, with 2 and 4.5 gig files.
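On the OCaml side, the 64-bit-safe equivalent is to carry offsets as int64, e.g. through the LargeFile variants of the Unix calls (a minimal sketch only, not part of the patch, which changes the C stubs):
(* Seeking past 2 GiB with a 64-bit offset; Unix.LargeFile.lseek takes an int64,
   so the offset is not squeezed through a 32-bit off_t/long. *)
let seek_to_3gib fd =
  let three_gib = Int64.mul 3L (Int64.shift_left 1L 30) in
  Unix.LargeFile.lseek fd three_gib Unix.SEEK_SET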
|
Wed 24 Aug 2005 05:16:49 AM UTC, comment #2:
I receive:
Fatal error: exception Unix.Unix_error(12, "os_read", "")
when trying to ed2k_hash 2+ gig files, at the bitprint part (SHA1 and TigerTree). I also think I noticed an infinite loop in tiger_block_size when the int overflows (see the sketch at the end of this comment).
This is obviously all due to the use of ints.
Attached is a patch I used to switch to size_t and tested on small files, 2 gig files, and 4.5 gig files. All seem to work for me now. Tested on linux and cygwin (haven't tried mingw) with mld 2.6.3.
The hash results still match bitzi bitcollider and ed2k-tools.sf.net's ed2khash app, but both of those apps fail on files > 2 gigs. I don't know of another app with which to compare.
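A side note on the error: if I read the exception printout right, the 12 is the runtime index of the Unix.error constructor, i.e. EINVAL, which would fit a read/seek being handed an overflowed, negative offset. To illustrate the tiger_block_size hang, here is a hypothetical reconstruction (not the actual source) of how an int-based doubling loop can spin forever: on a 32-bit OCaml build max_int is 2^30 - 1, roughly 1 GiB, so once the file size exceeds 2^30 bytes the doubling counter wraps negative before it can ever reach the file size, which also matches the ~1.1GB files reported as hanging.
let block_size_for file_size =
  let block_size = ref 1024 in
  while !block_size < file_size do
    (* on a 32-bit build this wraps past max_int to a negative value,
       so the condition above stays true forever for file_size > 2^30 *)
    block_size := 2 * !block_size
  done;
  !block_size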
|
Sun 10 Jul 2005 01:19:15 PM UTC, comment #1:
I forgot to mention that ed2k_hash has to be terminated by SIGKILL; neither 15 (SIGTERM) nor 2 (SIGINT) nor 1 (SIGHUP) is acted on.
|
Sun 10 Jul 2005 01:15:21 PM UTC, original submission:
Hello,
I'm running
ed2k_hash "file" >file.hash 2>&1
for all my files. For several files this command never completes; these files are, most of the time, large (1GB or more). I noticed this behaviour before with older releases, but only now had time to investigate it further.
ed2k_hash is running and outputs:
Current locale of the target machine is ANSI_X3.4-1968
Current language of the target machine is EN
ed2k://|<<snip>>|/
sig2dat://|<<snip>>|/
Hash: <<snip>>
Then the program stalls; the urn:bitprint line is missing. It has now been running for 2h30m real time and 79m CPU time and takes all remaining CPU (idle percentage is approx. 0). I have been running "strace -olog -p<<pid>>" for 20m now; the "log" file is empty. The file in question is ~1.1GB.
BTW: Why is "Current ..." displayed on stdout, but the hashes themselves on stderr?
|