Checkem
=======

Find duplicate files efficiently using Perl, on Unix-like operating systems
and possibly others (untested). Requires only modules that have been in the
Perl core since 5.6.0 at the latest.

Requires at least one directory argument:

    $ checkem .
    $ checkem ~tom ~chantelle
    $ checkem /usr /usr/local

You can install it in `/usr/local/bin` with:

    # make install

You can define a `PREFIX` to install it elsewhere:

    $ make install PREFIX="$HOME"/.local

There's a (presently) very basic test suite:

    $ make test

Q&A
---

### Can I compare sets of files rather than sets of directories?

Sure. This uses `File::Find` under the hood, which, like classic UNIX `find(1)`,
will still apply its tests and actions to its starting arguments even if they're
not directories. This means you could do something like this to look only for
duplicate `.iso` files, provided the expanded argument list doesn't exceed
`ARG_MAX`:

    $ checkem ~/media/*.iso

Or even this, for a `find(1)` that supports the `+` terminator (POSIX):

    $ find ~/media -type f -name \*.iso -exec checkem {} +

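The same behaviour is easy to see from Perl itself. This is a minimal sketch,
not part of checkem, that passes whatever arguments it's given straight to
`File::Find`, which runs its callback on plain-file starting points just as
`find(1)` would:

    #!/usr/bin/env perl
    # Minimal demonstration, not checkem itself: File::Find accepts plain
    # files as starting points, just like find(1), and calls the wanted
    # sub on each of them.
    use strict;
    use warnings;
    use File::Find;

    find(
        sub {
            return unless -f;    # plain files only
            printf "%s\t%d\n", $File::Find::name, -s _;
        },
        @ARGV,                   # may be files, directories, or a mix
    );
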
### Why is this faster than just hashing every file?

It checks the size of each file first, and only resorts to hashing files that
are the same size but have different device and/or inode numbers (i.e. they're
not hard links to the same file). Hashing is an expensive last resort, and in
many situations it won't need to run a single hash comparison.

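As a rough illustration of that order of operations, here's a minimal sketch.
It is not checkem's actual code; the SHA-256 digest and the flat list of file
arguments are just for the example. It groups by size first, collapses hard
links via device and inode, and hashes only groups with more than one distinct
file:

    #!/usr/bin/env perl
    # Sketch of the size-first strategy: stat is cheap, hashing is the
    # last resort. Not the real checkem implementation.
    use strict;
    use warnings;
    use Digest::SHA;

    my %by_size;
    for my $path (@ARGV) {
        my ($dev, $ino, $size) = (stat $path)[ 0, 1, 7 ]
            or next;
        # Key on device and inode so hard links count only once
        $by_size{$size}{"$dev:$ino"} = $path;
    }

    for my $files (values %by_size) {
        my @paths = values %$files;
        next if @paths < 2;    # unique size: no hashing needed
        my %by_digest;
        for my $path (@paths) {
            my $sum = Digest::SHA->new(256)->addfile($path)->hexdigest;
            push @{ $by_digest{$sum} }, $path;
        }
        for my $group (grep { @$_ > 1 } values %by_digest) {
            print join("\n", @$group), "\n\n";
        }
    }
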
### I keep getting `.git` metadata files listed as duplicates.

They're accurate, but you probably don't care about them. Filter them out by
paragraph block. If you have a POSIX-fearing `awk`, you could do something like
this:

    $ checkem /dir | awk 'BEGIN{RS="";ORS="\n\n"} !/\/\.git/'

### How could I make it even quicker?

Run it on a fast disk, mostly. For large directories or large files, it will
usually be I/O bound.

If you end up hashing a lot of files because their sizes are the same, and
you're not worried about [SHA-1 technically being broken in practice][1],
switching the digest algorithm to SHA-1 is a tiny bit faster:

    $ CHECKEM_ALG=sha1 checkem /dir

Theoretically, you could read only the first *n* bytes of each file that needs
hashing, hash those prefixes with some suitably inexpensive function *f*, and
compare those digests before resorting to checking the entire file with a safe
hash function *g*.

You'd need to decide on suitable values for *n*, *f*, and *g* in such a case;
it might be useful for very large sets of files that will almost certainly
differ in the first *n* bytes. If there's interest in this at all, I'll write
it in as optional behaviour.

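If it helps to picture it, the sketch below is one hypothetical shape for that
behaviour. None of it is in checkem; *n* = 4096 bytes, MD5 for *f*, and SHA-256
for *g* are arbitrary stand-ins, and it assumes the files passed to it are
already known to share a size:

    #!/usr/bin/env perl
    # Hypothetical two-stage hashing: a cheap digest of the first $n bytes
    # weeds out files that differ early, and only prefix collisions pay
    # for a full-file hash. None of this is in checkem.
    use strict;
    use warnings;
    use Digest::MD5 qw(md5_hex);
    use Digest::SHA;

    my $n = 4096;    # arbitrary prefix length for illustration

    sub prefix_digest {
        my ($path) = @_;
        open my $fh, '<:raw', $path or return;
        read $fh, my $prefix, $n;
        close $fh;
        return md5_hex($prefix // q{});
    }

    # Group same-sized candidates by the cheap prefix digest first
    my %by_prefix;
    for my $path (@ARGV) {
        my $digest = prefix_digest($path);
        next unless defined $digest;
        push @{ $by_prefix{$digest} }, $path;
    }

    # Only groups that still collide get the expensive full hash
    for my $group (grep { @$_ > 1 } values %by_prefix) {
        my %by_full;
        for my $path (@$group) {
            my $sum = Digest::SHA->new(256)->addfile($path)->hexdigest;
            push @{ $by_full{$sum} }, $path;
        }
        print join("\n", @$_), "\n\n" for grep { @$_ > 1 } values %by_full;
    }
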
Contributors
------------

* Timothy Goddard (pruby) fixed two bugs.

License
-------

Copyright (c) [Tom Ryder][2]. Distributed under an [MIT License][3].

[1]: https://shattered.io/
[2]: https://sanctum.geek.nz/
[3]: https://www.opensource.org/licenses/MIT