Say I've downloaded a big file via torrent. Then I add a very small file, like a subtitle, and create a new torrent file.
To the machine, the two torrent files are now completely different: the tracker and the torrent clients treat them as separate torrents. Of course we don't need to duplicate the original data on disk to seed both. But seeders and leechers get split across the two torrents. They have no way of knowing they hold exactly the same file, so the client and tracker cannot connect people sharing identical data. We end up with a split share pool for the exact same file. That's inefficient: more seeders, more speed.
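Why the machine sees them as unrelated: a torrent's identity is the SHA-1 infohash of the bencoded `info` dictionary, so adding even a tiny file produces a completely different infohash. A minimal sketch of that effect; the `bencode` helper is a simplified encoder and the example dictionaries are stand-ins, with `pieces` left as a placeholder:

```python
import hashlib

def bencode(obj):
    # Minimal bencode encoder (ints, bytes, lists, dicts with bytes keys).
    if isinstance(obj, int):
        return b"i%de" % obj
    if isinstance(obj, bytes):
        return b"%d:%s" % (len(obj), obj)
    if isinstance(obj, list):
        return b"l" + b"".join(bencode(x) for x in obj) + b"e"
    if isinstance(obj, dict):
        return b"d" + b"".join(bencode(k) + bencode(v)
                               for k, v in sorted(obj.items())) + b"e"
    raise TypeError(obj)

def infohash(info):
    # A torrent's identity: SHA-1 over the bencoded info dict.
    return hashlib.sha1(bencode(info)).hexdigest()

# 1.torrent: just file1
info1 = {b"name": b"release", b"piece length": 262144, b"pieces": b"...",
         b"files": [{b"length": 700_000_000, b"path": [b"file1"]}]}
# 2.torrent: the same file1 plus one tiny subtitle file
info2 = {**info1,
         b"files": info1[b"files"] + [{b"length": 50_000,
                                       b"path": [b"file2.srt"]}]}

print(infohash(info1))
print(infohash(info2))  # completely different: trackers/DHT see two unrelated swarms
```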
Say the original torrent file is 1.torrent:

[ file1 ]
Now I add a file and make a new torrent file, 2.torrent, that looks like:

[
  [ file1 ]  => This is 1.torrent
  + file2
]            => This is 2.torrent
Another person gets hold of 2.torrent and thinks: hey, maybe I'll create a new torrent based on 2.torrent. So we get 3.torrent:
[
  [
    [ file1 ]  => This is 1.torrent
    + file2
  ]            => This is 2.torrent
  + file3
]              => This is 3.torrent
So if you have 3.torrent, you are in the same share pool as the 1.torrent and 2.torrent people.
What if 4.torrent, 5.torrent, and so on appear in the near future?
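One way to make the nesting concrete is a parent link: each child torrent records the infohash of the torrent it extends, exactly like a git commit records its parent. A sketch, where the `based on` key and the `make_torrent` helper are hypothetical (not part of any existing BEP) and `repr()` stands in for real bencoding:

```python
import hashlib

def make_torrent(name, files, parent=None):
    # Hypothetical chained metainfo: 'based on' holds the parent's infohash.
    info = {"name": name, "files": list(files)}
    if parent is not None:
        info["based on"] = parent["infohash"]
    blob = repr(sorted(info.items())).encode()  # stand-in for bencoding
    info["infohash"] = hashlib.sha1(blob).hexdigest()
    return info

t1 = make_torrent("1.torrent", ["file1"])
t2 = make_torrent("2.torrent", ["file1", "file2"], parent=t1)
t3 = make_torrent("3.torrent", ["file1", "file2", "file3"], parent=t2)

# Walking the chain tells a 3.torrent client it shares data with 2 and 1:
index = {t["infohash"]: t for t in (t1, t2, t3)}
chain, cur = [], t3
while cur:
    chain.append(cur["name"])
    cur = index.get(cur.get("based on"))
print(chain)  # ['3.torrent', '2.torrent', '1.torrent']
```

Following parent links upward is cheap; discovering children (4.torrent, 5.torrent, ...) is the part that needs an index, a search engine, or a DHT lookup.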
We could query a torrent search engine, or the DHT/PEX: "please give me the list of torrents based on 3.torrent."
If there is an interesting new torrent, we can upgrade 3.torrent -> X.torrent. No interaction with the local files is needed; only the added files get downloaded.
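The "upgrade" step can be sketched as a diff over the metainfo file lists. The `files_to_fetch` helper and the example dictionaries here are hypothetical; this also assumes the old data stays piece-valid in the new torrent (e.g. each file starting on a piece boundary via padding), which is exactly one of the edge cases a real design would have to handle:

```python
def files_to_fetch(old_info, new_info):
    # Upgrade = download only the paths the old torrent doesn't cover.
    have = {tuple(f["path"]) for f in old_info["files"]}
    return [f for f in new_info["files"] if tuple(f["path"]) not in have]

old = {"files": [{"path": ["file1"], "length": 700_000_000}]}
new = {"files": [{"path": ["file1"], "length": 700_000_000},
                 {"path": ["file2.srt"], "length": 50_000}]}

print(files_to_fetch(old, new))  # only file2.srt needs downloading
```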
If you know source code management tools like git, this idea is basically 'a git repository in one torrent file':

git init    -> make 1.torrent
git commit  -> make 2.torrent
git commit  -> make 3.torrent
...
- A torrent file can contain another torrent file.
- We can keep the seeder/leecher pool as big as possible. Don't split us when we have exactly the same content.
- If there are other torrents based on a particular torrent, we can discover them.

Those are the key points.
How could this idea become real? Is it possible?
Possible? Yes. But it has been proposed before, and there are several (relevant) edge cases that complicate the design.
Additionally, real-world requirements don't really consist of incrementally adding files, but of creating individual torrents and batching them together at a later point, which would require a multi-way merge.
Instead of twisting the torrent format itself to support this, an external facility for covered-swarm discovery, orthogonal to the torrent itself, might be simpler; but the question would still be whether the gains are worth it. Torrents have a natural lifecycle anyway: the bigger ones live longer, the smaller ones die off, so users already tend to migrate in that direction.