mgdigital

joined 1 year ago
[–] mgdigital@lemmy.world 3 points 8 months ago* (last edited 8 months ago)

Can you find something you wouldn’t find otherwise?

Yes, quite a lot of content that's otherwise difficult to find on the public trackers. Also public trackers can be shut down.

[–] mgdigital@lemmy.world 9 points 8 months ago (2 children)

4 months in, my database takes up around 50GB; for the size of a few hi-res movies, it's worth it for me...

[–] mgdigital@lemmy.world 26 points 8 months ago* (last edited 8 months ago)

The DHT is basically the wild west - EVERYTHING is on there (but that is also the power of it). Bitmagnet is attempting to overlay some order on it, make it more easily usable, and automatically filter the truly harmful content. Once the core features are more fleshed out, chapter 2 will hopefully look more like a fediverse with curation and moderation. There's still lots to be done but it's getting there!

[–] mgdigital@lemmy.world 2 points 1 year ago

There's a PR currently open for multi-platform builds, so this should be sorted soon.

[–] mgdigital@lemmy.world 2 points 1 year ago

Scraping torrent sites will be avoided, as it'd be prohibitively slow and would break the self-sufficiency concept - we'll infer as much as possible from the torrent meta info alone. You could have a guess at the bitrate from the file sizes; I think Sonarr/Radarr will already do this for you with quality profiles.
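
For illustration, here's a minimal sketch of that back-of-envelope bitrate guess (the function and numbers are hypothetical, not anything Bitmagnet or the *arrs actually run):

```go
package main

import "fmt"

// Rough bitrate guess from file size alone - a hypothetical sketch of
// the estimate described above, not code from Bitmagnet or Sonarr/Radarr.
// bitrate (kbps) ≈ (size in bits) / (runtime in seconds) / 1000
func estimateBitrateKbps(sizeBytes int64, runtimeMinutes float64) float64 {
	return float64(sizeBytes) * 8 / (runtimeMinutes * 60) / 1000
}

func main() {
	size := int64(4.5 * 1024 * 1024 * 1024)                     // a 4.5 GiB release
	fmt.Printf("~%.0f kbps\n", estimateBitrateKbps(size, 100)) // ≈6442 kbps for a 100-minute film
}
```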

[–] mgdigital@lemmy.world 5 points 1 year ago (1 children)

Hi, the default port is 3333, which should be exposed if you're using the example configuration here: https://bitmagnet.io/setup/installation.html - I'm not sure what the app is in your screenshot but the provided config definitely exposes that port and is tested on Docker for Mac.
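
If the UI still isn't reachable, a quick sanity check is to confirm something is actually listening on that port - a minimal sketch, assuming a local install on the default port:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// Quick check that Bitmagnet's default web UI/API port is reachable.
// Assumes a local installation on the default port 3333.
func main() {
	conn, err := net.DialTimeout("tcp", "localhost:3333", 2*time.Second)
	if err != nil {
		fmt.Println("port 3333 not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("port 3333 is open")
}
```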

[–] mgdigital@lemmy.world 8 points 1 year ago* (last edited 1 year ago)

Hi, yep that's expected. Torrents will only move out of "Unknown" once the classifier is able to categorise them. The classifier currently only supports movie and TV show content, and can recognise these with quite high accuracy assuming a well-named torrent (and a badly named torrent is unlikely to be a high quality release). The other content types (music, games etc) can currently only be populated via an import (see the tutorial on the website). A priority feature is classifiers for other content types - however we will likely always have a lot of torrents ending up in "Unknown" given the poor naming of many crawled items. Another roadmap feature, smart deletion, could help in future with getting rid of all the rubbish whose contents cannot be inferred from the torrent name.
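
To give a feel for why well-named torrents are easy to classify, here's a toy sketch of name-based matching (illustrative only - Bitmagnet's actual classifier is more sophisticated):

```go
package main

import (
	"fmt"
	"regexp"
)

// Toy name-based classification: well-named releases encode attributes
// that simple patterns can recover; badly named ones stay "Unknown".
// Illustrative only - not Bitmagnet's actual classifier.
var (
	episodeRe    = regexp.MustCompile(`(?i)\bS(\d{1,2})E(\d{1,2})\b`)
	resolutionRe = regexp.MustCompile(`(?i)\b(2160p|1080p|720p|480p)\b`)
)

func classify(name string) string {
	if episodeRe.MatchString(name) {
		return "tv_show " + resolutionRe.FindString(name)
	}
	return "Unknown"
}

func main() {
	fmt.Println(classify("Some.Show.S02E05.1080p.WEB-DL.x264")) // tv_show 1080p
	fmt.Println(classify("backup-final(1)"))                    // Unknown
}
```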

[–] mgdigital@lemmy.world 2 points 1 year ago

I've never used I2P but I don't see why not!

[–] mgdigital@lemmy.world 9 points 1 year ago

Hi, and thanks!

As a priority I'd like to gather some more rigorous performance benchmarks, but I can give you some hand-wavey stats now: Bitmagnet is currently fluctuating between 2% and 10% CPU usage on my M2 Mac Mini, and is using ~120MB of memory, having been running for around 48 hours. Overall, the Go implementation seems pretty efficient to me considering how much I know is going on in the background.

Disk space usage of the database: this will be highly dependent on two configuration options, the first of which I've only just added in the newly-released version. Copied from the configuration page of the website:

  • dht_crawler.save_files (default: true): If true, file metadata from the DHT crawler will be saved to the database. This provides richer information about a torrent, but will use a lot more disk space. If disk space is at a premium, you may want to consider disabling this.
  • dht_crawler.save_pieces (default: false): If true, the DHT crawler will save the pieces bytes from the torrent metadata. The pieces take up quite a lot of space, and aren’t currently very useful, but they may be used by future features.
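
In terms of what the two options above control, the gist is something like this (hypothetical names, not Bitmagnet's actual internals):

```go
package main

import "fmt"

// Hypothetical sketch of what the two flags above control when the
// crawler persists fetched metadata; all names here are illustrative,
// not Bitmagnet's actual internals.
type dhtCrawlerConfig struct {
	SaveFiles  bool // dht_crawler.save_files (default: true)
	SavePieces bool // dht_crawler.save_pieces (default: false)
}

type torrentMeta struct {
	InfoHash string
	Name     string
	Files    []string // per-file paths and sizes: the bulk of the disk usage
	Pieces   []byte   // raw pieces bytes: large and currently unused
}

func persist(t torrentMeta, cfg dhtCrawlerConfig) {
	fmt.Println("save torrent row:", t.InfoHash, t.Name) // always stored
	if cfg.SaveFiles {
		fmt.Println("save", len(t.Files), "file records")
	}
	if cfg.SavePieces {
		fmt.Println("save", len(t.Pieces), "bytes of pieces")
	}
}

func main() {
	// disk-saving configuration: keep only the torrent-level row
	persist(torrentMeta{InfoHash: "abc123...", Name: "example"},
		dhtCrawlerConfig{SaveFiles: false, SavePieces: false})
}
```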

For me, 24 hours of crawling uses ~2.5GB of database disk space for metadata on the ~120k torrents it has discovered - around 20KB per torrent. That sounds like a lot, but around 90% of it is taken up by the file metadata, and could have been saved by setting dht_crawler.save_files to false. In fact, I may set this to false by default and allow users to opt in to the full-fat torrent info.

I've also imported the entire RARBG backup (the SQLite one, see tutorial on the Bitmagnet website). This, along with all the associated metadata from TMDB, took around 4GB of database space, which seems quite acceptable considering it's basically every movie and TV show. Note that this does NOT include the metadata on individual files as I described above.

A priority feature for me (detailed on the website) is smart deletion - this would allow you to automatically discard a lot of data that can be determined to be of no interest, greatly reducing disk space demands.

[–] mgdigital@lemmy.world 4 points 1 year ago (3 children)

Hi, yes this is mentioned on the installation page of the website, below the Docker instructions. The app can be installed Dockerless using go install; if you choose this option you'll have to provide and configure Postgres and Redis instances for the app to connect to. That said, Docker is the recommended and easiest option.

 

I'm excited to announce the first alpha preview of this project that I've been working on for the past 4 months. I'm initially posting about this in a few small communities, and hoping to get some input from early adopters and beta testers.

What is a DHT crawler?

The DHT crawler is Bitmagnet’s killer feature that (currently) makes it unique. Well, almost unique, read on…

So what is it? You might be aware that you can enable DHT in your BitTorrent client, and that this allows you to find peers who are announcing a torrent’s hash to a Distributed Hash Table (DHT), rather than to a centralized tracker. A lesser-known feature of the DHT is that it allows you to crawl the info hashes it knows about. This is how Bitmagnet’s DHT crawler works - it crawls the DHT network, requesting metadata about each info hash it discovers. It then further enriches this metadata by attempting to classify it and associate it with known pieces of content, such as movies and TV shows. Finally, it allows you to search everything it has indexed.
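
Roughly, the crawl loop looks like this - the helpers stand in for the DHT's sample_infohashes query (BEP 51) and the metadata exchange with peers (BEP 9); this is a hypothetical sketch, not Bitmagnet's actual code:

```go
package main

import "fmt"

type node string
type infoHash string

// Hypothetical helpers standing in for the two protocol steps:
// sampleInfohashes asks a DHT node for a sample of the info hashes it
// stores (BEP 51); fetchMetadata retrieves the torrent's metadata
// (name, files...) from a peer announcing the hash (BEP 9).
func sampleInfohashes(n node) ([]infoHash, []node) { return nil, nil }
func fetchMetadata(h infoHash) (string, error)     { return "", nil }

func crawl(bootstrap []node) {
	queue := bootstrap
	seen := map[infoHash]bool{}
	for len(queue) > 0 {
		n := queue[0]
		queue = queue[1:]
		hashes, neighbours := sampleInfohashes(n)
		queue = append(queue, neighbours...) // keep walking nodes' routing tables
		for _, h := range hashes {
			if seen[h] {
				continue
			}
			seen[h] = true
			if name, err := fetchMetadata(h); err == nil {
				fmt.Println("discovered:", h, name) // then classify and index it
			}
		}
	}
}

func main() { crawl([]node{"router.bittorrent.com:6881"}) }
```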

This means that Bitmagnet is not reliant on any external trackers or torrent indexers. It’s a self-contained, self-hosted torrent indexer, connected via the DHT to a global network of peers and constantly discovering new content.

The DHT crawler is not quite unique to Bitmagnet; another open-source project, magnetico, was (as far as I know) the first to implement a usable DHT crawler, and was a crucial reference point for implementing this feature. However, that project is no longer maintained, and does not provide the other features, such as content classification and integration with other software in the ecosystem, that greatly improve usability.

Currently implemented features of Bitmagnet:

  • A DHT crawler
  • A generic BitTorrent indexer: Bitmagnet can index torrents from any source, not only the DHT network - currently this is only possible via the /import endpoint; more user-friendly methods are in the pipeline
  • A content classifier that can currently identify movie and television content, along with key related attributes such as language, resolution and source (BluRay, webrip etc.), and that enriches this with data from The Movie Database
  • An import facility for ingesting torrents from any source, for example the RARBG backup
  • A torrent search engine
  • A GraphQL API: currently this provides a single search query; there is also an embedded GraphQL playground at /graphql
  • A web user interface implemented in Angular: currently this is a simple single-page application providing a user interface for search queries via the GraphQL API
  • A Torznab-compatible endpoint for integration with the Servarr stack (see the sketch after this list)
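
As a taster of the Torznab integration, here's a minimal query against a local instance (the /torznab path and port are assumptions based on the default setup; t=search and q are standard Torznab parameters):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// Minimal Torznab search against a local Bitmagnet instance. The path
// and port are assumptions from the default setup; in practice you'd
// point Prowlarr/Sonarr/Radarr at this endpoint rather than query it
// by hand.
func main() {
	resp, err := http.Get("http://localhost:3333/torznab/api?t=search&q=ubuntu")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Status)               // expect 200 OK
	fmt.Println(len(body), "bytes of XML") // an RSS-style results feed
}
```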

Interested?

If this project interests you then I'd really appreciate your input:

  • How did you get along with following the documentation and installation instructions? Were there any pain points?
  • There's a roadmap of high-priority features on the website - what do you see as the highest priority for near-term development?
  • If you're a developer, are you interested in contributing to the project?

Thanks for your attention. If you're interested in this project and would like to help it gain momentum then please give it a star on GitHub, and expect further updates soon!

 

I put this together at the weekend and it got some interest on the selfhosting subreddit on its final day :)

I've been following efforts to create a clone of the RARBG website, but I figured it made more sense to self-host a lightweight Torznab API that can leverage the already excellent Servarr stack.

Hope someone finds it useful!
