cross-posted from: https://lemmy.dbzer0.com/post/21328454
PGSub - A Giant Archive of Subtitles For Everyone
I've been working on this subtitle archive project for some time. It is a Postgres database, along with CLI and API applications that let you easily extract the subtitles you want. It is primarily intended for encoders or people with large libraries, but anyone can use it!
PGSub is composed of three dumps:
- opensubtitles.org.Actually.Open.Edition.2022.07.25
- Subscene V2 (prior to shutdown)
- Gnome's Hut of Subs (as of 2024-04)
As such, it is a good resource for films and series up to around 2022.
Some stats (copied from README):
- Out of 9,503,730 files originally obtained from dumps, 9,500,355 (99.96%) were inserted into the database.
- Out of the 9,500,355 inserted, 8,389,369 (88.31%) are matched with a film or series.
- There are 154,737 unique films or series represented, though note the lines get a bit hazy when considering TV movies, specials, and so forth. 133,780 are films, 20,957 are series.
- 93 languages are represented, with a special '00' language indicating a .mks file with multiple languages present.
- 55% of matched items have a FPS value present.
Once the database is imported, the recommended way to access it is via the CLI application. The CLI and API can be compiled on Windows and Linux (and maybe Mac), and there are also pre-built binaries available.
The database dump is distributed via a torrent, which you can find in the repo (if it doesn't work for you, let me know). It is ~243 GiB compressed, and uses a little under 300 GiB of table space once imported.
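The import step might look something like the sketch below, assuming the dump is distributed as a zstd-compressed plain-SQL `pg_dump` file. The file name, compression format, and database name here are guesses for illustration; the repo README is authoritative.

```shell
# Hypothetical restore sketch -- the dump file name and format are
# assumptions, not the actual PGSub distribution layout.
createdb pgsub                                    # create the target database
zstd -dc pgsub_dump.sql.zst \
  | psql --dbname=pgsub --set ON_ERROR_STOP=on    # stream the dump in, abort on first error
```

Make sure the target volume has headroom for both the compressed dump and the ~300 GiB of table space before starting.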
For a limited time I will devote some resources to bug-fixing the applications, or perhaps adding some small QoL improvements. But, of course, you can always fork them or make your own if they don't suit you.
Why does it take up so much space in compressed form? I'd think text compresses very well, so you should be able to save tons of space compared to the database tables.
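The premise of the question is easy to check: plain subtitle text does compress very well. A quick sketch with Python's stdlib `gzip`, using synthetic SRT-style text (purely illustrative; it says nothing about the actual PGSub dump format or contents):

```python
import gzip

# Synthetic SRT-style subtitle text: timestamp lines plus dialogue.
lines = []
for i in range(1000):
    lines.append(f"00:{i // 60:02d}:{i % 60:02d},000 --> 00:{i // 60:02d}:{i % 60:02d},900")
    lines.append("Some dialogue line.")
sample = "\n".join(lines).encode("utf-8")

ratio = len(gzip.compress(sample)) / len(sample)
print(f"compressed to {ratio:.1%} of original size")
```

Real-world subtitles are less repetitive than this synthetic sample, and the dump also carries table structure, indexes, and metadata alongside the raw text, so its overall ratio will be worse than a toy example suggests.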