this post was submitted on 09 Jan 2025
403 points (98.8% liked)

Opensource


A community for discussion about open source software! Ask questions, share knowledge, share news, or post interesting stuff related to it!




founded 1 year ago
[–] FlyingSquid@lemmy.world 2 points 4 hours ago (1 children)

Now if only I could get it to play nice with my Chromecast... But I'm sure that's on Google.

[–] floquant@lemmy.dbzer0.com 3 points 1 hour ago

Or shitty mDNS implementations

[–] cupcakezealot@lemmy.blahaj.zone 36 points 15 hours ago (4 children)

accessibility is honestly the first good use of ai. i hope they can find a way to make them better than youtube's automatic captions though.

[–] jol@discuss.tchncs.de 4 points 2 hours ago

The app Be My Eyes pivoted from crowd-sourced assistance for the blind to using AI, and it's just fantastic. AI is truly helping lots of people in certain applications.

[–] HK65@sopuli.xyz 8 points 5 hours ago

There are other good uses of AI. Medicine. Genetics. Research, even into humanities like history.

The problem always was the grifters who insist on calling any program more complicated than adding two numbers AI in the first place, trying to shove random technologies into random products just to further their cancerous sales shell game.

The problem is mostly CEOs and salespeople thinking they are software engineers and scientists.

[–] Zetta@mander.xyz 2 points 2 hours ago* (last edited 2 hours ago) (1 children)

Spoiler: they will! I use FUTO keyboard on Android; its speech-to-text uses an AI model and it's amazing how well it works. The model it uses is absolutely tiny compared to what a PC could run, so VLC's implementation will likely be even better.

[–] Landless2029@lemmy.world 1 points 56 minutes ago

I also use FUTO and it's great. But subtitles in a video are quite different from you speaking clearly into a microphone. Even just loud music will mess with a good speech-to-text engine, let alone [Explosions] and [Fighting Noises]. At the least, I hope it picks up speech well.

[–] yonder@sh.itjust.works 8 points 14 hours ago

I know Jeff Geerling on YouTube uses OpenAI's Whisper to generate captions for his videos instead of relying on YouTube's. Apparently they are much better than YouTube's, being nearly flawless. My guess is that Google wants to minimize the compute it uses when processing videos to save money.
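For the curious, that workflow is fairly simple to reproduce. Below is a minimal sketch: the SRT formatting is shown as runnable code, while the actual Whisper transcription step (which needs the `openai-whisper` package and a model download) is left as a comment. The `demo` segments are made up for illustration; they mirror the shape of Whisper's `transcribe()` output.

```python
# Sketch: turn Whisper-style segments into SubRip (.srt) subtitle text.

def fmt_time(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def to_srt(segments) -> str:
    """Render a list of {'start', 'end', 'text'} dicts as an SRT document."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{fmt_time(seg['start'])} --> {fmt_time(seg['end'])}\n"
            f"{seg['text'].strip()}\n"
        )
    return "\n".join(blocks)

# The transcription itself (requires the openai-whisper package):
# import whisper
# model = whisper.load_model("small")
# result = model.transcribe("video.mp4")
# open("video.srt", "w").write(to_srt(result["segments"]))

demo = [{"start": 0.0, "end": 2.5, "text": " Hello there."},
        {"start": 2.5, "end": 4.0, "text": " General Kenobi."}]
print(to_srt(demo))
```

Running this on real videos, the heavy lifting is all in the model; the SRT container format itself is just numbered cues with start/end timestamps.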

[–] clutchtwopointzero@lemmy.world 8 points 14 hours ago (1 children)

I am still waiting for seek previews

[–] DepressedMan@reddthat.com 8 points 15 hours ago

Perhaps we could also get a built-in AI tool for automatic subtitle synchronization?
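The simplest case of synchronization doesn't even need AI: when subtitles are uniformly out of sync, shifting every cue by a fixed offset fixes it. A minimal sketch of that idea (illustrative only, not VLC's actual implementation; the regex assumes standard SRT timestamps):

```python
import re

# Sketch: shift every timestamp in an SRT file by a fixed offset in seconds.
TS = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def shift_srt(srt_text: str, offset: float) -> str:
    """Move all cues later (positive offset) or earlier (negative, clamped at 0)."""
    def bump(m):
        h, mi, s, ms = map(int, m.groups())
        total = max(0, ((h * 60 + mi) * 60 + s) * 1000 + ms + int(offset * 1000))
        h, rem = divmod(total, 3_600_000)
        mi, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1_000)
        return f"{h:02}:{mi:02}:{s:02},{ms:03}"
    return TS.sub(bump, srt_text)

cue = "1\n00:00:01,000 --> 00:00:02,500\nHello.\n"
print(shift_srt(cue, 1.5))  # every timestamp moved 1.5 s later
```

The harder problem, cues that drift progressively or were timed to a different cut, is where speech detection would actually help, by re-anchoring cues to the audio.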

[–] r_deckard@lemmy.world 5 points 14 hours ago

I've been waiting for ~~this~~ break-free playback for a long time. Just play Dark Side of the Moon without breaks between tracks. Surely a single thread could look ahead and see that the next track doesn't need any different codecs launched; it's technically identical to the current track, so there's no need for a break. /rant

[–] UnderpantsWeevil@lemmy.world 5 points 16 hours ago

Thank you for your service

[–] FundMECFSResearch@lemmy.blahaj.zone 166 points 1 day ago (3 children)

I know people are gonna freak out about the AI part in this.

But as a person with hearing difficulties this would be revolutionary. So much shit I usually just can't watch because OpenSubtitles doesn't have any subtitles for it.

[–] kautau@lemmy.world 103 points 1 day ago* (last edited 1 day ago) (1 children)

The most important part is that it’s a local ~~LLM~~ model running on your machine. The problem with AI is less about LLMs themselves, and more about their control and application by unethical companies and governments in a world driven by profit and power. And it’s none of those things, it’s just some open source code running on your device. So that’s cool and good.

[–] technomad@slrpnk.net 37 points 1 day ago (2 children)

Also the incessant amounts of power/energy that they consume.

[–] jsomae@lemmy.ml 16 points 19 hours ago (5 children)

Running an LLM locally takes less power than playing a video game.

[–] vividspecter@lemm.ee 10 points 14 hours ago

Training the models themselves also takes a lot of power, though.

[–] Kit@lemmy.blahaj.zone 6 points 17 hours ago (2 children)

This is great timing considering the recent OpenSubtitles fiasco.

[–] Kit@lemmy.blahaj.zone 4 points 15 hours ago (2 children)

OpenSubtitles now only allows 5 downloads per 24 hours per IP. You have to pay for more.

[–] onlinepersona@programming.dev 1 points 1 hour ago

Oof. Well, they have to make money somehow, and there were probably people abusing the site. It wouldn't surprise me, for example, if many fetched subtitles on demand for each video instead of caching them.

Anti Commercial-AI license

[–] Blackmist 2 points 4 hours ago

Kind of annoying when searching for the exact sub file for the movie file you have.

Especially when half those subtitle files appear to be AI generated anyway, or have weird Asian gambling ads shoved in.

Glad MKV seems to be the standard now, and includes subs from the original sources.

[–] moosetwin@lemmy.dbzer0.com 15 points 21 hours ago (2 children)

I don't mind the idea, but I would be curious where the training data comes from. You can't just train them off of the user's (unsubtitled) videos, because you need subtitles to know if the output is right or wrong. I checked their Twitter post, but it didn't seem to help.

[–] leftytighty@slrpnk.net 16 points 19 hours ago (1 children)

Subtitles aren't a unique dataset; it's just audio-to-text.

[–] nova_ad_vitum@lemmy.ca 12 points 17 hours ago (2 children)

They may have to give it some special training to be able to understand audio mixed by the Chris Nolan school of wtf are they saying.

[–] Warl0k3@lemmy.world 9 points 20 hours ago

I hope they're using OpenSubtitles, or one of the many academic speech-to-text datasets that exist.

[–] TheImpressiveX@lemm.ee 87 points 1 day ago (24 children)

Et tu, Brute?

VLC automatic subtitles generation and translation based on local and open source AI models running on your machine working offline, and supporting numerous languages!

Oh, so it's basically like YouTube's auto-generated subtitles. Never mind.

[–] Evil_Shrubbery@lemm.ee 61 points 1 day ago

All hail the peak humanity levels of VLC devs.

FOSS FTW

[–] Alice@beehaw.org 32 points 1 day ago (1 children)

My experience with generated subtitles is that they're awful. Hopefully these are better, but I wish human beings with brains would make them.

[–] lime@feddit.nu 23 points 23 hours ago (5 children)

subtitling by hand takes sooooo fucking long :( people who do it really are heroes. i did community subs on youtube when that was a thing and subtitling + timing a 20 minute video took me six or seven hours, even with tools that suggested text and helped align it to sound. your brain instantly notices something is off if the subs are unaligned.

[–] onnekas@sopuli.xyz 1 points 1 hour ago

You can use tools like Whishper to pre-generate the subtitles. You will have pretty accurate subtitles at the right times. Then you can edit the errors and maybe adjust the timings.

But I guess this workflow will work with VLC in the future as well

[–] boomzilla@programming.dev 1 points 3 hours ago

Yup. That should always be paid work. It takes forever. I tried to subtitle the first Always Sunny episode. I got very nice results, especially when they talked over one another. But getting the timing perfect, when one line was about to be hidden and the next appeared, was tedious af. All in all, the 25 minutes cost me about the same number of hours. It's just not feasible.
