This post was submitted on 26 Jul 2023
8 points (100.0% liked)

LocalLLaMA

2244 readers

Community to discuss about LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

founded 1 year ago
top 5 comments
wagesj45@kbin.social 1 point 1 year ago

I have a feeling this is going to go much like Stable Diffusion's big 2.0 flop. SD imposed its limits through the training data; Meta imposed its limits via terms and conditions. The end result for both will be the same: the community gravitates toward whatever is usable with the most freedom attached. The most annoying part of the TOS is that you can't use the output to improve other models.

Fuck you, Meta, I wanna make a zillion baby specialist models.

rufus@discuss.tchncs.de 3 points 1 year ago

Well, I've had arguments elsewhere about OpenAI prohibiting the use of its output to improve other models... I'm not sure. My sense of what's right and wrong is hard to square with Meta or OpenAI freely using copyrighted content to train their models, then turning around to claim rights over the output and ban me from using it for the same purpose.

wagesj45@kbin.social 2 points 1 year ago

Good point. I think I'll do whatever I want with it and just keep my trap shut. Good luck proving anything, Zuck.

Naked_Yoga@sh.itjust.works 1 point 1 year ago (last edited 1 year ago)

I used it and was not impressed... I found WizardLM to be far superior.

Also, I agree with @wagesj45 up there about training other models... but how would they even detect that you're using it to train another model? I think one of the best things you can do with a large model is to train a small specialist model on its output.
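
For illustration, a minimal sketch of that distillation idea: generate outputs from a larger "teacher" model, then fine-tune a small "student" model on them. Everything here is an assumption, not anything from the thread: the model names are small placeholders so the snippet actually runs (in practice you'd swap in a Llama 2 variant as the teacher), and the two prompts stand in for a real domain-specific dataset.

```python
# Minimal distillation sketch: a large "teacher" generates training text,
# and a small "student" is fine-tuned on it. Placeholder models and a
# two-prompt "dataset" keep this runnable; swap in real ones for real use.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

TEACHER = "gpt2-large"   # placeholder; imagine a Llama 2 model here
STUDENT = "distilgpt2"   # the small specialist-to-be

# 1) Generate teacher outputs for domain-specific prompts.
teacher_tok = AutoTokenizer.from_pretrained(TEACHER)
teacher = AutoModelForCausalLM.from_pretrained(TEACHER)

prompts = [
    "Q: Explain binary search in one paragraph.\nA:",
    "Q: What does a hash map trade off against a sorted array?\nA:",
]
texts = []
for p in prompts:
    ids = teacher_tok(p, return_tensors="pt").input_ids
    out = teacher.generate(
        ids,
        max_new_tokens=96,
        do_sample=True,
        top_p=0.9,
        pad_token_id=teacher_tok.eos_token_id,
    )
    texts.append(teacher_tok.decode(out[0], skip_special_tokens=True))

# 2) Fine-tune the student on the generated text (causal LM objective).
student_tok = AutoTokenizer.from_pretrained(STUDENT)
student = AutoModelForCausalLM.from_pretrained(STUDENT)

class GeneratedDataset(torch.utils.data.Dataset):
    """Wraps the teacher's outputs as (input_ids, labels) pairs."""
    def __init__(self, texts, tok):
        self.items = [
            tok(t, truncation=True, max_length=512, return_tensors="pt")
            for t in texts
        ]
    def __len__(self):
        return len(self.items)
    def __getitem__(self, i):
        ids = self.items[i]["input_ids"][0]
        return {"input_ids": ids, "labels": ids.clone()}

trainer = Trainer(
    model=student,
    args=TrainingArguments(
        output_dir="student-out",
        num_train_epochs=1,
        per_device_train_batch_size=1,  # batch of 1 avoids padding handling
    ),
    train_dataset=GeneratedDataset(texts, student_tok),
)
trainer.train()
```

This is, of course, exactly the kind of pipeline the TOS clause discussed above would forbid when the teacher's outputs come from Llama 2.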

noneabove1182@sh.itjust.works 1 point 1 year ago

People may not love the model or its outputs, but it's hard to deny the impact that releases like this have on the open-source community. Such a positive bonus, and I'm really happy they're continuing.
