this post was submitted on 05 Sep 2023
Privacy
[–] citytree@lemmy.ml 20 points 1 year ago (1 children)

Why would I use this ChatGPT thing when I can self-host Llama 2 or Falcon, which is free and open source?

[–] philoko@lemmy.ml 8 points 1 year ago (1 children)

I’m a bit out of the loop with LLMs but it depends on what you’re doing.

Last I heard, you’re going to want a 65B or 70B model if you want something that runs as well as GPT-3.5, but good luck getting a GPU with enough VRAM to hold it without breaking the bank. You could offload layers to system RAM or even swap, but that comes with pretty steep performance penalties.
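The VRAM problem above is easy to see with back-of-the-envelope math: weight memory is roughly parameter count times bytes per parameter. A minimal sketch (approximate figures only; real usage adds KV cache and runtime overhead on top of the weights):

```python
# Rough VRAM estimate for holding a dense LLM's weights.
# Ignores KV cache, activations, and framework overhead, so treat
# these as lower bounds, not exact requirements.

def weight_vram_gib(params_billion: float, bits_per_weight: float) -> float:
    """Approximate GiB needed just to store the weights."""
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 2**30

# A 70B model at fp16 needs ~130 GiB; even 4-bit quantization
# leaves ~33 GiB, more than a single 24 GiB consumer card holds.
for bits in (16, 8, 4):
    print(f"70B @ {bits}-bit: ~{weight_vram_gib(70, bits):.0f} GiB")
```

This is why people shard a 70B model across multiple GPUs or offload some layers to system RAM, and why the smaller quantized models mentioned below are so popular on single-card setups.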

I haven’t heard of a model that’s comparable to GPT-4 but, like I said, I’m pretty out of the loop. You’d probably hit the same VRAM and performance issues, only worse, since bigger models are usually better.

All that being said, you might not need some huge model depending on what you’re doing. There are smaller models that fit on consumer GPUs and perform surprisingly well in certain situations. There are also uncensored variants that won’t give you a moral lecture if you ask for something questionable. Then there’s the privacy aspect: I absolutely would not trust OpenAI with any personal information. I believe there’s a way for personal accounts to opt out of having their data used for training, but you’re still trusting OpenAI with whatever you send them.

[–] chicken@lemmy.dbzer0.com 1 points 1 year ago

I'm personally hoping the hardware mismatch issues will sort themselves out in a few years and I can wait to upgrade my GPU then.

[–] Joseph_Boom@feddit.it 14 points 1 year ago (1 children)
[–] RQG@lemmy.world 7 points 1 year ago* (last edited 1 year ago) (1 children)

I don't believe them unless they upgrade to a pinky swear.

[–] RobotToaster@infosec.pub 5 points 1 year ago (1 children)
[–] SinningStromgald@lemmy.world 3 points 1 year ago (1 children)
[–] KIM_JONG@lemmy.world 1 points 1 year ago

Somehow I read spit shake as like a spit take with a milkshake, and was very confused.

[–] adespoton@lemmy.ca 11 points 1 year ago

Weird… that’s dated August 28, but I’ve known about it since late June….

[–] dBot@midwest.social 7 points 1 year ago (1 children)

Umm, how do you train a model without data?

[–] dan@lemm.ee 4 points 1 year ago

They use data, just not the data from the customers paying them for enterprise licenses.

Honestly, fear of leaking customer data is the only thing that’s kept my work from spunking every single byte of data we have at some LLM service in a lazy attempt to come up with a product they can sell with minimal effort. They’re gonna love this shit.