Running llama-2-7b-chat at 8-bit quantization, completions are essentially at GPT-3.5 level on a single 4090 using 15 GB of VRAM. I don't think most people realize just how small and efficient these models are going to become.

[cut out many, many paragraphs of LLM-generated output which prove… something?]
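
for the record, the setup being bragged about looks roughly like this (a sketch assuming the Hugging Face transformers + bitsandbytes stack; the quoted post doesn't say what was actually used):

```python
# rough sketch of 8-bit llama-2-7b-chat inference on a single GPU,
# assuming the transformers + bitsandbytes stack
# (pip install transformers accelerate bitsandbytes)
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # gated repo: requires Meta's approval
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_8bit=True,   # bitsandbytes int8 quantization, roughly 7 GB of weights
    device_map="auto",   # place layers on whatever GPU is available
)

inputs = tokenizer("what's so great about quantization?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```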

my chatbot is so small and efficient it only fully utilizes one $2000 graphics card per user! that’s only 450W for as long as it takes the thing to generate whatever bullshit it’s outputting, drawn by a graphics card that’s priced so high not even gamers are buying them!

you’d think my industry would have learned something from being tricked into running loud, hot, incredibly power-hungry crypto mining rigs under their desks for no profit at all, but nah

not a single thought spared for how this can’t possibly be any more cost-effective for OpenAI either; just the assumption that their APIs will somehow always be cheaper than the hardware and energy required to run the model
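
to put rough numbers on that (every figure below is an assumption for illustration; none of them come from the quoted post):

```python
# back-of-envelope on the one-$2000-card-per-user economics;
# every number here is an assumed illustration, not a measurement
card_cost_usd = 2000        # 4090 street price
card_lifetime_years = 3     # assumed useful life of the card
draw_watts = 450            # power draw while generating
usd_per_kwh = 0.15          # assumed electricity rate
utilization = 0.02          # one user's card actually generating ~30 min/day

total_hours = card_lifetime_years * 365 * 24
hardware_per_active_hour = card_cost_usd / (total_hours * utilization)
power_per_active_hour = (draw_watts / 1000) * usd_per_kwh

print(f"hardware: ~${hardware_per_active_hour:.2f} per hour of actual generation")
print(f"power:    ~${power_per_active_hour:.2f} per hour of actual generation")
```

at single-user utilization the card itself dwarfs the power bill, and either way somebody is paying for the same silicon and the same watts; the only question is whose balance sheet it lands on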

al177@lemmy.sdf.org 0 points 1 year ago

It's a shame that analog inference accelerators are taking so long to hit the market. GPUs are way too expensive and power-hungry for inference when you don't need the ability to train a network.

self@awful.systems 1 point 1 year ago

oh totally, upgrading from GPUs to ASICs will really increase my ~~hash rate~~ ~~mining profits~~ number of concurrent conversations