this post was submitted on 09 Jul 2023
8 points (100.0% liked)

LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.


So I'm looking to get a GPU into my "beast" (a 24-core, 128 GB tower with too much PCIe). I thought I might buy a used 3090, but then it hit me: most applications can work with multiple GPUs. So I decided to take €600 to eBay, and using TechPowerUp I compared cards by memory bandwidth and FP32 performance. That brought me to the following options for my own LLaMA, Stable Diffusion, and Blender setup: 5 Tesla K80s, 3 Tesla P40s, or 2 3060s. But I can't figure out which would be better for performance and future-proofing.

The main difference I found is the CUDA version, but I can't really work out why that matters. I also found that 5 K80s are far more power-hungry than 3 P40s, and that if memory size is what really matters, the P40s are the way to go. But I couldn't pin down real performance numbers, because I can't find benchmarks like this one for Blender.
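To make that comparison concrete, here's the kind of back-of-the-envelope math I mean. The spec numbers are rough values pulled from TechPowerUp, and the totals are naive sums, not benchmark results:

```python
# Rough comparison of the three candidate builds.
# Spec numbers are approximate values from TechPowerUp;
# treat the aggregates as a sanity check, not a benchmark.

candidates = {
    # name: (cards, vram_gb_per_card, mem_bw_gbps, fp32_tflops, tdp_w)
    "Tesla K80 x5": (5, 24, 480, 8.2, 300),   # each K80 is 2 GPUs with 12 GB each
    "Tesla P40 x3": (3, 24, 347, 11.8, 250),
    "RTX 3060 x2":  (2, 12, 360, 12.7, 170),
}

for name, (cards, vram, bw, fp32, tdp) in candidates.items():
    print(
        f"{name}: {cards * vram} GB total VRAM, "
        f"{cards * bw:.0f} GB/s aggregate bandwidth, "
        f"{cards * fp32:.1f} TFLOPS FP32, {cards * tdp} W TDP"
    )
```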

So if anyone has a good source for Stable Diffusion and LLaMA benchmarks, I'd appreciate it if you could share it. And if you own one or more of these cards and can tell me which option is better, I'd appreciate your opinion.
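For what it's worth, the "CUDA version" differences come down to each card's compute capability, which is what frameworks gate support on. A quick way to check what each card reports, assuming a working PyTorch install:

```python
# Check what the installed framework can actually use.
# Assumes PyTorch and the NVIDIA driver are installed.
import torch

for i in range(torch.cuda.device_count()):
    major, minor = torch.cuda.get_device_capability(i)
    name = torch.cuda.get_device_name(i)
    print(f"GPU {i}: {name}, compute capability sm_{major}{minor}")
    # Kepler (K80, sm_37) has been dropped by recent CUDA/PyTorch builds;
    # Pascal (P40, sm_61) works but has very slow FP16;
    # Ampere (3060, sm_86) adds tensor cores and fast FP16/BF16.
```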

plotting_homelab@lemmy.world · 1 point · 1 year ago (edited)

> P40 I'd be glad to run a benchmark on, just tell me how.

Yeah, I think that's kind of the issue today: there isn't really a benchmark for that kind of stuff. From what I understand, the P40 is perfectly capable of running some larger models because of its 24 GB. What I don't understand is the fan and fan coupling you're talking about. What do you mean by that, and is it required? I have a Supermicro SC747 (see link for example); would that require more airflow through the GPUs to cool them?
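In the absence of a standard benchmark, something like this crude tokens-per-second timer would at least give numbers you can compare across cards. A sketch assuming PyTorch plus Hugging Face transformers (with accelerate installed for `device_map="auto"`); the model name is just an example placeholder:

```python
# Minimal tokens-per-second timer; swap in whatever LLaMA-style
# checkpoint you actually have locally.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "openlm-research/open_llama_7b"  # example placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)

# Warm-up run so one-time CUDA setup doesn't skew the timing.
model.generate(**inputs, max_new_tokens=8)

start = time.time()
out = model.generate(**inputs, max_new_tokens=128)
elapsed = time.time() - start

new_tokens = out.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{new_tokens / elapsed:.1f} tokens/s")
```

Note that float16 is a worst case on Pascal cards like the P40, which do FP16 math very slowly; running the same script in float32 would be the fairer comparison there.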

foolsh_one@sh.itjust.works · 1 point · 1 year ago (edited)

The P40 doesn't have active cooling; it really needs forced airflow, which is why I grabbed one of these:

https://www.ebay.com/itm/285241802202

It's even cheaper now than when I bought mine.
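If you do run one passively like this, it's worth watching temperatures under load to confirm the forced airflow is actually enough. A small sketch that polls `nvidia-smi` (which ships with the NVIDIA driver):

```python
# Simple temperature/power watchdog for a passively cooled card
# like the P40. Polls nvidia-smi once per second; Ctrl+C to stop.
import subprocess
import time

while True:
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=index,name,temperature.gpu,power.draw",
         "--format=csv,noheader"]
    )
    print(out.decode().strip())
    time.sleep(1)
```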

plotting_homelab@lemmy.world · 1 point · 1 year ago

Oh, so a real server chassis provides that airflow, but towers/desktops lack that flow, so the cards overheat. I get it, good to know.

foolsh_one@sh.itjust.works · 1 point · 1 year ago (edited)

Correct, my backplane doesn't have the airflow of a big server box. Another gotcha: the P40 uses an 8-pin CPU (EPS) power plug, not an 8-pin PCIe GPU plug.

Edit: 8-pin, not 6-pin.
