this post was submitted on 06 Jul 2023

Stable Diffusion

I'm using the DirectML fork of A1111 because it's the only one I can get to work.

For professional reasons, I have to use an AMD GPU. I had a 6650 and was able to upgrade to a 6700 XT for the extra VRAM, but the change has made no difference in the errors telling me I'm out of VRAM.

I am fairly frustrated because it seems I'm locked out of a lot of really neat and powerful features. Generating a batch of 4 images at 512x512 already takes a couple of minutes, and raising that resolution at all increases the time considerably. I can do very little with img2img, and ControlNet is effectively useless for me.

So, now that I'm done whining: is there any news about AMD improvements that might bring performance up to even a decent level compared with similar Nvidia cards?
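As a sanity check before blaming the card itself, it may be worth confirming which adapter the DirectML backend is actually picking up (on machines with an integrated GPU, device 0 is not always the discrete card). A minimal sketch, assuming the torch-directml package used by the DirectML fork is installed in the same Python environment:

```python
# Rough check of what torch-directml can see; assumes the torch-directml
# package that the DirectML fork of A1111 relies on is installed.
import torch
import torch_directml

print("DirectML adapters:", torch_directml.device_count())
for i in range(torch_directml.device_count()):
    print(i, torch_directml.device_name(i))  # the 6700 XT should show up here

dml = torch_directml.device()            # default DirectML adapter
x = torch.randn(1024, 1024, device=dml)
print((x @ x).device)                    # confirms the matmul ran on the DML device, not the CPU
```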

top 4 comments
Fubarberry@lemmy.fmhy.ml 3 points 1 year ago

You should be getting better performance than that. I have a 6600 XT and it'll generate a 512x512 image in about 7 seconds. Taking several minutes for a batch of 4 is a lot slower than you should be getting.

I will admit there are definitely VRAM issues with higher resolutions or some ControlNet types (depth, for example).
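For anyone who wants to sanity-check raw generation speed outside the A1111 UI, here is a minimal timing sketch using the diffusers library (the runwayml/stable-diffusion-v1-5 weights and the 20-step setting are illustrative assumptions, not the exact setup in this thread):

```python
# Rough single-image timing test with diffusers; the numbers won't match
# A1111 exactly, but a 512x512 image should land in the same ballpark
# (seconds, not minutes) on a working GPU backend.
import time
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # "cuda" also covers ROCm builds of PyTorch; DirectML needs the torch_directml device instead

start = time.time()
image = pipe("a lighthouse at sunset", num_inference_steps=20).images[0]
print(f"generated in {time.time() - start:.1f} s")
```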

IntheTreetop@lemm.ee 2 points 1 year ago

Thanks. It is very likely I've screwed it up somehow. Other setups I tried would only use the CPU, but I know for sure that isn't the case this time.

I'll try and figure out what the issue is.

turbodrooler@lemmy.world 3 points 1 year ago

Way better performance on Linux using ROCm. I used this (flawed) tutorial: https://youtu.be/2XYbtfns1BU I now have a Linux install on an external SSD that I boot into to use A1111. I'm using Zeroscope as well. No issues so far. 6800 XT.
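If it helps anyone setting this up, a quick way to confirm that a Linux install is really running on ROCm rather than silently falling back to CPU is the sketch below (it assumes a ROCm build of PyTorch ended up installed, which is what the AMD-on-Linux route normally uses):

```python
# Quick ROCm sanity check; these are all standard PyTorch calls.
import torch

print(torch.__version__)           # ROCm wheels typically report a "+rocmX.Y" suffix
print(torch.version.hip)           # non-None on ROCm/HIP builds, None otherwise
print(torch.cuda.is_available())   # ROCm reuses the torch.cuda API, so this should be True
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # should name the Radeon card
```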

KiranWells@pawb.social 1 point 1 year ago

I would guess it's one of three things:

  1. You are using Windows instead of Linux with ROCm (I don't know how much this affects performance, as I am only on Linux).
  2. You are generating the whole batch at the same time (batch size) instead of doing multiple generations one after another (batch count). Running them in parallel keeps more data in VRAM at once, which can lead to out-of-memory issues; see the sketch after this list.
  3. You are not fully using the GPU. Does Task Manager show 100% utilization?
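To illustrate point 2, here is a minimal sketch of the difference using the diffusers library rather than A1111 itself (the model name is just an assumption for the example); A1111 exposes the same choice as "batch size" versus "batch count":

```python
# Peak VRAM differs between generating 4 images in one parallel batch and
# generating them one at a time; the parallel batch keeps 4 sets of latents
# and activations resident at once.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a watercolor landscape"

# Higher peak VRAM: all 4 images at once (A1111 "batch size" = 4)
batch = pipe(prompt, num_images_per_prompt=4).images

# Lower peak VRAM: 4 sequential runs (A1111 "batch count" = 4)
sequential = [pipe(prompt).images[0] for _ in range(4)]
```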