I have an RX 6650 XT and I generally use llama.cpp with the ROCm patch (tested up to commit ac7876ac20124a15a44fd6317721ff1aa2538806).
It works great with around 25 layers offloaded to the GPU on my 8 GB card; drop to about 18 if you want to do something else GPU-related at the same time (like watching a hardware-accelerated video).
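For reference, layer offloading is controlled with the `-ngl` / `--n-gpu-layers` flag on llama.cpp's `main` binary. A minimal sketch of what my invocation looks like (the model path is just a placeholder, not a specific recommendation):

```sh
# Offload 25 layers to the 8 GB GPU; use ~18 instead if you need
# VRAM headroom for other GPU work. Model path is a placeholder.
./main -m ./models/your-model.bin -ngl 25 -p "Hello"
```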
To be fair, it's been a long time since I last updated llama.cpp, and it has gone through a lot of changes in the meantime, like the addition of the LLAMA_CUDA_DMMV_X, LLAMA_CUDA_DMMV_Y, and LLAMA_CUDA_KQUANTS_ITER parameters, so your mileage may vary and it's possible you'll have to manually modify the PR before merging it. Not really a one-click experience if you're chasing the best performance.
It currently doesn't support SuperHOT or similar context-extension techniques, mainly because new ones are being pushed almost daily, and the developers are waiting to see which will be the real winner.
But I've gone a bit off-topic. I think the easiest option, as the other commenter said, is to just go with kobold.cpp. I personally didn't have a good experience with text-generation-webui, but a lot of people swear by it.