this post was submitted on 29 Jul 2023
39 points (100.0% liked)
Stable Diffusion
you are viewing a single comment's thread
Looks cool! If it's browser-based, have you got any plans to dockerize it?
Yes! That is the very next big feature to tackle after adding macOS support (and the surprise that was needing to add SDXL support). I've been trying to weave between addressing bug reports and feature requests while also trying to understand what hardware people are actually trying to use. It seems like I've covered the vast majority of use cases for casual tinkerers and self-hosters, so now it's time to make the Docker build for advanced users and people wanting to run this on a remote server.
In theory, the portable installation should "just work" in Docker, though the Nvidia runtime could cause trouble - but I'll start publishing Docker images to the repository with version 0.2.1.
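In the meantime, here's a rough sketch of what running the portable build under Docker with GPU passthrough might look like - the image name, port, and data path are placeholders rather than anything I actually publish yet, and --gpus all assumes the NVIDIA Container Toolkit is installed on the host:

# Placeholder image name, port, and data path - not a published image.
docker run --rm -it --gpus all -p 45554:45554 -v "$HOME/enfugue-data:/data" enfugue-portable:latest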
Thank you for the feedback!
Sounds good, looking forward to trying it! Personally I like to use Docker on my Linux desktop PC for web-server-based apps. It makes it easy to run and update everything without having to rely on custom installers and updaters, and it usually gives better control over which port to use and where to store data. I've been using AbdBarho's Docker files for A1111 and ComfyUI, which make it very easy to share models and other large files between the two.
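As a rough illustration of the kind of sharing I mean (the image names and container paths here are made up, not AbdBarho's actual setup - only the default ports are real):

# Both containers bind-mount the same host model directory, so checkpoints only get downloaded once.
MODELS=$HOME/sd/models
docker run -d --gpus all -p 7860:7860 -v "$MODELS:/models" a1111-webui:latest
docker run -d --gpus all -p 8188:8188 -v "$MODELS:/models" comfyui:latest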
I've used CUDA in Docker quite a lot, and it has even helped me solve problems - e.g. some llama apps needed the CUDA toolkit, which wasn't available for Fedora 38. I think the biggest challenge with Docker is making sure the right dependencies get built into the image, and that all run-time data is contained in mounted volumes. If you need any help with Docker, let us know - I'm not some kind of super pro, but I have a fair amount of experience with it.
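For example, when the toolkit isn't packaged for the host distro, you can get it from a devel image instead - the tag below is only an example, pick whichever CUDA version the app needs:

# The devel image ships nvcc, so the toolkit never has to be installed on the host.
docker run --rm nvidia/cuda:12.1.1-devel-ubuntu22.04 nvcc --version
# And --gpus all (via the NVIDIA Container Toolkit) exposes the host GPU inside the container:
docker run --rm --gpus all nvidia/cuda:12.1.1-devel-ubuntu22.04 nvidia-smi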
If you're collecting info about users' hardware: I have a Ryzen 7 7700X, 32GB RAM, and an RTX 3080 with 12GB VRAM.
Hi! The Docker version is out! 😁 Just run
docker pull ghcr.io/painebenjamin/app.enfugue.ai:latest
to get it. There's some more documentation on ports, volumes, etc. on the wiki.
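For example (the port and host path below are just guesses to show the shape of the command - the wiki has the real values):

# Port and host cache path are illustrative - see the wiki for the actual settings.
docker run --rm -it --gpus all -p 45554:45554 -v "$HOME/.cache/enfugue:/root/.cache/enfugue" ghcr.io/painebenjamin/app.enfugue.ai:latest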