For LLMs, I’ve had really good results running Llama 3 in the Open WebUI Docker container on an Nvidia Titan X (12 GB VRAM).
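The setup is basically a one-liner once Docker and the NVIDIA container toolkit are in place. Going from memory (so treat the image tag, ports, and volume names as approximate and double-check the readme), it’s something like:

```
# Open WebUI with bundled Ollama, using the GPU (details from memory)
docker run -d --gpus=all \
  -p 3000:8080 \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:ollama

# pull Llama 3 into the bundled Ollama instance
docker exec -it open-webui ollama pull llama3
```

With that port mapping the UI ends up on http://localhost:3000 and you just pick llama3 from the model dropdown.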
For image generation though, I agree more VRAM is better, but the algorithms still struggle with large image dimensions, so you wind up needing to start small and iteratively upscale, which afaik works OK on weaker GPUs but can run into problems. (I’ve been using the AUTOMATIC1111 mode of the Stable Diffusion Web UI Docker project.)
I’m on thumbs so I don’t have the links to the git repos atm, but you basically clone them and run the docker compose files. The readmes are pretty good!
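To sketch the pattern (repo URL left as a placeholder since I don’t have the links handy, and the compose profile names for the Stable Diffusion project are from memory, so verify against its readme):

```
# clone whichever repo you're after (placeholder URL, grab it from the project page)
git clone <repo-url>
cd <repo-dir>

# for the Stable Diffusion webui docker project, iirc you download the models first,
# then bring up the AUTOMATIC1111 UI via a compose profile:
docker compose --profile download up --build
docker compose --profile auto up --build
```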
No, it just goes to its extraction point! …somehow.