I guess at this point I should give up on this approach and try another setup, like this one: https://github.com/basujindal/…
| modified | Thursday 27 November 2025 |
|---|---|
🖼️ archlinux AI stable_diffusion
yay -S anaconda
source /opt/anaconda/bin/activate root
conda install pytorch==1.12.1 torchvision==0.13.1 -c pytorch
pip install transformers==4.19.2 diffusers invisible-watermark

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
daal4py 2021.6.0 requires daal==2021.4.0, which is not installed.
numba 0.55.1 requires numpy<1.22,>=1.18, but you have numpy 1.24.1 which is incompatible.
pip install daal==2021.4.0
pip install -e .
sudo pacman -S cuda
export CUDA_HOME=/opt/cuda
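At this point a quick sanity check can confirm the pacman CUDA toolkit and the conda-installed torch line up (just a sketch, using the paths set above):

```sh
# nvcc from the pacman cuda package, found via CUDA_HOME=/opt/cuda
"$CUDA_HOME/bin/nvcc" --version
# does the torch build in this env actually see CUDA and the GPU?
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```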
sudo conda install -c nvidia/label/cuda-11.4.0 cuda-nvcc
sudo conda install -c conda-forge gcc
sudo conda install -c conda-forge gxx_linux-64==9.5.0
EnvironmentNotWritableError: The current user does not have write permissions to the target environment.
environment location: /opt/anaconda
uid: 1000
gid: 1000
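The write-permission error is just /opt/anaconda being root-owned; instead of running conda and pip through sudo, one workaround is a user-owned environment prefix. A minimal sketch, assuming ~/conda-envs/sd is an acceptable location (the Python version is only an example):

```sh
# create an environment the current user owns,
# so later conda/pip installs don't need sudo
conda create -p ~/conda-envs/sd python=3.8
conda activate ~/conda-envs/sd
conda install pytorch==1.12.1 torchvision==0.13.1 -c pytorch
```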
git submodule update --init --recursive
pip install -r requirements.txt
sudo pip install -e .
ModuleNotFoundError: No module named 'torch'
so I ran sudo conda install pytorch, then sudo pip install -e .
pip install omegaconf
pip install torchvision
pip install pytorch_lightning
pip install pytorch-lightning
pip install open_clip_torch
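A one-liner import check after that round of installs saves another trip through the traceback (the names are the packages' import names, e.g. open_clip for open_clip_torch):

```sh
# fail fast if any of the freshly installed dependencies still can't be imported
python -c "import torch, torchvision, omegaconf, pytorch_lightning, open_clip; print('imports ok')"
```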
sudo python setup.py build develop
pip install triton==2.0.0.dev20221120
created an output directory and ran:
python scripts/txt2img.py --prompt "a professional photograph of an astronaut riding a horse" --ckpt v2-1_768-ema-pruned.ckpt --config configs/stable-diffusion/v2-inference-v.yaml --H 768 --W 768 --outdir output
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Killed
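A bare "Killed" with no Python traceback is usually the kernel's OOM killer running out of system RAM rather than a CUDA error; the kernel log will say so (a check added here for reference, not part of the original run):

```sh
# look for OOM-killer entries around the time txt2img died
sudo dmesg -T | grep -i -E "out of memory|killed process"
```

Hence the retry below at a much smaller resolution.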
python scripts/txt2img.py --prompt "a professional photograph of an astronaut riding a horse" --ckpt v2-1_768-ema-pruned.ckpt --config configs/stable-diffusion/v2-inference-v.yaml --ddim_eta 0.0 --n_samples 3 --n_iter 3 --scale 5.0 --steps 100 --H 192 --W 192 --outdir output
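While the smaller run goes, watching system RAM and GPU memory from a second terminal shows which one is the bottleneck (generic monitoring commands, nothing specific to this repo):

```sh
# system RAM, refreshed every 2 seconds
watch -n 2 free -h
# or GPU memory, sampled every 2 seconds
nvidia-smi --query-gpu=memory.used,memory.total --format=csv -l 2
```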
gh repo clone basujindal/stable-diffusion
mv ../stablediffusion/v2-1_768-ema-pruned.ckpt ../sd-data/model.ckpt
yay -S nvidia-container-toolkit
set no-cgroups = false in /etc/nvidia-container-runtime/config.toml
sudo systemctl restart docker
make
so that's another 4GB of downloads and a 14GB installation size. 🥲
docker-compose build
docker-compose run
docker-compose run again.
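Before the compose build, it may be worth confirming that containers can actually see the GPU at all (a quick check, assuming the nvidia-container-toolkit setup above; the CUDA image tag is just an example):

```sh
# verify the NVIDIA runtime passes the GPU through to containers
# (any CUDA base image that ships nvidia-smi will do)
docker run --rm --gpus all nvidia/cuda:11.4.3-base-ubuntu20.04 nvidia-smi
```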



