
## Local AI Nvidia CUDA 12

This container bundles Local AI with Nvidia GPU support via CUDA 12 and auto-configures it for you.

### Notes

- To use this container, you need to configure additional environment variables on the Nextcloud AIO master container: set `ENABLE_NVIDIA_GPU=true` and `NEXTCLOUD_ENABLE_DRI_DEVICE=true`, and include `local-ai-cuda12` in `AIO_COMMUNITY_CONTAINERS`.
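As a rough sketch, the variables above can be passed when starting the AIO master container. The port mapping and image tag below follow the standard AIO setup; adapt them to your own deployment.

```shell
# Sketch: starting the Nextcloud AIO master container with the
# variables that this community container needs. Flags other than the
# three environment variables follow the standard AIO instructions;
# adjust them to your deployment.
docker run -d \
  --name nextcloud-aio-mastercontainer \
  -e ENABLE_NVIDIA_GPU=true \
  -e NEXTCLOUD_ENABLE_DRI_DEVICE=true \
  -e AIO_COMMUNITY_CONTAINERS="local-ai-cuda12" \
  -p 8080:8080 \
  -v nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  nextcloud/all-in-one:latest
```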

- Make sure to have enough storage space available. This container alone needs ~48 GB of storage and will likely need another ~48 GB when it updates, so plan for ~96 GB of free space. Every model that you add to `models.yaml` will of course use additional space, which adds up quickly.

- After the container has started for the first time, you should see a new `nextcloud-aio-local-ai` folder when you open the Files app as the default admin user. In it you should find a `models.yaml` config file where you can add models. See the LocalAI model gallery documentation for further URLs that you can put in there. Afterwards, restart all containers from the AIO interface, and the models should automatically be downloaded and activated by the local-ai container.

- Example content for `models.yaml` (if you add all of them, it takes around 10 GB of additional space):

  ```yaml
  # Stable Diffusion in NCNN with C++, supports txt2img and img2img
  - url: github:mudler/LocalAI/gallery/stablediffusion.yaml
    name: Stable_diffusion
  ```
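Before restarting all containers, it can save a download cycle to sanity-check the file. The following is a minimal sketch using only the Python standard library (the function name is ours, not part of LocalAI); it only verifies that each entry pairs a `url` with a `name`, not full YAML validity.

```python
def check_models_yaml(text: str) -> list[str]:
    """Return the model names found; raise if an entry lacks a url or name."""
    names = []
    current_url = None
    for line in text.splitlines():
        stripped = line.strip()
        # Skip comments and blank lines.
        if not stripped or stripped.startswith("#"):
            continue
        if stripped.startswith("- url:"):
            if current_url is not None:
                raise ValueError(f"entry {current_url!r} has no name")
            current_url = stripped.split(":", 1)[1].strip()
        elif stripped.startswith("name:"):
            if current_url is None:
                raise ValueError("found a name without a preceding url")
            names.append(stripped.split(":", 1)[1].strip())
            current_url = None
    if current_url is not None:
        raise ValueError(f"entry {current_url!r} has no name")
    return names

example = """\
# Stable Diffusion in NCNN with c++, supported txt2img and img2img
- url: github:mudler/LocalAI/gallery/stablediffusion.yaml
  name: Stable_diffusion
"""
print(check_models_yaml(example))  # ['Stable_diffusion']
```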

### Repository

https://github.com/luzfcb/aio-local-ai

### Maintainer

https://github.com/luzfcb