Merge branch 'main' into ench/noid/vulkan-ai
Signed-off-by: Jean-Yves <7360784+docjyJ@users.noreply.github.com>
commit 5d30f0f82d
184 changed files with 3865 additions and 1690 deletions
@@ -5,7 +5,7 @@ This container allows to view the local borg repository in a web session. It als
- After adding and starting the container, you need to visit `https://ip.address.of.this.server:5801` in order to log in with the user `nextcloud` and the password that you can see next to the container in the AIO interface. (The web page uses a self-signed certificate, so you need to accept the warning.)
- Then, you should see a terminal. There, type in `borg mount /mnt/borgbackup/borg /tmp/borg` to mount the backup archive at `/tmp/borg` inside the container. Afterwards, type in `nautilus /tmp/borg`, which will show a file explorer and allow you to see all the files. You can then copy files and folders back to their initial mountpoints inside `/nextcloud_aio_volumes/`, `/host_mounts/` and `/docker_volumes/`. ⚠️ Be very careful while doing that, as it can break your instance!
- After you are done with the operation, click on the terminal in the background and press `[CTRL]+[c]` multiple times to close any open application. Then run `umount /tmp/borg` to unmount the mountpoint correctly.
- You can also list the archives by running `borg list`, delete a specific archive e.g. via `borg delete --stats --progress "::20220223_174237-nextcloud-aio"` and compact the archives via `borg compact`. After doing so, make sure to update the backup archives list in the AIO interface! You can do so by clicking on the `Check backup integrity` button or `Create backup` button.
- You can also list the archives by running `borg list`, delete a specific archive e.g. via `borg delete --stats --progress "::20220223_174237-nextcloud-aio"` and compact the archives via `borg compact`. After doing so, make sure to update the backup archives list in the AIO interface! You can do so by clicking on the `Update backup list` button in the `Update backup list` section inside the `Backup and restore` section.
- ⚠️ After you are done doing your operations, remove the container from the stack again for better security: https://github.com/nextcloud/all-in-one/tree/main/community-containers#how-to-remove-containers-from-aios-stack
- See https://github.com/nextcloud/all-in-one/tree/main/community-containers#community-containers for how to add it to the AIO stack
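A minimal sketch of the whole session described in the bullets above, as run inside the borgbackup container's web terminal; the archive name is the example one from the list and has to be replaced with a real one from `borg list`:

```bash
# Mount the borg repository so its archives become browsable
borg mount /mnt/borgbackup/borg /tmp/borg

# Optionally open a graphical file explorer on the mounted archives
nautilus /tmp/borg

# ...copy files/folders back to /nextcloud_aio_volumes/, /host_mounts/ or /docker_volumes/...

# When done, close open applications ([CTRL]+[c]) and unmount cleanly
umount /tmp/borg

# Housekeeping: list archives, delete one, then compact the repository
borg list
borg delete --stats --progress "::20220223_174237-nextcloud-aio"
borg compact
```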
@@ -5,7 +5,7 @@
            "display_name": "Caddy with geoblocking",
            "documentation": "https://github.com/nextcloud/all-in-one/tree/main/community-containers/caddy",
            "image": "ghcr.io/szaimen/aio-caddy",
            "image_tag": "v2",
            "image_tag": "v4",
            "internal_port": "443",
            "restart": "unless-stopped",
            "ports": [
@@ -13,17 +13,14 @@
                    "ip_binding": "",
                    "port_number": "443",
                    "protocol": "tcp"
                },
                {
                    "ip_binding": "",
                    "port_number": "443",
                    "protocol": "udp"
                }
            ],
            "environment": [
                "TZ=%TIMEZONE%",
                "NC_DOMAIN=%NC_DOMAIN%",
                "APACHE_PORT=%APACHE_PORT%"
                "APACHE_PORT=%APACHE_PORT%",
                "APACHE_IP_BINDING=%APACHE_IP_BINDING%",
                "NEXTCLOUD_EXPORTER_CADDY_PASSWORD=%NEXTCLOUD_EXPORTER_CADDY_PASSWORD%"
            ],
            "volumes": [
                {
@@ -37,9 +34,14 @@
                    "writeable": false
                }
            ],
            "secrets": [
                "NEXTCLOUD_EXPORTER_CADDY_PASSWORD"
            ],
            "aio_variables": [
                "apache_ip_binding=@INTERNAL",
                "apache_port=11000"
                "apache_port=11000",
                "turn_domain=%NC_DOMAIN%",
                "talk_port=443"
            ],
            "nextcloud_exec_commands": [
                "mkdir '/mnt/ncdata/admin/files/nextcloud-aio-caddy'",
@@ -1,19 +1,24 @@
## Caddy with geoblocking
This container bundles Caddy and auto-configures it for you. It also covers [vaultwarden](https://github.com/nextcloud/all-in-one/tree/main/community-containers/vaultwarden) by listening on `bw.$NC_DOMAIN`, if installed. It also covers [stalwart](https://github.com/nextcloud/all-in-one/tree/main/community-containers/stalwart) by listening on `mail.$NC_DOMAIN`, if installed. It also covers [jellyfin](https://github.com/nextcloud/all-in-one/tree/main/community-containers/jellyfin) by listening on `media.$NC_DOMAIN`, if installed. It also covers [lldap](https://github.com/nextcloud/all-in-one/tree/main/community-containers/lldap) by listening on `ldap.$NC_DOMAIN`, if installed. It also covers [nocodb](https://github.com/nextcloud/all-in-one/tree/main/community-containers/nocodb) by listening on `tables.$NC_DOMAIN`, if installed. It also covers [jellyseerr](https://github.com/nextcloud/all-in-one/tree/main/community-containers/jellyseerr) by listening on `requests.$NC_DOMAIN`, if installed.
This container bundles Caddy and auto-configures it for you. It also covers [vaultwarden](https://github.com/nextcloud/all-in-one/tree/main/community-containers/vaultwarden) by listening on `bw.$NC_DOMAIN`, if installed. It also covers [stalwart](https://github.com/nextcloud/all-in-one/tree/main/community-containers/stalwart) by listening on `mail.$NC_DOMAIN`, if installed. It also covers [jellyfin](https://github.com/nextcloud/all-in-one/tree/main/community-containers/jellyfin) by listening on `media.$NC_DOMAIN`, if installed. It also covers [lldap](https://github.com/nextcloud/all-in-one/tree/main/community-containers/lldap) by listening on `ldap.$NC_DOMAIN`, if installed. It also covers [nocodb](https://github.com/nextcloud/all-in-one/tree/main/community-containers/nocodb) by listening on `tables.$NC_DOMAIN`, if installed. It also covers [jellyseerr](https://github.com/nextcloud/all-in-one/tree/main/community-containers/jellyseerr) by listening on `requests.$NC_DOMAIN`, if installed. It also covers [nextcloud-exporter](https://github.com/nextcloud/all-in-one/tree/main/community-containers/nextcloud-exporter) by listening on `metrics.$NC_DOMAIN`, if installed.

### Notes
- This container is incompatible with the [npmplus](https://github.com/nextcloud/all-in-one/tree/main/community-containers/npmplus) community container. So make sure that you do not enable both at the same time!
- Make sure that no other service is using port 443 on your host, as otherwise the containers will fail to start. You can check this with `sudo netstat -tulpn | grep 443` before installing AIO.
- Make sure that no other service is using port 443/tcp on your host, as otherwise the containers will fail to start. You can check this with `sudo netstat -tulpn | grep 443` before installing AIO.
- Starting with AIO v12, the Talk port that was usually exposed on port 3478 is now set to port 443 (UDP and TCP) and is reachable via `your-nc-domain.com`. For the changes to become active, you need to go to `https://your-nc-domain.com/settings/admin/talk` and delete all TURN and STUN servers. Then restart the containers and the new config should become active.
- Starting with AIO v12, you can also limit vaultwarden, stalwart and lldap to certain IP addresses. You can do so by creating an `allowed-IPs-vaultwarden.txt`, `allowed-IPs-stalwart.txt`, or `allowed-IPs-lldap.txt` file in the `nextcloud-aio-caddy` directory of your admin user and adding the IP addresses to these files.
- The container also supports the proxy protocol inside Caddy. That means that you can run a supported web server in front of port 443/tcp and use the proxy protocol. You can enable this by configuring the `APACHE_IP_BINDING` environment variable for the mastercontainer and setting it to an IP address from which the protocol shall be accepted. ⚠️ Note that the initial domain validation will not work correctly if you want to use the proxy protocol, so make sure to skip the domain validation in that case. See the [documentation](https://github.com/nextcloud/all-in-one#how-to-skip-the-domain-validation).
- If you want to use this with [vaultwarden](https://github.com/nextcloud/all-in-one/tree/main/community-containers/vaultwarden), make sure that you point `bw.your-nc-domain.com` to your server using a CNAME record so that Caddy can get a certificate automatically for vaultwarden.
- If you want to use this with [stalwart](https://github.com/nextcloud/all-in-one/tree/main/community-containers/stalwart), make sure that you point `mail.your-nc-domain.com` to your server using an A, AAAA or CNAME record so that Caddy can get a certificate automatically for stalwart.
- If you want to use this with [jellyfin](https://github.com/nextcloud/all-in-one/tree/main/community-containers/jellyfin), make sure that you point `media.your-nc-domain.com` to your server using a CNAME record so that Caddy can get a certificate automatically for jellyfin.
- If you want to use this with [lldap](https://github.com/nextcloud/all-in-one/tree/main/community-containers/lldap), make sure that you point `ldap.your-nc-domain.com` to your server using a CNAME record so that Caddy can get a certificate automatically for lldap.
- If you want to use this with [nocodb](https://github.com/nextcloud/all-in-one/tree/main/community-containers/nocodb), make sure that you point `tables.your-nc-domain.com` to your server using a CNAME record so that Caddy can get a certificate automatically for nocodb.
- If you want to use this with [jellyseerr](https://github.com/nextcloud/all-in-one/tree/main/community-containers/jellyseerr), make sure that you point `requests.your-nc-domain.com` to your server using a CNAME record so that Caddy can get a certificate automatically for jellyseerr.
- If you want to use this with [nextcloud-exporter](https://github.com/nextcloud/all-in-one/tree/main/community-containers/nextcloud-exporter), make sure that you point `metrics.your-nc-domain.com` to your server using a CNAME record so that Caddy can get a certificate automatically for nextcloud-exporter.
- If you want to use this with [local AI with vulkan support](https://github.com/nextcloud/all-in-one/tree/main/community-containers/local-ai-vulkan), make sure that you point `ai.your-nc-domain.com` to your server using a CNAME record so that Caddy can get a certificate automatically for local AI.
- After the container has been started for the first time, you should see a new `nextcloud-aio-caddy` folder and, inside it, an `allowed-countries.txt` file when you open the Files app with the default `admin` user. In there you can adjust the allowed country codes for Caddy by adding them to the first line, e.g. `IT FR` would allow access from Italy and France. Private IP ranges are always allowed. Additionally, in order to activate this config, you need to get an account at https://dev.maxmind.com/geoip/geolite2-free-geolocation-data, download the `GeoLite2-Country.mmdb` and upload it with this exact name into the `nextcloud-aio-caddy` folder. Afterwards restart all containers from the AIO interface and your new config should be active!
- You can add your own Caddy configurations in `/data/caddy-imports/` inside the Caddy container (`sudo docker exec -it nextcloud-aio-caddy bash`). These will be imported on container startup (a short sketch follows after this readme). **Please note:** If you do not have CLI access to the server, you can now run docker commands via a web session by using this community container: https://github.com/nextcloud/all-in-one/tree/main/community-containers/container-management
- See https://github.com/nextcloud/all-in-one/tree/main/community-containers#community-containers for how to add it to the AIO stack
- If you want to remove the container again and revert to the default, you need to disable the container via the AIO interface and follow https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md#8-removing-the-reverse-proxy

### Repository
https://github.com/szaimen/aio-caddy
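As a concrete illustration of the `/data/caddy-imports/` note above; a hedged sketch: the file name `my-extra-site.conf` and the site block inside it are made-up examples, and what exactly is valid in an imported file depends on how the image includes it:

```bash
# Open a shell in the Caddy container
sudo docker exec -it nextcloud-aio-caddy bash

# Inside the container: drop a custom snippet into the import directory
cat > /data/caddy-imports/my-extra-site.conf <<'EOF'
# Hypothetical extra site served by the same Caddy instance
status.example.com {
    respond "ok" 200
}
EOF
exit

# Restart the container (or restart all containers from the AIO interface) so the snippet is imported on startup
sudo docker restart nextcloud-aio-caddy
```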
@@ -2,7 +2,7 @@
This container packages calcardbackup, which is a tool that exports calendars and addressbooks from Nextcloud to .ics and .vcf files and saves them to a compressed file.

### Notes
- Backups will be created at 00:00 CEST every day. Make sure that this does not conflict with the configured daily backups inside AIO.
- Backups will be created at 00:00 UTC every day. Make sure that this does not conflict with the configured daily backups inside AIO.
- All the exports will be included in AIO's backup solution
- You can find the exports in the nextcloud_aio_calcardbackup volume
- See https://github.com/nextcloud/all-in-one/tree/main/community-containers#community-containers for how to add it to the AIO stack
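If you want to look at the exported .ics/.vcf archives without going through Nextcloud, the named volume can be inspected from the host. A minimal sketch using plain Docker commands; the throwaway `alpine` container is only used to list the volume's contents:

```bash
# Show where Docker keeps the calcardbackup volume on the host
sudo docker volume inspect nextcloud_aio_calcardbackup

# List the exported archives by mounting the volume read-only into a throwaway container
sudo docker run --rm -v nextcloud_aio_calcardbackup:/exports:ro alpine ls -lh /exports
```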
@@ -10,18 +10,21 @@
            "restart": "unless-stopped",
            "environment": [
                "TZ=%TIMEZONE%",
                "API_KEY=some-super-secret-api-key",
                "API_KEY=%FACERECOGNITION_API_KEY%",
                "FACE_MODEL=3"
            ],
            "aio_variables": [
                "nextcloud_memory_limit=2048M"
            ],
            "secrets": [
                "FACERECOGNITION_API_KEY"
            ],
            "enable_nvidia_gpu": false,
            "nextcloud_exec_commands": [
                "php /var/www/html/occ app:install facerecognition",
                "php /var/www/html/occ app:enable facerecognition",
                "php /var/www/html/occ config:system:set facerecognition.external_model_url --value nextcloud-aio-facerecognition:5000",
                "php /var/www/html/occ config:system:set facerecognition.external_model_api_key --value some-super-secret-api-key",
                "php /var/www/html/occ config:system:set facerecognition.external_model_api_key --value %FACERECOGNITION_API_KEY%",
                "php /var/www/html/occ face:setup -m 5",
                "php /var/www/html/occ face:setup -M 1G",
                "php /var/www/html/occ config:app:set facerecognition analysis_image_area --value 4320000",
community-containers/languagetool/languagetool.json (new file, 16 lines)

@@ -0,0 +1,16 @@
{
    "aio_services_v1": [
        {
            "container_name": "nextcloud-aio-languagetool",
            "display_name": "LanguageTool for Collabora",
            "documentation": "https://github.com/nextcloud/all-in-one/tree/main/community-containers/languagetool",
            "image": "erikvl87/languagetool",
            "image_tag": "latest",
            "internal_port": "8010",
            "restart": "unless-stopped",
            "environment": [
                "TZ=%TIMEZONE%"
            ]
        }
    ]
}
community-containers/languagetool/readme.md (new file, 13 lines)

@@ -0,0 +1,13 @@
## LanguageTool for Collabora
This container bundles LanguageTool for Collabora, which adds spell-checking functionality to Collabora.

### Notes
- Make sure to have Collabora enabled via the AIO interface
- After adding this container via the AIO interface, while all containers are still stopped, you need to scroll down to the `Additional Collabora options` section and enter `--o:languagetool.enabled=true --o:languagetool.base_url=http://nextcloud-aio-languagetool:8010/v2` (a connectivity check is sketched after this file).
- See https://github.com/nextcloud/all-in-one/tree/main/community-containers#community-containers for how to add it to the AIO stack

### Repository
https://github.com/Erikvl87/docker-languagetool

### Maintainer
https://github.com/szaimen
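To sanity-check the endpoint referenced in the `Additional Collabora options` bullet above, you can query LanguageTool's HTTP API from a container on the same Docker network. A hedged sketch: it assumes the Nextcloud container is named `nextcloud-aio-nextcloud` and has `curl` available, which may not hold on every setup:

```bash
# List the languages the LanguageTool service offers via its v2 API
sudo docker exec nextcloud-aio-nextcloud \
  curl -s http://nextcloud-aio-languagetool:8010/v2/languages | head -c 300
```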
@@ -27,7 +27,7 @@
                "LLDAP_JWT_SECRET",
                "LLDAP_LDAP_USER_PASS"
            ],
            "ui_secret": "LLDAP_JWT_SECRET",
            "ui_secret": "LLDAP_LDAP_USER_PASS",
            "volumes": [
                {
                    "source": "nextcloud_aio_lldap",
@@ -18,10 +18,7 @@ Functionality with this configuration:

> For simplicity, this configuration is done via the command line (don't worry, it's very simple).

First, you need to retrieve the LLDAP admin password; it will be used later on, and you need to type it in or copy and paste it:
```bash
sudo docker inspect nextcloud-aio-lldap | grep LLDAP_LDAP_USER_PASS
```
First, you need to retrieve the LLDAP admin password that you can see next to the container in the AIO interface. There you can configure SMTP first and then invite users via email.

Now go into the Nextcloud container:<br>
**Please note:** If you do not have CLI access to the server, you can now run docker commands via a web session by using this community container: https://github.com/nextcloud/all-in-one/tree/main/community-containers/container-management. The script below can be run from inside the container-management container via `bash /lldap.sh`.
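The "go into the Nextcloud container" step above is a plain `docker exec`; a minimal sketch, assuming the default AIO container name `nextcloud-aio-nextcloud` (not shown in this hunk):

```bash
# Open a shell inside the Nextcloud container to run the LDAP setup commands
sudo docker exec -it nextcloud-aio-nextcloud bash
```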
@@ -5,7 +5,7 @@ This container bundles MakeMKV and auto-configures it for you.
- This container should only be run in home networks
- ⚠️ This container mounts all devices from the host inside the container in order to be able to access the external DVD/Blu-ray drives, which is a security issue. However, no better solution was found for the time being.
- This container only works on Linux and not on Docker-Desktop.
- This container requires the [`NEXTCLOUD_MOUNT` variable in AIO to be set](https://github.com/nextcloud/all-in-one?tab=readme-ov-file#how-to-allow-the-nextcloud-container-to-access-directories-on-the-host). Otherwise the output will not be saved correctly.
- This container requires the [`NEXTCLOUD_MOUNT` variable in AIO to be set](https://github.com/nextcloud/all-in-one#how-to-allow-the-nextcloud-container-to-access-directories-on-the-host). Otherwise the output will not be saved correctly.
- After adding and starting the container, you need to visit `https://internal.ip.of.server:5802` in order to log in with the `makemkv` user and the password that you can see next to the container in the AIO interface. (The web page uses a self-signed certificate, so you need to accept the warning.)
- After the first login, you can adjust the `/output` directory in the MakeMKV settings to a subdirectory of the root of your chosen `NEXTCLOUD_MOUNT`. (By default, `NEXTCLOUD_MOUNT` is mounted to `/output` inside the container, so all data is written to its root.)
- The configured `NEXTCLOUD_DATADIR` is mounted to `/storage` inside the container.
community-containers/minio/minio.json (new file, 41 lines)

@@ -0,0 +1,41 @@
{
    "aio_services_v1": [
        {
            "container_name": "nextcloud-aio-minio",
            "image_tag": "v2",
            "display_name": "Minio S3 Storage",
            "documentation": "https://github.com/nextcloud/all-in-one/tree/main/community-containers/minio",
            "image": "ghcr.io/szaimen/aio-minio",
            "internal_port": "9000",
            "environment": [
                "MINIO_ROOT_USER=nextcloud",
                "MINIO_ROOT_PASSWORD=%MINIO_ROOT_PASSWORD%"
            ],
            "secrets": [
                "MINIO_ROOT_PASSWORD"
            ],
            "volumes": [
                {
                    "source": "nextcloud_aio_minio",
                    "destination": "/data",
                    "writeable": true
                }
            ],
            "backup_volumes": [
                "nextcloud_aio_minio"
            ],
            "nextcloud_exec_commands": [
                "php /var/www/html/occ config:system:set objectstore class --value 'OC\\Files\\ObjectStore\\S3'",
                "php /var/www/html/occ config:system:set objectstore arguments autocreate --value true --type bool",
                "php /var/www/html/occ config:system:set objectstore arguments use_path_style --value true --type bool",
                "php /var/www/html/occ config:system:set objectstore arguments use_ssl --value false --type bool",
                "php /var/www/html/occ config:system:set objectstore arguments region --value ''",
                "php /var/www/html/occ config:system:set objectstore arguments bucket --value nextcloud",
                "php /var/www/html/occ config:system:set objectstore arguments key --value nextcloud",
                "php /var/www/html/occ config:system:set objectstore arguments secret --value %MINIO_ROOT_PASSWORD%",
                "php /var/www/html/occ config:system:set objectstore arguments port --value 9000",
                "php /var/www/html/occ config:system:set objectstore arguments hostname --value nextcloud-aio-minio"
            ]
        }
    ]
}
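The `nextcloud_exec_commands` above write the S3 object-store settings into Nextcloud's configuration. A hedged sketch for verifying the result once the containers are running; it assumes the default AIO container name `nextcloud-aio-nextcloud` and that `occ` lives at `/var/www/html/occ` as in the commands above:

```bash
# Print the objectstore section that the exec commands configured
sudo docker exec --user www-data nextcloud-aio-nextcloud \
  php /var/www/html/occ config:system:get objectstore
```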
community-containers/minio/readme.md (new file, 18 lines)

@@ -0,0 +1,18 @@
## Minio
This container bundles Minio S3 storage and auto-configures it for you.

> [!WARNING]
> Enabling this container will remove access to all the files formerly written to the data directory.
> So only enable this on a clean instance directly after installing AIO.
> All additional users that are added via Nextcloud afterwards are going to work correctly.
> Also, after enabling and using it, make sure not to disable the container, as you cannot migrate from S3 to local storage anymore and S3 is a critical part of your infrastructure from then on.

### Notes
- The data of Minio will be automatically included in AIO's backup solution!
- See https://github.com/nextcloud/all-in-one/tree/main/community-containers#community-containers for how to add it to the AIO stack

### Repository
https://github.com/szaimen/aio-minio

### Maintainer
https://github.com/szaimen
community-containers/nextcloud-exporter/nextcloud-exporter.json (new file, 35 lines)

@@ -0,0 +1,35 @@
{
    "aio_services_v1": [
        {
            "container_name": "nextcloud-aio-nextcloud-exporter",
            "display_name": "Prometheus Nextcloud Exporter",
            "documentation": "https://github.com/nextcloud/all-in-one/tree/main/community-containers/nextcloud-exporter",
            "image": "ghcr.io/xperimental/nextcloud-exporter",
            "image_tag": "0.9.0",
            "internal_port": "9205",
            "restart": "unless-stopped",
            "ports": [
                {
                    "ip_binding": "127.0.0.1",
                    "port_number": "9205",
                    "protocol": "tcp"
                }
            ],
            "environment": [
                "TZ=%TIMEZONE%",
                "NEXTCLOUD_SERVER=https://%NC_DOMAIN%",
                "NEXTCLOUD_AUTH_TOKEN=%NEXTCLOUD_EXPORTER_TOKEN%",
                "NEXTCLOUD_LISTEN_ADDRESS=0.0.0.0:9205",
                "NEXTCLOUD_TIMEOUT=5s"
            ],
            "ui_secret": "NEXTCLOUD_EXPORTER_CADDY_PASSWORD",
            "secrets": [
                "NEXTCLOUD_EXPORTER_TOKEN",
                "NEXTCLOUD_EXPORTER_CADDY_PASSWORD"
            ],
            "nextcloud_exec_commands": [
                "php /var/www/html/occ config:app:set serverinfo token --value %NEXTCLOUD_EXPORTER_TOKEN%"
            ]
        }
    ]
}
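Since the `ports` entry above binds the exporter to `127.0.0.1:9205`, the metrics endpoint can be checked locally on the host once the container is running; a minimal sketch:

```bash
# Fetch the first few exported metrics from the locally bound exporter
curl -s http://127.0.0.1:9205/metrics | head -n 20
```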
community-containers/nextcloud-exporter/readme.md (new file, 72 lines)

@@ -0,0 +1,72 @@
## Prometheus Nextcloud Exporter

A Prometheus exporter that collects metrics from your Nextcloud instance for monitoring and alerting.

### How to install

See the [Community Containers documentation](https://github.com/nextcloud/all-in-one/tree/main/community-containers#community-containers) for instructions on how to install this in your Nextcloud All-in-One setup.

### Security & Access

**Important:** This container is configured to bind only to `127.0.0.1` (localhost) for security reasons. Prometheus exporters typically don't include authentication, so direct network exposure is not recommended.

#### Access Options

1. **With Caddy Container (Recommended)**: If you also install the [Caddy community container](https://github.com/nextcloud/all-in-one/tree/main/community-containers/caddy), it will automatically configure secure HTTPS access to your metrics with authentication at `metrics.your-domain.com`.

   **Getting Authentication Credentials**:
   - **Username**: Always `metrics`
   - **Password**: After deploying the nextcloud-exporter container, the automatically generated password will be displayed in the AIO interface. Look for it in the container section below the container name "Prometheus Nextcloud Exporter".

2. **Custom Reverse Proxy**: Set up your own reverse proxy (nginx, Apache, etc.) to provide HTTPS and authentication. See configuration guides:
   - [NGINX Authentication](https://nginx.org/en/docs/http/ngx_http_auth_basic_module.html) + [Reverse Proxy](https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/)
   - [Apache Authentication](https://httpd.apache.org/docs/2.4/howto/auth.html) + [Reverse Proxy](https://httpd.apache.org/docs/2.4/mod/mod_proxy.html)
   - [Traefik BasicAuth](https://doc.traefik.io/traefik/middlewares/http/basicauth/)
   - [Prometheus Security Best Practices](https://prometheus.io/docs/operating/security/)

3. **Direct Local Access**: Access metrics directly from the server at `http://127.0.0.1:9205/metrics` (no authentication)

### What it monitors
- User activity (active users hourly, daily)
- File counts and storage usage
- System health and database size
- App statistics and update availability
- Nextcloud performance metrics

### Prometheus Configuration

For **local server access** (if Prometheus runs on the same server):
```yaml
scrape_configs:
  - job_name: 'nextcloud'
    scrape_interval: 90s
    static_configs:
      - targets: ['127.0.0.1:9205']
    metrics_path: /metrics
    scheme: http
```

For **Caddy integration** (secure external access):
```yaml
scrape_configs:
  - job_name: 'nextcloud'
    scrape_interval: 90s
    static_configs:
      - targets: ['metrics.your-domain.com']
    metrics_path: /
    scheme: https
    basic_auth:
      username: 'metrics'
      password: 'your-generated-password'
```

### Visualization

Compatible with Grafana for creating monitoring dashboards:
- Pre-built dashboard available: [Grafana Dashboard #20716](https://grafana.com/grafana/dashboards/20716-nextcloud/)

### Repository
https://github.com/xperimental/nextcloud-exporter

### Maintainer
https://github.com/grotax
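For the Caddy-protected variant, the same check works remotely with HTTP basic auth, matching the `basic_auth` block in the scrape config above; the domain and password are placeholders:

```bash
# Fetch metrics through the Caddy reverse proxy (HTTPS + basic auth, user is always "metrics")
curl -s -u metrics:your-generated-password https://metrics.your-domain.com/ | head -n 20
```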
community-containers/notifications/notifications.json (new file, 23 lines)

@@ -0,0 +1,23 @@
{
    "aio_services_v1": [
        {
            "container_name": "nextcloud-aio-notifications",
            "display_name": "Notifications",
            "documentation": "https://github.com/nextcloud/all-in-one/tree/main/community-containers/notifications",
            "image": "ghcr.io/szaimen/aio-notifications",
            "image_tag": "v1",
            "internal_port": "10000",
            "restart": "unless-stopped",
            "volumes": [
                {
                    "source": "%WATCHTOWER_DOCKER_SOCKET_PATH%",
                    "destination": "/var/run/docker.sock",
                    "writeable": false
                }
            ],
            "environment": [
                "TZ=%TIMEZONE%"
            ]
        }
    ]
}
community-containers/notifications/readme.md (new file, 12 lines)

@@ -0,0 +1,12 @@
## Notifications
This container allows other AIO community containers to send admin notifications to Nextcloud users.

### Notes
- It needs to be enabled for the [scrutiny container](https://github.com/nextcloud/all-in-one/tree/main/community-containers/scrutiny), for example, to make use of the admin notifications that are sent if a smartctl failure was found.
- See https://github.com/nextcloud/all-in-one/tree/main/community-containers#community-containers for how to add it to the AIO stack

### Repository
https://github.com/szaimen/aio-notifications

### Maintainer
https://github.com/szaimen
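The notifications generated by this container show up as regular Nextcloud notifications. If you want to see what one looks like, a test notification can be generated manually; a hedged sketch that assumes the Notifications app's `occ notification:generate` command is available in your Nextcloud version and that the Nextcloud container is named `nextcloud-aio-nextcloud`:

```bash
# Send a test notification to the user "admin"
sudo docker exec --user www-data nextcloud-aio-nextcloud \
  php /var/www/html/occ notification:generate admin "Test notification from the AIO stack"
```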
@@ -6,7 +6,7 @@ This container bundles Scrutiny which is a frontend for SMART stats and auto-con
- ⚠️ This container mounts all devices from the host inside the container in order to be able to access the drives and smartctl stats, which is a security issue. However, no better solution was found for the time being.
- This container only works on Linux and not on Docker-Desktop.
- After adding and starting the container, you need to visit `http://internal.ip.of.server:8000`, which will show the dashboard for your drives.
- It currently does not support sending notifications, as no good solution was found yet that makes this possible. See https://github.com/szaimen/aio-scrutiny/issues/3
- It supports sending notifications in case of a smartctl failure if you enable the notifications community container: https://github.com/nextcloud/all-in-one/tree/main/community-containers/notifications
- See https://github.com/nextcloud/all-in-one/tree/main/community-containers#community-containers for how to add it to the AIO stack

### Repository
@@ -5,7 +5,7 @@
            "display_name": "Scrutiny",
            "documentation": "https://github.com/nextcloud/all-in-one/tree/main/community-containers/scrutiny",
            "image": "ghcr.io/szaimen/aio-scrutiny",
            "image_tag": "v1",
            "image_tag": "v2",
            "internal_port": "8000",
            "init": false,
            "restart": "unless-stopped",
@@ -3,7 +3,6 @@ This container bundles an SMB-server and allows to configure it via a graphical

### Notes
- This container should only be run in home networks
- This container currently only works on amd64. See https://github.com/szaimen/aio-smbserver/issues/3
- After adding and starting the container, you need to visit `https://internal.ip.of.server:5803` in order to log in with the `smbserver` user and the password that you can see next to the container in the AIO interface. (The web page uses a self-signed certificate, so you need to accept the warning.) Then type in `bash /smbserver.sh` and you will see a graphical UI for configuring the SMB server interactively.
- The config data of the SMB server will be automatically included in AIO's backup solution!
- See https://github.com/nextcloud/all-in-one/tree/main/community-containers#community-containers for how to add it to the AIO stack
@@ -48,7 +48,8 @@
            "environment": [
                "TZ=%TIMEZONE%",
                "NC_DOMAIN=%NC_DOMAIN%",
                "STALWART_USER_PASS=%STALWART_USER_PASS%"
                "STALWART_USER_PASS=%STALWART_USER_PASS%",
                "CLAMAV_ENABLED=%CLAMAV_ENABLED%"
            ],
            "secrets": [
                "STALWART_USER_PASS"