Migrating Nextcloud from my Synology DiskStation to a dedicated Docker server

My Synology DiskStation has been running Nextcloud for years. It worked, but it never felt right. The DiskStation is a NAS: reliable, power-efficient, and excellent at storing and serving files. Running a full Nextcloud stack on top of that, with MariaDB, Redis, a cron container, and OnlyOffice, was always a bit of a stretch.

Performance was the main driver. The DiskStation’s CPU would visibly struggle during peak usage. Nextcloud felt sluggish in a way that a proper server shouldn’t.

The second reason was separation of concerns. A NAS should be a NAS. An application server should be an application server. Mixing both on the same device means you’re always making compromises, and when something goes wrong it’s harder to diagnose.

When I built my server, I called it Nexus. It’s a Minisforum mini PC with a Ryzen 9 PRO 8945HS and it acts as a dedicated home server. After moving Emby and Matomo to it, moving Nextcloud was an obvious next step. The machine has 16GB DDR5, a 1TB NVMe SSD, and runs Ubuntu 24.04 LTS. More than enough for a personal Nextcloud instance.

Crucially, the media and documents stay on the DiskStation. The NAS still does its job. Nexus mounts the shares via NFS and Nextcloud accesses them as external storage. Best of both worlds.

What the source setup looked like

On the DiskStation, Nextcloud ran as a Docker stack with four containers:

  • Nextcloud: the main Nextcloud container (Apache + PHP)
  • Nextcloud-CRON: a dedicated cron container sharing the same volumes
  • Nextcloud-REDIS: Redis for session handling and distributed cache
  • Nextcloud-MariaDB: A MariaDB 11.4 container for the database

Version: Nextcloud 33.0.0, the latest release at the time of migration.

The data was spread across several mounted directories on the DiskStation:

Path                                      Contents
/volume1/docker/nextcloud/html            Nextcloud app files
/volume1/docker/nextcloud/data            User data
/volume1/docker/nextcloud/config          Configuration
/volume1/docker/nextcloud/custom_apps     Custom apps
/volume1/docker/nextcloud/themes          Themes
/volume1/docker/nextcloud/php/            Custom PHP config files

Additionally, three external storage mounts were configured inside Nextcloud pointing to NAS shares (images, video, and documents). These didn’t need migrating since the NAS shares are accessible from nexus.home via NFS.

The migration process

Step 1: Inventory and preparation

Before touching anything, I mapped out exactly what needed to move and what didn’t. The key insight: the 57GB data/ directory sounds large, but it’s mostly Nextcloud’s internal file cache and thumbnails. The actual user files (photos, documents, videos) live on the NAS and are accessed via external storage mounts, so no need to copy those.
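A quick way to get those numbers is du over the Docker directories on the DiskStation (a simple sketch, using the paths from the table above):

```shell
# Size up each directory before deciding what actually needs to move
du -sh /volume1/docker/nextcloud/*
```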

Step 2: Enable maintenance mode

Always start here. This prevents any writes to the database or filesystem during the migration:

docker exec -u www-data Nextcloud php occ maintenance:mode --on

Step 3: Database dump

docker exec Nextcloud-MariaDB mariadb-dump -u nextcloud -pPASSWORD nextcloud > nextcloud-backup.sql

The dump came out at 212MB for my instance.
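Before moving on, it’s worth sanity-checking the dump file: mariadb-dump appends a completion comment at the end of a successful dump, so a truncated file is easy to spot (a sketch; the exact wording of the marker can vary between versions):

```shell
# A successful dump ends with a "-- Dump completed on ..." comment
tail -n 1 nextcloud-backup.sql
# And the file size should be in the expected ballpark
ls -lh nextcloud-backup.sql
```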

Step 4: Copy the files

I transferred everything to Nexus via SFTP from the DiskStation. The transfer covered html, data, config, custom_apps, themes, and the PHP config files. About 58GB in total.
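For a transfer this size, rsync over SSH is a good alternative to SFTP: it preserves ownership and permissions and can resume an interrupted copy. A sketch, assuming SSH access to the DiskStation (the user and hostname here are placeholders):

```shell
# Pull the Nextcloud directories from the DiskStation; -a preserves
# permissions and timestamps, --partial lets an interrupted run resume
rsync -a --partial --info=progress2 \
  admin@diskstation.home:/volume1/docker/nextcloud/ \
  /opt/docker/nextcloud/
```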

Step 5: Prepare the target

On Nexus, I created the folder structure and wrote the docker-compose.yml to match the DiskStation setup as closely as possible: same image versions, same volume structure, same Redis and cron containers.

The NFS shares were already mounted on nexus.home (for Emby), so the external storage paths were available.
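For reference, a minimal sketch of what the Nextcloud side of such a docker-compose.yml can look like. This is not my exact file: image tags, ports, passwords, and the host-side paths are placeholders, and the cron entrypoint follows the pattern documented for the official Nextcloud image.

```yaml
services:
  nextcloud:
    image: nextcloud:apache
    container_name: nextcloud
    restart: always
    ports:
      - '8080:80'
    volumes:
      - ./html:/var/www/html
      - ./data:/var/www/html/data
      - ./config:/var/www/html/config
      - ./custom_apps:/var/www/html/custom_apps
      - ./themes:/var/www/html/themes
      # NFS shares already mounted on the host, exposed for external storage
      - /mnt/images:/mnt/images
      - /mnt/documents:/mnt/documents
      - /mnt/video:/mnt/video
    depends_on:
      - nextcloud-db
      - nextcloud-redis

  nextcloud-cron:
    image: nextcloud:apache
    container_name: nextcloud-cron
    restart: always
    entrypoint: /cron.sh        # documented cron pattern for the official image
    volumes:
      - ./html:/var/www/html
      - ./data:/var/www/html/data
      - ./config:/var/www/html/config
    depends_on:
      - nextcloud-db
      - nextcloud-redis

  nextcloud-redis:
    image: redis:alpine
    container_name: nextcloud-redis
    restart: always

  nextcloud-db:
    image: mariadb:11.4
    container_name: nextcloud-db
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=changeme
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=changeme
    volumes:
      - ./db:/var/lib/mysql
```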

Step 6: Import the database

Start just the database container first, wait for it to initialise, then import:

docker compose up -d nextcloud-db
# Wait ~30 seconds
docker exec -i nextcloud-db mariadb -u nextcloud -pPASSWORD nextcloud < nextcloud-backup.sql

After this, 202 tables were imported successfully.
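A quick way to confirm the import without logging into the database shell (a sketch, using the container and credentials from the commands above):

```shell
# Count the imported tables; should match what the dump contained
docker exec nextcloud-db mariadb -u nextcloud -pPASSWORD -e \
  "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = 'nextcloud';"
```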

Step 7: Fix permissions and start

sudo chown -R www-data:www-data /opt/docker/nextcloud/{html,data,config,custom_apps,themes}
docker compose up -d
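Once the stack is up and config.php points at the new environment, don’t forget to leave maintenance mode, which was switched on back in step 2:

```shell
docker exec -u www-data nextcloud php occ maintenance:mode --off
```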

Things that went wrong

No migration goes perfectly. Here’s what I ran into.

Wrong database host in config.php

The migrated config.php still had the old container name (Nextcloud-MariaDB) as the database host. On the DiskStation, that was the container name. On Nexus, the new container is called nextcloud-db. A quick edit to config.php fixed it.
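The relevant line in config/config.php, shown here as a sketch:

```php
// config/config.php — point dbhost at the new container name
'dbhost' => 'nextcloud-db',
```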

The login loop mystery

After starting everything up, I could reach the Nextcloud login page on the Nexus IP and port, but logging in just redirected back to the login page in an endless loop. No error message, nothing useful in the logs at first glance.

The fix was found by opening the browser’s developer console. There it was:

Cookie "oc_sessionPassphrase" has been rejected because a non-HTTPS 
cookie can't be set as "secure".

The original Nextcloud instance ran behind an HTTPS reverse proxy. The session cookies were set with the secure flag, meaning browsers only accept them over HTTPS. On the local network over plain HTTP, every cookie was silently rejected, making login impossible. The fix was simple once identified.

I edited the DiskStation reverse proxy so it forwards requests to Nexus, which let me log in over HTTPS using the domain name.
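When Nextcloud sits behind an HTTPS-terminating proxy like this, it also helps to tell it so explicitly. These are standard occ settings; the domain here is a placeholder:

```shell
# Tell Nextcloud it is served over HTTPS at the public domain
docker exec -u www-data nextcloud php occ config:system:set overwriteprotocol --value="https"
docker exec -u www-data nextcloud php occ config:system:set overwrite.cli.url --value="https://cloud.mydomain.com"
# Make sure the domain is in trusted_domains (index 1 is an example slot)
docker exec -u www-data nextcloud php occ config:system:set trusted_domains 1 --value="cloud.mydomain.com"
```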

Lesson learned: always check the browser console when a web app behaves mysteriously.

External storage paths

After logging in, the images, video, and documents folders were missing from the file browser. The external storage mounts were configured with the old DiskStation paths (/media/mnt/foldername/) which don’t exist on Nexus.

Update them via occ:

docker exec -u www-data nextcloud php occ files_external:config 1 datadir /mnt/images
docker exec -u www-data nextcloud php occ files_external:config 2 datadir /mnt/documents
docker exec -u www-data nextcloud php occ files_external:config 3 datadir /mnt/video

These paths are the container-internal mount points (not the host NFS paths), so make sure your docker-compose.yml volume mounts match.
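To find the right mount IDs (the 1, 2, 3 above) and to confirm the new paths actually resolve, occ has commands for both (a sketch):

```shell
# List mount IDs and their current configuration
docker exec -u www-data nextcloud php occ files_external:list
# Verify that a mount is reachable after editing its path
docker exec -u www-data nextcloud php occ files_external:verify 1
```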

OnlyOffice

OnlyOffice runs as a container and integrates with Nextcloud for document editing. Rather than maintaining a separate docker-compose.yml for it, I added it directly to the Nextcloud stack, which automatically puts it on the same Docker network and makes container name resolution work out of the box.

Here’s my OnlyOffice service definition in the Nextcloud docker-compose.yml:

onlyoffice:
  image: onlyoffice/documentserver:latest
  container_name: onlyoffice
  restart: always
  ports:
    - '8083:80'
  extra_hosts:
    - "cloud.mydomain.com:192.168.1.14"
  environment:
    - JWT_ENABLED=false
    - JWT_SECRET=yourjwtsecret
    - JWT_HEADER=Authorization
    - ALLOW_PRIVATE_IP_ADDRESS=true
    - USE_ASSETS_ADDR=true
  volumes:
    - ./onlyoffice_data:/var/www/onlyoffice/Data
    - ./onlyoffice_logs:/var/log/onlyoffice
    - ./onlyoffice_fonts:/usr/share/fonts/truetype/custom
  depends_on:
    - nextcloud

A few things worth explaining here:

extra_hosts adds a static host entry inside the container so that cloud.mydomain.com resolves to the local IP of Nexus rather than going out through the internet. This is needed because OnlyOffice needs to reach Nextcloud to download documents for conversion, and hairpin NAT (going out to the internet and back in) is unreliable for this.
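Whether the extra_hosts entry works as intended is easy to verify from inside the container (a sketch; this assumes Nextcloud, or a proxy in front of it, answers on port 80 at that LAN IP):

```shell
# The domain should resolve to the LAN IP, not a public address
docker exec onlyoffice getent hosts cloud.mydomain.com
# And OnlyOffice should be able to reach Nextcloud's status endpoint
docker exec onlyoffice curl -s http://cloud.mydomain.com/status.php
```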

ALLOW_PRIVATE_IP_ADDRESS=true and USE_ASSETS_ADDR=true allow OnlyOffice to communicate with private IP addresses, which is necessary for the internal container-to-container traffic.

JWT is disabled. After extensive debugging (matching secrets, checking headers, verifying local.json), I ended up disabling JWT entirely. For a home setup where OnlyOffice is not publicly exposed (it sits behind a reverse proxy with HTTPS on the outside), JWT on the internal communication is unnecessary overhead. So I set JWT_ENABLED=false and moved on.

After starting the stack, set the internal URLs in Nextcloud so traffic routes through the container network rather than the internet:

docker exec -u www-data nextcloud php occ config:app:set onlyoffice \
  DocumentServerInternalUrl --value="http://onlyoffice/"
docker exec -u www-data nextcloud php occ config:app:set onlyoffice \
  StorageUrl --value="http://nextcloud/"
docker exec -u www-data nextcloud php occ config:system:set \
  allow_local_remote_servers --value=1 --type=integer
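With the URLs set, the document server’s built-in healthcheck endpoint is a quick way to confirm it is up, using the 8083 port mapping from the compose file above:

```shell
# Returns "true" once the document server has fully started
curl -s http://localhost:8083/healthcheck
```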

The result

Nextcloud is running smoothly on Nexus. The DiskStation is back to being a NAS. Performance is noticeably better. The Ryzen 9 8945HS handles Nextcloud without any visible effort, and the NVMe SSD makes the database operations snappy.

Key takeaways

  • Keep your NAS as a NAS. Mount the shares on your application server instead of running everything on the same device.
  • Check the browser console when a web app misbehaves. The cookie rejection messages were the key to solving the login loop.
  • config.php needs updating after every migration. Database host, trusted domains, overwrite protocol, external storage paths.
  • Internal Docker networking matters. Put containers that need to communicate on the same network and use container names, not external URLs.
  • JWT between OnlyOffice and Nextcloud is tricky. For a home setup behind a reverse proxy and firewall, disabling it is a reasonable choice.

About Marcel Bootsman

Marcel discovered the web in 1995. Since then he has worked with a wide range of technologies and founded his own WordPress-oriented business, nostromo.nl, in 2009.

Currently Marcel is Partnerships & Community Manager EMEA at Kinsta, where he helps clients and partners grow their business with Managed WordPress Hosting.

You can contact Marcel on a diverse range of online platforms. Please see the Connect section on the homepage for the details.
