
How to deploy containers on a QNAP NAS

I was recently asked what I use my NAS for, so let me explain why it might become an essential piece of your home IT infrastructure. I recently bought a QNAP NAS (TS-231P3) to solve the issue of having multiple computers but no central place to save the data. Time Machine backups run silently in the background without the need to plug in disks. Libraries like movies, pictures, music and books are centrally accessible. You also have a central hub for your smart home extensions. To put it in a nutshell, a NAS can be one central building block of your personal AI assistant.

A typical and cheap solution for all these needs is to buy a Raspberry Pi. By default it uses the SD card for storage, which is not durable and has slow read and write performance. You can use an external drive, but all in all it is much more comfortable to use an existing system.

I also thought that the option to have 2.5 Gbit/s Ethernet would be future proof, and I was about to lay cables in my new flat anyway. At first I thought that changing the Cat 5e Ethernet cables (max 1 Gbit/s) to Cat 6 would be enough. Turns out: 2.5 Gbit/s needs more power and also dedicated hardware, which is quite expensive (cheapest options ca. 109 €, compared to 19 € for 1 Gbit/s). The unexpected thing is that Wi-Fi is potentially faster than this Ethernet, allowing 1.3 Gbit/s with 802.11ac ("Wi-Fi 5"). Unless you really need the speed or plug your computer directly into the NAS, save the money.

This mid-level model can run containers. Neat. So I can quickly deploy any software that has been packaged into an image, by others or by me. Docker is often advertised as a tool that solves the 'works on my machine' problem, where software runs on the developer's machine but not on another person's. Unfortunately, this only holds for machines using the same instruction set. Docker is not a virtualization tool: code runs directly on the host kernel (on macOS it needs a VM to emulate Linux).

Unfortunately, this comes with a catch. While ARM processors are heavily used in mobile phones, they are now also found in servers, desktop computers and other devices like NAS. This NAS uses an ARM32 processor, so images built on an Intel computer won't run on it.
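You can check which architecture you are dealing with directly on the NAS (via SSH) or through the Docker daemon; a minimal sketch, with the exact output depending on your model:

```shell
# Print the CPU architecture reported by the kernel
uname -m                                   # e.g. "armv7l" on a 32-bit ARM NAS, "x86_64" on a typical desktop

# Ask the Docker daemon which architecture it runs on
docker info --format '{{.Architecture}}'
```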

Here is a guide on how to run your own containers.

1. Install "container station" on your NAS.


I tried loading the built image directly using the import tool. However, it failed and there were no helpful error messages.

2. Run a hello world

I tried running the hello-world image by importing it. I had previously pulled it on my machine and exported it using docker save hello-world -o helloworld.tar. The import completes, but when you then create a container from it, it fails with "exec format error". The same issue happens when you download it via the image browser. There is a platform-specific image called "arm32v7/hello-world", however it does not appear when you search for it. By using docker compose with image: "arm32v7/hello-world" and giving the application a title without dashes, you can run it (see the compose sketch below).
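The compose file for Container Station could look roughly like this; a minimal sketch, where the application name "helloworld" is just an example (as noted, avoid dashes in it):

```yaml
# Minimal compose file for Container Station
helloworld:                       # application name without dashes
  image: "arm32v7/hello-world"    # ARM32-specific variant of the hello-world image
```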

3. Build your own image for ARM32

In my example I use the Linux distribution Ubuntu as the base image, using the arm32 build.
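Such a Dockerfile could start roughly like this; a sketch, where the Ubuntu tag and the installed packages are only examples:

```dockerfile
# Dockerfile_arm (sketch): use the ARM32 variant of Ubuntu as the base image
FROM arm32v7/ubuntu:20.04

# RUN steps like this fail on an x86 host until QEMU emulation is set up (see below)
RUN apt-get update && apt-get install -y build-essential
```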

As soon as you build the Dockerfile, it fails when executing the RUN steps. This was solved with sudo apt-get install qemu-user-static

This makes the ARM code run under QEMU user-mode emulation. Because of the emulation, compiling is super slow and it only uses one CPU core.
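Depending on the host, the QEMU binfmt handlers may also need to be registered with the kernel so Docker can transparently execute ARM binaries. A common way to do this, assuming you use the multiarch helper image, is:

```shell
# Register QEMU binfmt handlers for foreign architectures (run once on the build host)
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# Verify that an ARM32 image can now be executed on the x86 host
docker run --rm arm32v7/hello-world
```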

You can then use buildx, Docker's (back then experimental) build tool, to build for a different platform or even multiple platforms in one image. The software I built uses Ubuntu as the base image. The official Ubuntu base image is not a multi-platform image, so I specified a different base image for arm32v7 in a second Dockerfile. So now I have different Dockerfiles (here Dockerfile_arm), which makes the command to build and push to Docker Hub look like this: sudo docker buildx build --platform linux/arm/v7 -f Dockerfile_arm -t bsvogler/molovol:latest --push .
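Before the first cross-build, buildx typically needs a builder instance that supports the target platform; a sketch of the usual setup, with an arbitrary builder name:

```shell
# Create and select a buildx builder capable of cross-platform builds
docker buildx create --name armbuilder --use

# Start the builder and list the platforms it supports
docker buildx inspect --bootstrap
```

After that, the build-and-push command from above can be run as shown.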

For my image I needed wxWidgets installed. AFAIK there are no wxWidgets binaries for Linux, so you have to compile them yourself in the image. Literally 2.1 hours later... It also needs the Rust compiler in order to install Python Poetry. In the end I succeeded in running this custom service on the QNAP.
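The corresponding Dockerfile steps could look roughly like this; a sketch assuming wxWidgets 3.1.5 with a GTK build, where the version, the configure flags and the Poetry installation are illustrative rather than the exact setup used:

```dockerfile
# Sketch: compile wxWidgets from source inside the ARM32 image
RUN apt-get update && apt-get install -y build-essential libgtk-3-dev wget \
    && wget https://github.com/wxWidgets/wxWidgets/releases/download/v3.1.5/wxWidgets-3.1.5.tar.bz2 \
    && tar xjf wxWidgets-3.1.5.tar.bz2 \
    && cd wxWidgets-3.1.5 \
    && ./configure --disable-shared \
    && make -j"$(nproc)" \
    && make install

# Rust is needed so that Python Poetry (and its native dependencies) can be installed
RUN apt-get install -y rustc cargo python3-pip \
    && pip3 install poetry
```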

tl;dr: If the image supports arm32 (and most popular images should), it is rather easy to deploy services at home. If not, it is still possible, but you need some more time to build your own images.