Building Docker containers can be time-consuming, especially when a container downloads packages to set up a build or execution environment for your applications.
This article gathers tips and tricks to make your life easier - and faster:
This article complies with the Typographic Conventions for Torizon Documentation.
If you run apt-get or another package manager to download and install packages and then change the package list, Docker will re-run the whole command and repeat all the downloads, taking time and connection bandwidth. This issue can be mitigated by configuring a proxy for your Docker containers: the proxy caches download requests and avoids downloading the same packages multiple times.
Since you already have Docker installed on your machine, it is easy to run the proxy inside a container:
Create a local folder on your machine to store the configuration files and the cache for your proxy and enter that folder:
$ mkdir squid && cd squid
Create two sub-folders named "cfg" and "cache":
$ mkdir cfg cache
Download the squid container:
$ docker pull woahbase/alpine-squid:x86_64
Run it for the first time to populate the configuration folder:
$ docker run -it --rm -v $(pwd)/cfg:/etc/squid -v $(pwd)/cache:/var/cache/squid woahbase/alpine-squid:x86_64
This will print out some messages. When squid startup has completed, press
Ctrl+C and the container will shut down. You will notice that a file has been created under cfg/squid.conf.
See a sample configuration for the proxy below. It's a long file, so click on the collapsible section to see it:
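As an illustration, the directives most relevant to package caching typically look like this (the values below are example assumptions, not the container's actual defaults):

```
# Port squid listens on (must match the -p 3128:3128 port mapping)
http_port 3128

# On-disk cache: up to 10000 MB under /var/cache/squid
cache_dir aufs /var/cache/squid 10000 16 256

# Allow large package downloads to be cached
maximum_object_size 512 MB
```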
Run the proxy container:
$ docker run -d \
    --restart always \
    --name squid --hostname squid \
    -c 256 -m 256m \
    -e PGID=1000 -e PUID=1000 \
    -p 3128:3128 -p 3129:3129 \
    -v $(pwd)/cfg:/etc/squid \
    -v $(pwd)/cache:/var/cache/squid \
    -v /etc/hosts:/etc/hosts:ro \
    -v /etc/localtime:/etc/localtime:ro \
    woahbase/alpine-squid:x86_64
Create a local file on your PC under $HOME/.docker/config.json:
$ touch $HOME/.docker/config.json
Add your proxy IP address to the file. Don't use your machine's hostname, because it may not be resolved correctly inside a container:
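A minimal sketch of such a file, assuming the proxy runs at 192.168.1.10 (replace this with your machine's actual IP address) and listens on port 3128:

```json
{
  "proxies": {
    "default": {
      "httpProxy": "http://192.168.1.10:3128",
      "httpsProxy": "http://192.168.1.10:3128"
    }
  }
}
```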
From now on, you should notice that packages are downloaded only once and your builds will be much faster.
When creating a Node.js project, you will most likely describe the project in a
package.json file, including the project's dependencies. Check out how to keep
npm install from being run every time you modify your source code.
For this example, let's use the hypothetical
package.json below, describing an Express.js REST API that uses SQLite for storage:
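A hypothetical package.json matching that description (the package names are real, but the project name, version, and entry point are made up for the example):

```json
{
  "name": "my-rest-api",
  "version": "1.0.0",
  "main": "server.js",
  "dependencies": {
    "express": "^4.17.1",
    "sqlite3": "^5.0.2"
  }
}
```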
Create a dependencies stage that installs the dependencies from the
package.json before the deploy stage:
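A sketch of what such a stage could look like, assuming a Node.js 16 base image (the image tag and stage name are illustrative):

```dockerfile
# Stage that only installs dependencies; this layer is rebuilt only
# when package.json (or package-lock.json) changes, not on source edits.
FROM node:16 AS dependencies

WORKDIR /app
COPY package*.json ./
RUN npm install --production
```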
Then just copy
node_modules to your final deploy stage:
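A sketch of that copy step, assuming a dependencies stage built from node:16 and a slim final image (the stage names and server.js entry point are illustrative):

```dockerfile
# Final image: slim base, reusing node_modules from the dependencies stage.
FROM node:16-slim AS deploy

WORKDIR /app
USER node
# --chown so the node user can access (and, if needed, modify) the files
COPY --chown=node:node --from=dependencies /app/node_modules ./node_modules
COPY --chown=node:node . .
CMD ["node", "server.js"]
```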
Notice a few interesting points:
- The full Node.js image lets you run npm install commands without having to figure out build dependencies, etc.
- The final image runs as the user node instead of root.
- The user node is provided by default in the official Node.js Docker images.
- You may or may not need COPY --chown=node:node, depending on whether your app will modify any of those files or directory contents.
Building native packages may take some time, especially if you opt for Arm emulation (QEMU) instead of cross-builds. To prevent those packages from being rebuilt whenever you change something in your
package.json, you can add an extra native-deps stage before dependencies.
This is a good idea for the sqlite3 package from our example because, unless explicitly told otherwise, it builds
libsqlite from source instead of linking against a system-installed version. Add the stage before dependencies:
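A sketch of such a stage (the stage name, image tag, and working directory are assumptions):

```dockerfile
# Pre-build native npm packages in a separate stage; this layer is only
# rebuilt when the sqlite3 version changes, not on every package.json edit.
FROM node:16 AS native-deps

WORKDIR /build
RUN npm install sqlite3
```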
Then, in the dependencies stage, copy the pre-built sqlite3 npm package before running npm install:
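For example, assuming a native-deps stage that installed its packages under /build (stage names and paths are illustrative):

```dockerfile
FROM node:16 AS dependencies

WORKDIR /app
# Reuse the sqlite3 module pre-built in the native-deps stage so that
# npm install does not compile it from source again.
COPY --from=native-deps /build/node_modules ./node_modules
COPY package*.json ./
RUN npm install --production
```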
This is not exactly a tip for improving build speed, but I bet you are curious about how to do it.
In the native-deps stage, use the arguments
--build-from-source --sqlite=/usr, as described in the sqlite3 documentation. You don't need to install libsqlite with
apt, because it's there by default in the full version of the Node.js Docker image:
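A sketch of the native-deps stage with those arguments (the image tag, stage name, and working directory are assumptions):

```dockerfile
FROM node:16 AS native-deps

WORKDIR /build
# Build the binding from source, but link against the libsqlite already
# shipped in the full Node.js image instead of compiling libsqlite itself.
RUN npm install sqlite3 --build-from-source --sqlite=/usr
```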
In the deploy stage install libsqlite, since it's not available in the slim version:
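For example, on a Debian-based slim image (libsqlite3-0 is the Debian name of the runtime library package):

```dockerfile
FROM node:16-slim AS deploy

# The slim image does not ship libsqlite, so install the runtime library.
RUN apt-get update \
    && apt-get install -y --no-install-recommends libsqlite3-0 \
    && rm -rf /var/lib/apt/lists/*
```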