TorizonCore Build Environment

Article updated at 29 Jul 2019


Yocto / OpenEmbedded is often considered difficult to set up, because its dependencies are very specific to the Linux distribution you are using.

Docker is a technology widely used to solve dependency issues and thus it is a very good fit to solve OpenEmbedded dependencies. Of course someone has already thought about that and, even better, it was adopted as a software component under the umbrella of the Yocto Project itself.

The TorizonCore Build Environment is a project that extends CROPS. This article documents how to use it to build a Toradex Embedded Linux image, focusing on TorizonCore images. The project is available on GitHub.

This article complies to the Typographic Conventions for Torizon Documentation.


The following prerequisites are mandatory:

  • Linux / Windows host with Docker installed.

The following prerequisites are recommended, though optional:

The current article does not replace, but rather extends the following ones. Carefully going through them is recommended:

Note: the instructions provided have been tested on Linux and Windows hosts. Things are expected to work on macOS as well, but specific instructions are not provided.

Basic Build Configuration

Since Docker, as well as the Torizon/CROPS image used for the Yocto build, allows for great flexibility, this article has a basic section that guides you through building the Torizon image, and an advanced section, which is in fact just a compendium of additional and alternative configuration options.


A few steps are required before you are ready to start a build. You need to execute these steps once every time you want to start a new build; if you want to modify, access or continue an existing build, they can be skipped.

On Windows

Open a PowerShell window and create a Docker volume. Change the ownership of /workdir to UID/GID 1000 (your user instead of root), as required by the Torizon/CROPS container:

$ docker volume create yocto-workdir
$ docker run -it --rm -v yocto-workdir:/workdir busybox chown -R 1000:1000 /workdir

On Linux

Open a shell using a terminal emulator and create a directory to be shared with the Torizon/CROPS container. Create the directory as your regular user; don't use sudo:

$ cd ${HOME}
$ mkdir ${HOME}/yocto-workdir


You will most likely want to make changes to the default image; otherwise, you could simply use the pre-built release provided by Toradex. It is convenient to have access to the build from your host OS instead of from inside a container, because then you can use your favorite IDE, text editor, graphical tools, etc.

On Windows

Since the Torizon/CROPS container is a Linux container, not a native Windows container, it is not possible to use the bind-mount method for accessing the build contents. The workaround proposed by the CROPS project is to serve the build directory over a Samba share.

Open another PowerShell window and type:

$ docker run --rm -d -it --net host --name samba -v yocto-workdir:/workdir crops/samba

Now you can close this PowerShell window, since the Samba container is running in the background. Open the File Explorer and access your workdir from the local network, using the address below:


On Linux

Since you are sharing a local directory with the container, you can access the files inside it right away. Just open a new terminal or the file explorer of your choice and go to the previously created ${HOME}/yocto-workdir.


Now you are ready to start a build. You need to know for which Toradex SoM you are building and use the correct machine value in the next steps. Please consult the respective Torizon or regular BSP articles to get the correct machine value:

For instance, if you have a Colibri iMX6 you'll use MACHINE=colibri-imx6 in the next steps whenever required.

You will also have to decide what target image you want to build. See the resources below for some options:

For instance, if you want to build the default Torizon image with Docker runtime you'll use TARGET=torizon-core-docker in the next steps.
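The MACHINE and TARGET pair simply ends up as the --cmd string passed to the container in the next steps. The sketch below is purely illustrative (the build_cmd function is a hypothetical helper, not part of the Torizon/CROPS tooling) and just shows how the two values are combined:

```shell
# Illustrative only: build_cmd is a hypothetical helper that assembles
# the --cmd string from a MACHINE and TARGET pair.
build_cmd() {
  echo "MACHINE=$1 TARGET=$2"
}

CROPS_CMD="$(build_cmd colibri-imx6 torizon-core-docker)"
echo "${CROPS_CMD}"   # MACHINE=colibri-imx6 TARGET=torizon-core-docker
```

The resulting string is what you would place inside --cmd="..." when starting the container.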

Follow the steps according to your host OS, machine and (optionally) target image.

On Windows

Start the Torizon/CROPS container using the recently created volume as its working directory. Give the container a name; you may need it later. In our example the name is crops:

$ docker run --rm -it --name=crops -v yocto-workdir:/workdir torizon/crops-toradex --workdir=/workdir --cmd="MACHINE=<machine> TARGET=<target>"
$ docker run --rm -it --name=crops -v yocto-workdir:/workdir torizon/crops-toradex --workdir=/workdir --cmd="MACHINE=colibri-imx6 TARGET=torizon-core-docker"

On Linux

Start the Torizon/CROPS container using the recently created directory as its working directory. Give the container a name; you may need it later. In our example the name is crops:

$ docker run --rm -it --name=crops -v ${HOME}/yocto-workdir:/workdir torizon/crops-toradex --workdir=/workdir --cmd="MACHINE=<machine> TARGET=<target>"
$ docker run --rm -it --name=crops -v ${HOME}/yocto-workdir:/workdir torizon/crops-toradex --workdir=/workdir --cmd="MACHINE=colibri-imx6 TARGET=torizon-core-docker"

Once the build finishes, you will see a prompt similar to the one presented below, which means you are inside a container shell. It is an environment ready for Yocto builds, where you can run or re-run builds, build single components and do pretty much anything related to Yocto.



Once the build is done, there are two ways to flash the image to the SoM:

  • Bring up a container that serves the image over HTTP and install over Ethernet.
  • Copy the image to your host machine and use an SD card or USB flash drive.

Serve Over HTTP

Open a new shell to bring up a trivial plain-HTTP file server using a Python container. Notice that this server is a convenience for a quick and dirty test and is not meant for a production environment:

$ docker run --rm -it -v yocto-workdir:/workdir --expose 8080 -p 8080:80 python:3.7-alpine python3 -m http.server --directory=/workdir/torizon/build-torizon/ 80

Find the IP of your computer. Hint: on PowerShell you can type ipconfig and on a Linux shell you can type ip addr.

On the graphical interface of the Toradex Easy Installer running on the board, add your local server to the feeds:

http://<your PC IP>:8080/image_list.json
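As a concrete sketch, assuming your PC's IP is 192.168.1.42 (a placeholder; replace it with the address reported by ipconfig or ip addr), the feed URL is composed like this:

```shell
# HOST_IP is a placeholder: replace it with your actual PC IP address.
HOST_IP=192.168.1.42
FEED_URL="http://${HOST_IP}:8080/image_list.json"
echo "${FEED_URL}"   # http://192.168.1.42:8080/image_list.json
```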

Once the list of images is refreshed, you will see your image ready to be installed on the SoM.

Use an SD Card or USB Stick

You can access the final image either from the Samba mount or your bind-mount directory. Decompress it onto an SD card or USB flash drive, insert it into the board and follow the instructions on how to install an image using the Toradex Easy Installer.

If for some reason you don't have the Samba container running on Windows, or have chosen to use a volume on Linux instead of a bind-mount, you can copy the compressed Toradex Easy Installer image to your host machine using docker cp. See the instructions in the collapsible section below:

Copy image from Docker container to your computer
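A hedged sketch of the docker cp step is given below. The deploy path is an assumption based on the default build directory name (build-torizon) and the standard Yocto deploy layout; adjust it to your actual machine and build directory:

```shell
# All paths here are assumptions based on the article's default setup.
MACHINE=colibri-imx6
DEPLOY_DIR="/workdir/torizon/build-torizon/deploy/images/${MACHINE}"

# Command to run on the host while the "crops" container exists:
CP_CMD="docker cp crops:${DEPLOY_DIR} ."
echo "${CP_CMD}"
```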

Advanced Build Configuration

This section goes into more detail about using the Torizon/CROPS container.


This section talks about how to keep different mount points for different parts of the Yocto setup:

Mount Downloads, Deploy and sstate-cache in Separate Directories or Volumes

The basic setup uses a single volume or bind-mount for your entire workdir. You could instead have separate ones for the downloads, sstate-cache and deploy directories, which could then be shared with other builds or backed up in case you want to wipe the rest of your workdir:

$ mkdir ${HOME}/yocto-downloads
$ mkdir ${HOME}/yocto-sstate-cache
$ mkdir ${HOME}/yocto-deploy

You would need to mount those inside the Torizon/CROPS container, in the root directory of the Yocto setup. For Torizon it would be /workdir/torizon/:

$ docker run --rm -it --name=crops -v ${HOME}/yocto-workdir:/workdir -v ${HOME}/yocto-downloads:/workdir/torizon/downloads -v ${HOME}/yocto-sstate-cache:/workdir/torizon/sstate-cache -v ${HOME}/yocto-deploy:/workdir/torizon/build-torizon/deploy torizon/crops-toradex --workdir=/workdir --cmd="MACHINE=<machine> TARGET=<target>"


This section has some remarks about how to modify the Yocto setup.

Use Vim Inside the Build Container

The Torizon/CROPS container includes the Vim tiny editor, which you can use for quick tweaks without having to resort to external tools. Vim is known for its steep learning curve; there are several tutorials available, including an interactive tutorial and a game. For instance, to quickly edit conf/local.conf:

$ vim.tiny conf/local.conf

Use a Helper Container to Make Edits

In addition, you could also bring up any helper container to edit files, such as busybox:

$ docker run --rm -it -v yocto-workdir:/workdir busybox sh

That said, things become much easier if you use your host machine as described earlier in the basic section.

Automatically Bring Up the Samba Container

If you want to automatically bring up the Samba container running as a daemon, a few flags must be changed:

$ docker run -d -it --restart always --net host --name samba -v yocto-workdir:/workdir crops/samba


This section presents some different ways to do things using the Torizon/CROPS container.

Only Configure But Don't Build

You can just bring up the Torizon/CROPS container without starting a build, but having everything configured for building a Torizon image. Just omit the TARGET variable:

$ docker run --rm -it --name=crops -v ${HOME}/yocto-workdir:/workdir torizon/crops-toradex --workdir=/workdir --cmd="MACHINE=<machine>"

Only Bring Up But Don't Configure

You can just bring up the Torizon/CROPS container without having it configured for building a Torizon image:

$ docker run --rm -it --name=crops -v ${HOME}/yocto-workdir:/workdir torizon/crops-toradex --workdir=/workdir

You will then be in a shell inside the container, where you can follow the steps described by any Yocto setup documentation, for instance OpenEmbedded (core) or Build TorizonCore.

Multiple Build Directories

You can use the helper scripts from the Torizon/CROPS container to set up a build directory with a different name, while still sharing the rest of the setup (layers, downloads, etc.). You just have to pass the build directory name after the helper script variables in --cmd. See the examples below, where the first is generic and the second uses build-torizon-colibri-imx6 as the build directory:

$ docker run --rm -it --name=crops -v ${HOME}/yocto-workdir:/workdir torizon/crops-toradex --workdir=/workdir --cmd="MACHINE=<machine> <build directory>"
$ docker run --rm -it --name=crops -v ${HOME}/yocto-workdir:/workdir torizon/crops-toradex --workdir=/workdir --cmd="MACHINE=<machine> build-torizon-colibri-imx6"

Build the Poky-based Embedded Linux BSP

Throughout the article, a helper script was invoked when starting a Torizon/CROPS container to set up and/or build a TorizonCore image. For convenience, a corresponding script is provided as well for building the Poky-based Embedded Linux BSP. You can simply replace one with the other in the instructions. An example is provided for reference:

Note: other things might change, such as the directory structure of the build, which might require tweaks in the examples given throughout the article.

$ docker run --rm -it --name=crops -v ${HOME}/yocto-workdir:/workdir torizon/crops-toradex --workdir=/workdir --cmd="MACHINE=<machine> TARGET=<target>"
$ docker run --rm -it --name=crops -v ${HOME}/yocto-workdir:/workdir torizon/crops-toradex --workdir=/workdir --cmd="MACHINE=colibri-imx7 TARGET=console-tdx-image"

Build for a Specific Release Version

It is also possible to build any other version of the Embedded Linux BSP, for instance release 2.8, which is sometimes referred to as the Ångström-based Embedded Linux BSP:

$ docker run --rm -it --name=crops -v ${HOME}/yocto-workdir:/workdir torizon/crops-toradex --workdir=/workdir --cmd="MACHINE=<machine> TARGET=<target> BRANCH=<repo manifest branch>"
$ docker run --rm -it --name=crops -v ${HOME}/yocto-workdir:/workdir torizon/crops-toradex --workdir=/workdir --cmd="MACHINE=colibri-imx7 TARGET=console-tdx-image BRANCH=LinuxImage2.8"


This section has comments on the install process described earlier.

HTTP Server Remarks

To install an image over the network, all you need is a plain HTTP file server. It could be Nginx, Apache, Node.js + Express or any other server or framework that serves files over HTTP. We have chosen the built-in Python simple HTTP server, which is quick to set up in a development environment.

The Torizon/CROPS container already sets up a generic image_list.json file with all possible machines. It is located in /etc/image_list.json and copied to each build directory you create, as /workdir///image_list.json. You can modify it at will.
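For reference, an image_list.json for the Toradex Easy Installer is a small JSON file pointing to one or more image descriptions. The fragment below is a minimal hand-written illustration (the entry path is hypothetical; consult the Toradex Easy Installer article for the fields actually supported):

```json
{
    "config_format": 1,
    "images": [
        "colibri-imx6/image.json"
    ]
}
```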

You can find more information in the Toradex Easy Installer article, for instance in the section Unattended Flashing Over Ethernet.

Additional Resources

Cross Platform Enablement With Containers, or CROPS, was created by Randy Witt from Intel circa October 2015 (according to the initial commit on GitHub). For a better understanding of what CROPS is, check the talk Cross Platform Enablement for the Yocto Project with Containers by Randy Witt (Intel) at the Embedded Linux Conference 2017. The slides from the talk are available on the Yocto Project website.

The CROPS repository on GitHub has quite a few projects. For the scope of this article we identify a few relevant ones:

  • yocto-dockerfiles: the base for the other projects. It has Dockerfiles for many different Linux distros and versions, and is not used by us directly, but rather indirectly through the CROPS project poky-container.
  • poky-container: this is the container actually used for building the Yocto images. Its base is the latest Ubuntu image supported by the Yocto Project, from yocto-dockerfiles.


On a Windows host, the following issues have been observed:

  • Issue: sometimes the terminal seems to become unresponsive during the build, while in fact things are still running properly.
    • Solution: hit the Enter key a few times and be patient.
  • Issue: when building heavy packages such as Firefox, the build was observed to fail due to lack of resources.
    • Solution: the issue was solved by re-building everything from scratch after re-configuring Docker with more CPU, RAM and swap resources.