
Torizon Best Practices Guide

 

Article updated at 04 Dec 2020

Select the version of your OS from the tabs below. If you don't know the version you are using, run the command cat /etc/os-release or cat /etc/issue on the board.



Remember that you can always refer to the Torizon Documentation, where you can find many relevant articles that may help you during application development.

Torizon 5.0.0

Introduction

Torizon, as explained in the TorizonCore Technical Overview, provides the container runtime and the Debian Containers for Torizon, among others listed in the List of Container Images for Torizon, to simplify the developer's life and keep the development model close to that of desktop or server applications. Even though we follow desktop and server application standards as much as possible, some requirements inherent to embedded systems development diverge from them. One typical example is hardware access.

Torizon also has architecture-specific aspects, like graphics and UI, that you should consider when developing a new application or porting an existing one. In this chapter, we will discuss some of those potential issues and how to handle them.

Development Environment

Developing applications for Torizon can be done using command-line tools or, as we recommend, the Visual Studio Extension for Torizon and the Visual Studio Code Extension for Torizon. This article contains best practices that apply to all of these workflows. Nevertheless, we focus on explaining how to do things with the Visual Studio Code Extension for Torizon.

This article complies with the Typographic Conventions for Torizon Documentation.

Running in a Container

Running your application in a container means that it will run in a "sandboxed" environment.

This fact may limit its interaction with other components in the system. See below some examples of things that may not work when your application is running inside a container:

  • Changing configuration settings
  • Storing data permanently
  • Other operations

There are solutions for many of those scenarios and, usually, those aren't too hard to apply. In this article, we'll discuss some of them, keeping in mind best practices.

Prerequisites

Storing Data Permanently

Containers are transient by nature. Since a container can be destroyed and re-created, storing data inside the container's filesystem is not a good idea: the data disappears when the container is removed, it cannot be shared with other containers, and it is hard to manage. See how to overcome this in the next sub-sections.

Bind Mounts

You can mount a directory from the host filesystem inside the container to store non-volatile data permanently on the SoM's flash memory. This technique is known as a bind mount.

You may need to change or configure your application to store data in that location (or mount the folder where the application expects it).
Also, consider that the files' UID/GID will be shared with the underlying host OS, so using coherent IDs may help. Debian Containers for Torizon use the same users/UIDs and groups/GIDs as the base OS, keeping file permissions consistent.
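
For illustration, here is how a bind mount might be declared in a docker-compose file. This is a minimal sketch: the service name, image name, and paths are placeholders, not values required by Torizon.

    version: "2.4"
    services:
      my-app:
        image: my-app                        # hypothetical application image
        volumes:
          # bind mount: <host path>:<container path>
          - /home/torizon/app-data:/app/data

Data written to /app/data inside the container ends up in /home/torizon/app-data on the module and survives when the container is destroyed and re-created.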

Volumes

You can also use docker volumes instead of providing an explicit path on the local filesystem. The runtime manages volumes created via the docker volume command or a docker-compose file.
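
A named volume can be declared in a similar way; again, the names below are placeholders and this is only a sketch.

    version: "2.4"
    services:
      my-app:
        image: my-app             # hypothetical application image
        volumes:
          - app-data:/app/data    # named volume instead of an explicit host path

    volumes:
      app-data:                   # created and managed by the container runtime

On the device, you can inspect such volumes with docker volume ls and docker volume inspect app-data.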

Bind Mounts and Volumes on VS Code

In the Visual Studio Code extension, you can add bind mounts and volumes:

  • Click the + icon next to the volumes list in the configurations view.
  • Insert either:
    • The path of the folder on the host filesystem.
    • Or the name of your docker volume.
  • Insert the path where you want it to be mounted inside the container.
    • By default, volumes are mounted as read-write. If you want to mount one read-only, add ,ro after the container's path.

Figure: Adding Bind Mounts and Volumes

Connectivity

By default, containers connect to Docker's bridge network. This configuration allows containers to communicate with the outside world with no restrictions, but it prevents them from being reached from the outside. We cover three possible networking configurations, each recommended for specific situations:

  • Expose ports: used when your application needs to receive an inbound connection from outside the container on a specific port or a set of ports. Attackers cannot access ports that are not explicitly exposed, and you can use the same port number on several containers, even if the host uses that port number for another application.
  • Private Networks: used when you need containers to communicate between themselves while preventing the external world from accessing those communication endpoints. For example, if you have a container exposing a REST API (backend) to a container that implements a web UI (frontend).
  • Host Network: Used when you need to access the network with the same IP and configuration used by processes running natively on the host OS. This method is the least recommended since you expose the entire container networking to the outside, and you should only choose it if it is really required.

Exposed Ports - Inbound Communication

It's possible to enable communication on specific ports using the ports setting. To expose a port, you have to:

  • Click on the + sign next to ports.
  • Add the port information in the format: <port number>/<tcp/udp>, for example 8080/tcp.
  • Add a matching port number that should be used on the host or leave it empty to let the runtime assign a free port number.

Figure: Exposing Ports
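
For reference, the same kind of mapping can be expressed in a docker-compose file; the port numbers and names below are only examples.

    version: "2.4"
    services:
      my-app:
        image: my-app            # hypothetical application image
        ports:
          - "8080:8080/tcp"      # <host port>:<container port>/<protocol>
          # - "8080/tcp"         # omit the host port to let the runtime pick a free one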

Private Networks - Inter-container Communication

There are scenarios where you may want your containers to communicate only with each other on the same device. You can do it by creating a private docker network.
Containers on a private network can reach each other with no restrictions, without needing to explicitly enable ports, but only if they are on the same network. This remark is important because you can create as many networks as you want and have one container on more than one private network.

For example, you may have a container exposing a REST API (backend) to a container that implements a web UI (frontend). The backend and frontend will be on the same private network. The frontend will also be on the bridge network, exposing the port used to serve webpages.

To use a private network, you have to create it on the device using the docker network create command or defining it in docker-compose. Then you can add your container to that network:

  • Press the + sign next to networks.
  • Type your network name:
    • If using docker network create <network>, type the <network> name
    • If using a docker-compose file in your settings, then you have to prepend #%application.id%#_ to the network name you set in the compose file.

Figure: Creating Private Networks
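
As a sketch of the backend/frontend example above, using hypothetical image names, a docker-compose file could define the private network like this:

    version: "2.4"
    services:
      backend:
        image: my-backend       # hypothetical REST API image
        networks:
          - internal            # reachable only from containers on this network
      frontend:
        image: my-frontend      # hypothetical web UI image
        networks:
          - internal
        ports:
          - "80:8080/tcp"       # webpages are still published to the outside

    networks:
      internal:                 # private network managed by docker-compose

If you then reference this network from the extension's settings, remember the #%application.id%#_ prefix mentioned in the steps above.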

Host Network: Using the Host Network Inside a Container

For some kinds of applications, typically those that need to use low-level UDP-based protocols, you may need to access the network using the same IP and configuration used by processes running natively on the host OS.
In this case, you should enable host network mode.

  • Select the + sign next to extraparms.
  • Input network_mode as the key and host as the value.

Figure: Using the Host Network

All the ports exposed by applications running in your container will be exposed directly on the host network interfaces in host mode. This also means that you won't be able to expose services on ports already used by the host (for example, port 22 for SSH).
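
If you start the container from a docker-compose file instead, the equivalent option is network_mode; a minimal sketch with a placeholder image name:

    version: "2.4"
    services:
      my-app:
        image: my-app          # hypothetical application image
        network_mode: host     # share the host's network stack; port mappings are not used in this mode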

Hardware Access

Container technology is very popular in domains where direct access to the hardware is usually forbidden, like servers and cloud-based solutions. On the other hand, software running on an embedded device will probably need to access hardware devices, for example, to collect data from sensors or to drive external machinery.

Docker provides ways to access specific hardware devices from inside a container, and often one needs to grant access from both a permission and a namespace perspective.

From a permission perspective, there are two ways to grant access to a container. You will hardly ever worry about those since they are either well documented or abstracted by our IDE Extensions. They are presented below:

  • Privileged: running a container in privileged mode is usually unnecessary, and when you do it, you lose the protection layer inherent to using containers, since the entire host system becomes accessible from the container.
  • Control group (cgroup) rules: those rules give more granular access to some hardware components, solving the permission issue. We make use of those when it is required, but the VS Code extension abstracts them. If you've read our articles Debian Containers for Torizon or Using Multiple Containers with TorizonCore that are focused on the command-line, you might have already come across those.

From a namespace perspective, there are also two ways to grant access to a container. You must grant access on a per-peripheral basis.

  • Bind Mounts: you can share a file or a directory from the host to the container. Since devices are abstracted as files on Linux, bind mounts can expose devices inside the containers. When using pluggable devices, you might not know the exact mount point in advance and thus bind mount an entire directory. You can learn more about bind mounts in a previous section of this article about data storage.
  • Devices: this is a more granular method for adding devices to containers, and it is preferred over bind mounts. It is better for security since you avoid exposing, for instance, a storage device that may be erased by an attacker.

Torizon uses coherent naming for the most commonly used hardware interfaces. For instance, a family-specific interface will have a name corresponding to the family name used in the datasheet and in tables across our other articles. This helps you write containers and applications that are pin-compatible in software if you switch from one computer on module to another.

Inside Torizon base containers, there is a user called torizon mapped to several groups associated with hardware devices, including dialout, audio, video, gpio, i2cdev, spidev and input. That means that, when using the torizon user, it's not necessary to be root to access hardware interfaces like sound cards, displays, serial ports, GPIO controllers, etc. So when developing your application to run inside a container, run it as the torizon user so that access to most hardware interfaces works without requiring any additional privileges.
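
Putting the two points together, a docker-compose sketch that maps a single device and runs the application as the torizon user could look like this; the image name is a placeholder, and the device path reuses the /dev/colibri-i2c1 example from the next section.

    version: "2.4"
    services:
      my-app:
        image: my-app                            # hypothetical application image
        user: torizon                            # avoid running as root inside the container
        devices:
          - /dev/colibri-i2c1:/dev/colibri-i2c1  # same path on the host and inside the container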

Sharing a Device Between Host and Container on VS Code

To share a device from the host OS into a container, you can:

  • Press the + icon next to devices.
  • Provide the full path of the device (for example, /dev/colibri-i2c1).

Figure: Adding a Device

TorizonCore User Groups

The device will be mapped to the same path inside the container and use the same access rights specified for the host. Since the default user on the Toradex containers is torizon, and you should avoid using root as much as possible to limit potential security issues, you may have to add your user to specific groups to enable access for different kinds of devices. Those groups are mirrored between the host OS and our Debian-based container images, making things more intuitive.

The groups that are currently supported are listed in the table below:

Group     Description
gpio      allow access to the GPIO character device (/dev/gpiochip*), used by libgpiod
dialout   allow access to UART interfaces
i2cdev    allow access to I2C interfaces
spidev    allow access to SPI interfaces
audio     allow access to audio devices
video     allow access to graphics and backlight interfaces
input     allow access to input devices

Adding User to Groups on VS Code

To add the torizon user - or any other user - to groups, you can:

  • Expand the Custom Properties
  • Press the Edit button on the property buildcommands
  • Add the command RUN usermod -a -G <group 1>,<group 2>,... torizon
    • For example, to add access to GPIO and UART: RUN usermod -a -G gpio,dialout torizon

Figure: Adding a User to Groups

Sharing a Pluggable Device

Suppose your application needs to access devices that may be plugged/unplugged at runtime. In this situation, the static mapping will not work. There is no way, at the moment, to map a device into a running container. If you need to access this kind of device, the only solution is to mount the /dev folder as a volume.
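
A hedged docker-compose sketch of that approach, assuming a USB-serial adapter that shows up as /dev/ttyUSB*: the device_cgroup_rules entry (character devices with major number 188, used by usb-serial) is only an example and must match your device.

    version: "2.4"
    services:
      my-app:
        image: my-app             # hypothetical application image
        volumes:
          - /dev:/dev             # namespace: hotplugged device nodes appear inside the container
        device_cgroup_rules:
          - "c 188:* rmw"         # permission: example rule for USB-serial devices (major 188)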

Exceptions: When You Must Run as Root Inside the Container

There may be situations that cannot be easily worked around, and you need to run the application inside the container as root. However, notice that you should still avoid running the container as privileged, especially in this scenario. Here are some examples:

  • How to Use CAN on TorizonCore: the CAN interface is abstracted as a network interface. Since NetworkManager does not support configuring CAN interfaces, you must use iproute2 and therefore run as root.
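
For reference, configuring the interface with iproute2 looks roughly like the following sketch; it assumes a CAN interface named can0 and a bitrate of 500 kbit/s, and it must run as root in a container that can reach the host's network interfaces (see How to Use CAN on TorizonCore for the exact setup).

    # configure and bring up can0 with iproute2 (run as root)
    ip link set can0 type can bitrate 500000
    ip link set can0 up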

Graphical User Interface (GUI)

The default choice for a Graphical User Interface (GUI) in Torizon is to use the Wayland protocol. This requires a compositor that manages the screen and input devices and allows clients to access "surfaces" on the screen. Torizon provides a container with the Weston compositor and clear instructions about how to run it in Debian Containers for Torizon.

Starting Weston Automatically on Your VS Code Project

Suppose you want to start Weston automatically when debugging your application. In that case, you can create a docker-compose file inside your appconfig folder and add its name to the dockercomposefile configuration property.
You can find a sample of such a file on Debian Containers for Torizon.

Your container must communicate with the Weston compositor. This is done using a socket that, by default, is created under /tmp. Mounting /tmp as a volume in your container will let your application interact with Weston.
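
A simplified docker-compose sketch of this arrangement is shown below. The weston service here is intentionally incomplete (it also needs devices, cgroup rules, and other options documented in Debian Containers for Torizon), and the application image name is a placeholder; the point is only how /tmp is shared so the Wayland socket is visible to both containers.

    version: "2.4"
    services:
      weston:
        image: torizon/weston       # see Debian Containers for Torizon for the required extra options
        volumes:
          - /tmp:/tmp               # the Wayland socket is created here
      my-app:
        image: my-app               # hypothetical Wayland application image
        depends_on:
          - weston
        volumes:
          - /tmp:/tmp               # same /tmp, so the application finds the Wayland socket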

GUI Frameworks/Toolkits and Wayland

Many commonly used GUI toolkits support Wayland as a rendering back end. Usually, you need to set an environment variable that selects the rendering mode.
In the following table, you can find configuration settings for some popular toolkits:

Toolkit   Environment setting
GTK3      GDK_BACKEND="wayland"
Qt5       QT_QPA_PLATFORM="wayland"
SDL2      SDL_VIDEODRIVER="wayland"

To set an environment variable:

  • Click on env under Custom Properties in the configuration view
  • Add the ENV statement as you would do in a Dockerfile: ENV GDK_BACKEND="wayland"

Figure: Adding an Environment Variable
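
Alternatively, if your application is started from a docker-compose file, the variable can be set there; Qt5 is used as an example and the image name is a placeholder.

    version: "2.4"
    services:
      my-app:
        image: my-app                     # hypothetical application image
        environment:
          - QT_QPA_PLATFORM=wayland       # select the Wayland back end for Qt5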