Running Multiple OS on the Same Embedded CPU: How OEMs Can Overcome the Chip Shortage Crisis

As technology advances, especially in the automotive, consumer electronics, and medical sectors, chips are embedded in almost every aspect of daily life, from automotive infotainment systems to smart surgical screens in the operating room. During the pandemic, a slowdown in semiconductor manufacturing led to a chip shortage that was predicted to ease by 2023. Still, the issue has continued and interfered with the production of common technology. As more countries lift restrictions and economies recover, manufacturers of embedded devices and other technology can also return to normal operation, but post-pandemic conditions have left a chip shortage that limits manufacturers' ability to build and distribute products. For engineers, product managers, and developers, this might seem like a difficult hurdle, but there are alternatives: virtualization and containers. Both can be used in embedded Linux systems to support multiple OS and keep application deployment deadlines on track.

Containers and Embedded Linux

Using a containerized environment in embedded Linux differs from its desktop counterparts because of the limited resources on embedded systems. Containers are marketed as a lightweight, more flexible alternative to a hypervisor, and they let developers isolate applications from others running on the same system. Containers in Linux are conceptually similar to chroot, though not identical: processes are isolated from other applications, and each application gets its own namespace, memory, and network resources. Although containers are isolated from one another, they can still communicate with each other through their APIs, and this inter-container communication can be subject to fine-grained security policies (roughly comparable to a firewall in networking).
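
The namespace isolation described above is the mechanism container runtimes build on. The following is a minimal sketch, assuming a Linux host with the util-linux `unshare` tool and root privileges; the command and hostname used are illustrative only.

```python
# Minimal sketch: running a process in its own Linux namespaces, the building
# block of container isolation. Assumes a Linux host with util-linux's
# `unshare` available and root privileges; names here are illustrative.
import subprocess

def run_isolated(command):
    """Run `command` in new PID, mount, and UTS namespaces."""
    return subprocess.run(
        [
            "unshare",
            "--pid", "--fork",    # new PID namespace, the child sees itself as PID 1
            "--mount-proc",       # remount /proc so process listings reflect the new namespace
            "--uts",              # separate hostname namespace
        ] + command,
        check=True,
    )

if __name__ == "__main__":
    # The shell below only sees processes and the hostname of its own namespace.
    run_isolated(["sh", "-c", "hostname isolated-demo && ps ax"])
```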

In development, containers are much more convenient and versatile than virtual machines. A developer can create a container and deploy it to various systems, including a local embedded development environment or the target embedded hardware. The container's configuration and custom environment setup are packaged with it, so nothing more than deploying the container is necessary; in other words, the target device no longer needs to be configured individually after the application is deployed. The same can be done with an embedded Linux system across chipsets, emulators, and devices.
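
A build-once, deploy-anywhere flow might look like the sketch below. The image name, registry, and target host are hypothetical, and it assumes Docker with Buildx and QEMU binfmt emulation configured for cross-architecture builds.

```python
# Sketch of building one multi-architecture image and deploying it to an
# embedded target. Image, registry, and device names are hypothetical;
# assumes Docker with buildx and binfmt emulation set up on the build host.
import subprocess

IMAGE = "registry.example.com/hmi-app:1.0"           # hypothetical image and registry
PLATFORMS = "linux/amd64,linux/arm64,linux/arm/v7"    # desktop plus 64-bit and 32-bit ARM targets

# Build and push a single multi-arch image from the same source tree.
subprocess.run(
    ["docker", "buildx", "build", "--platform", PLATFORMS, "-t", IMAGE, "--push", "."],
    check=True,
)

# On the embedded target (or an emulator), pulling the image is all that is
# needed; the runtime selects the matching architecture automatically.
subprocess.run(
    ["ssh", "root@target-device", f"docker pull {IMAGE} && docker run -d {IMAGE}"],
    check=True,
)
```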

Another advantage of containers is that you can create and remove them as needed, which makes maintenance much easier. Instead of managing an application through installed updates, an application container can simply be removed and recreated for each new version, update, or patch. This time-saving benefit has made containers the preferred choice for many application engineers and development teams where multiple operating systems, applications, and versions are deployed.
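
This "replace rather than patch" update model can be as simple as the sketch below. Container and image names are hypothetical, and it assumes Docker (or a CLI-compatible Podman) on the device.

```python
# Sketch of replace-style updates: remove the old container and start a new
# one from the updated image tag. Names are hypothetical; assumes a Docker
# or CLI-compatible Podman runtime on the target.
import subprocess

NAME = "hmi-app"

def deploy(image_tag):
    # Stop and remove the previous container, ignoring errors if none exists.
    subprocess.run(["docker", "rm", "-f", NAME], check=False)
    # Start a fresh container from the new version of the image.
    subprocess.run(
        ["docker", "run", "-d", "--name", NAME, "--restart=always", image_tag],
        check=True,
    )

deploy("registry.example.com/hmi-app:1.1")  # roll forward to the next release
```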

For manufacturers working with embedded Linux, containers can help alleviate the pressure of the chip shortage by providing a flexible application deployment solution. To host multiple OS, however, containers are used in conjunction with a hypervisor or virtualization.

Hypervisors, Virtual Machines, and Embedded Linux

Although there are many reasons to use containers in development, some OEMs still prefer virtualization. One disadvantage of containers is that they share the kernel of the underlying operating system, so they must be built for each OS they run on. Virtualization benefits developers who have multiple applications that must each run on its own operating system. Developers engineer and test applications in a virtualized environment that emulates each targeted device and its OS.

Virtualization allows applications to run on abstracted hardware instances, so it is beneficial when developers have a single application that must interact with the operating system as if it were running on dedicated hardware. This approach is a bit more "bulky" in terms of resource usage, but it still allows developers to run multiple OS on a single device. With chips in short supply, that is a huge benefit for manufacturers struggling to meet deployment deadlines without the hardware normally needed for engineering and QA testing.

In embedded Linux, the Kernel-based Virtual Machine (KVM) is commonly used to support virtualization. KVM enables multiple virtualized environments to run on the same physical hardware, and because it is a native part of the Linux kernel, it gives developers a ready-made way to run each virtual machine instance with its own memory, CPU, and network resources.
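
Launching a KVM-accelerated guest typically goes through QEMU. The sketch below assumes QEMU is installed and /dev/kvm is available on an x86_64 host; on ARM targets, qemu-system-aarch64 would be used in the same way. The kernel and root filesystem paths are hypothetical.

```python
# Sketch of launching a KVM-accelerated guest with QEMU on an embedded Linux
# host. Kernel and rootfs paths are hypothetical; assumes QEMU is installed
# and the kvm kernel modules are loaded so /dev/kvm exists.
import os
import subprocess

assert os.path.exists("/dev/kvm"), "KVM is not available on this host"

subprocess.run(
    [
        "qemu-system-x86_64",
        "-enable-kvm",                     # hardware-assisted virtualization via KVM
        "-m", "512",                       # 512 MB of RAM dedicated to the guest
        "-smp", "2",                       # two virtual CPUs
        "-kernel", "/opt/guests/bzImage",  # hypothetical guest kernel
        "-drive", "file=/opt/guests/rootfs.ext4,format=raw,if=virtio",
        "-append", "root=/dev/vda console=ttyS0",
        "-nographic",                      # serial console instead of a display
    ],
    check=True,
)
```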

Virtual machines are used in addition to containers when multiple operating systems are necessary. One underlying limitation of containers is that all containers on a host share the same OS kernel; this shared kernel, for example, is what makes it possible to run an Android container on top of an embedded Linux OS. To go beyond this limitation, OEMs sometimes use virtual machines to virtualize the hardware and then deploy containers configured for the specific operating system running on each instance. Developers get the best of both worlds: multiple operating systems run on a hypervisor, and containers are used to deploy isolated applications within each of them.
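
The combined pattern can be orchestrated with a small amount of glue code. The sketch below is purely illustrative: the guest hostnames and image names are hypothetical, and it assumes each guest runs its own container runtime reachable over SSH.

```python
# Sketch of the combined pattern: one virtual machine per operating system,
# with that OS's containers deployed inside it. Guest addresses, images, and
# the use of SSH for deployment are hypothetical assumptions.
import subprocess

GUESTS = {
    "guest-os-a": ["registry.example.com/infotainment:2.3"],
    "guest-os-b": ["registry.example.com/companion-app:1.7"],
}

for host, images in GUESTS.items():
    for image in images:
        # Each guest runs its own container runtime; the host only manages VMs.
        subprocess.run(
            ["ssh", f"root@{host}", f"docker pull {image} && docker run -d {image}"],
            check=True,
        )
```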

How Do Virtualization and Containers Help During the Chip Shortage?

It has been reported that the chip shortage could last until 2023. This means that application developers targeting embedded systems must find ways to continue product development with fewer chips available for testing. Virtual machine and container technology make it possible to test applications on multiple operating systems using a single physical hardware device. L4B Software works with both technologies so that manufacturers can stay on track with development deadlines and support embedded applications for as long as the chip shortage continues.
