LNST in containers

LNST supports running both the agents and the controller in containers on the host machine. LNST uses a custom RPC protocol for communication between the controller and the agents, carried over a separate network interface so that it does not interfere with the tests. It therefore does not matter where the controller and agents run, as long as they can reach each other.

With support for running LNST in containers, your machine setup might look like one of the following:

  1. both controller and agents are running on your baremetal machines

  2. controller is running on your baremetal machine and agents are running in containers

  3. controller is running in container and agents are running on your baremetal machine

  4. both controller and agents are running in containers

This article describes how to run the individual parts of LNST in containers. If you want to run either the controller or the agents on baremetal, see the Install LNST and Hello world sections.

Common requirements

We recommend using Podman: container support was developed and tested with Podman, but it should work with Docker as well.

If you want to use Podman, follow the installation steps on the official Podman installation page.

Containerized agents

Containers and networks are dynamically created based on recipe requirements. Containers are also automatically connected to networks.

Requirements

The first requirement is Podman; follow the installation steps on the official Podman installation page.

Container Networking

LNST's ContainerPoolManager supports three different plugins to generate the networks defined in the recipe requirements.

The first two are Podman's native network backends, netavark and CNI.

The final plugin is the recently added custom_lnst. This plugin has no requirements on your system or its configuration; it uses iproute2 (ip) commands to build a network from veth and bridge interfaces manually. The only requirement is to run the recipe with root privileges.
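
For illustration, the sketch below shows the general veth-and-bridge pattern such a plugin builds on. This is a hand-written approximation, not the plugin's actual code, and the interface names are made up:

import subprocess

def ip(args: str):
    # Run an iproute2 command; requires root privileges.
    subprocess.run(["ip"] + args.split(), check=True)

ip("link add lnst-br0 type bridge")               # one bridge per requested network
ip("link set lnst-br0 up")
ip("link add veth0a type veth peer name veth0b")  # one veth pair per container interface
ip("link set veth0a master lnst-br0")             # host end is plugged into the bridge
ip("link set veth0a up")
# the veth0b end is then moved into the container's network namespace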

To select which plugin is used for networking, set the network_plugin argument when creating the Controller:

from lnst.Controller import Controller
from lnst.Controller.MachineMapper import ContainerMapper
from lnst.Controller.ContainerPoolManager import ContainerPoolManager

ctl = Controller(
    poolMgr=ContainerPoolManager,
    mapper=ContainerMapper,
    network_plugin="custom_lnst",
    podman_uri="unix:/run/podman/podman.sock",  # mandatory, see below
    image="lnst",  # mandatory, see below
)

CNI system configuration

To use CNI, you may need to adjust the system-wide container configuration.

  1. Remove all of LNST's leftovers from /etc/cni/net.d/lnst_*.conflist if you have already tried to run LNST.

  2. Set the default network backend to cni by adding the following block to containers.conf (see the containers.conf documentation):

[network]
network_backend="cni"

  3. Install the CNI plugins:

dnf install containernetworking-plugins

  4. Start your Podman instance.

Enabling Podman API service:

The Podman API is also required; follow the steps below:

systemctl enable --now podman.socket

and get the socket URL:

systemctl status podman.socket | grep "Listen:"

Starting Podman API manually:

If you don't want to run the Podman API as a service, you can start it manually. Don't forget to run the command below with root privileges.

podman system service --timeout 0 --log-level=debug

The socket URL can be found at the top of the logs generated by this command.

The usual URL is unix:/run/podman/podman.sock
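
To verify that the socket is reachable before handing it to LNST, you can query the API directly. The sketch below assumes the podman-py Python bindings (the podman package on PyPI) are installed:

from podman import PodmanClient

# Socket URL from the step above, in the unix:// form podman-py expects
podman_uri = "unix:///run/podman/podman.sock"

with PodmanClient(base_url=podman_uri) as client:
    # Fails with a connection/API error if the service is not running
    # or the socket is not accessible (e.g. missing root privileges).
    print(client.version())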

Build LNST agent image

Currently, LNST does not support automated image building, so you need to build the LNST agent machine image yourself.

Podman uses different storage locations for root-full and root-less images, so make sure you build the image into root-full storage; LNST currently uses the default storage location. The build context should be the directory where your LNST project is located.

Your local copy of LNST is used by agents in containers.

Use the -t argument to name your image; this name is used later.

cd your_lnst_project_directory
podman build . -t lnst -f container_files/agent/Dockerfile
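
To double-check that the image landed in root-full storage, you can list it through the API as root. Again a sketch assuming the podman-py bindings; the image name "lnst" matches the -t argument above:

from podman import PodmanClient

with PodmanClient(base_url="unix:///run/podman/podman.sock") as client:
    # True only if the image is visible in the storage the API serves,
    # i.e. root-full storage when the service runs as root.
    print(client.images.exists("lnst"))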

Now everything is ready to run LNST in containers. For testing purposes, we can use the HelloWorldRecipe from Creating an executable “HelloWorld” test script.

Only the initialization of the Controller() object has to be changed:

from lnst.Controller import Controller
from lnst.Controller.MachineMapper import ContainerMapper
from lnst.Controller.ContainerPoolManager import ContainerPoolManager

podman_uri = ""  # Podman socket URL from the installation step above
image_name = ""  # name of the image from the build step above
ctl = Controller(
    poolMgr=ContainerPoolManager,
    mapper=ContainerMapper,
    podman_uri=podman_uri,
    image=image_name,
)

And run the script.
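
Putting it all together, a minimal end-to-end script might look like the following. The recipe body is paraphrased from the “HelloWorld” section and may differ in detail; the podman_uri and image values are examples matching the steps above:

from lnst.Common.IpAddress import ipaddress
from lnst.Controller import Controller, HostReq, DeviceReq, BaseRecipe
from lnst.Controller.MachineMapper import ContainerMapper
from lnst.Controller.ContainerPoolManager import ContainerPoolManager

class HelloWorldRecipe(BaseRecipe):
    # two agents, each with one interface connected to the same network
    machine1 = HostReq()
    machine1.eth0 = DeviceReq(label="net1")
    machine2 = HostReq()
    machine2.eth0 = DeviceReq(label="net1")

    def test(self):
        self.matched.machine1.eth0.ip_add(ipaddress("192.168.1.1/24"))
        self.matched.machine1.eth0.up()
        self.matched.machine2.eth0.ip_add(ipaddress("192.168.1.2/24"))
        self.matched.machine2.eth0.up()
        self.matched.machine1.run("ping 192.168.1.2 -c 3")

ctl = Controller(
    poolMgr=ContainerPoolManager,
    mapper=ContainerMapper,
    podman_uri="unix:/run/podman/podman.sock",
    image="lnst",
)
ctl.run(HelloWorldRecipe())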

Classes documentation

class lnst.Controller.MachineMapper.ContainerMapper

Implements a simple matching algorithm that maps containers to requirements. The containers are created by lnst.Controller.ContainerPoolManager.ContainerPoolManager based on those requirements.

set_pools_manager(pool_manager)

lnst.Controller.MachineMapper.ContainerMapper does not support multiple pools, but it requires a pool manager.

matches()

1:1 mapping of containers to requirements

class lnst.Controller.ContainerPoolManager.ContainerPoolManager(pools, msg_dispatcher, ctl_config, podman_uri, image, network_plugin='netavark', pool_checks=True)

This class implements the management of containers and networks. It uses the Podman API to handle container operations; the API needs to be running with root privileges.

Parameters:
  • pools (dict) – Dictionary that contains pools. In lnst.Controller.ContainerPoolManager.ContainerPoolManager, pools are created dynamically based on recipe requirements, so this parameter is not used; it is kept only so that the parameters of this class and lnst.Controller.AgentPoolManager.AgentPoolManager stay the same.

  • msg_dispatcher (lnst.Controller.MessageDispatcher.MessageDispatcher)

  • ctl_config (lnst.Controller.Config.CtlConfig)

  • pool_checks (boolean (default True)) – if False, will disable checking the online status of Agents

  • podman_uri (str) – URI of the Podman API socket; mandatory parameter

  • image (str) – name of the agent container image; mandatory parameter

  • network_plugin (Optional[str]) – Podman network plugin: ‘cni’, ‘netavark’ or ‘custom_lnst’. If unset, the network backend is auto-detected.

process_reqs(mreqs: dict)

This method is called by lnst.Controller.MachineMapper.ContainerMapper; it is responsible for creating containers and networks.

Containerized controller

Using containerized agents

Before proceeding with the containerized controller, you need to build the LNST agent image (see Containerized agents). You also need to provide the following parameters as environment variables to the controller container:

  • PODMAN_URI - URI of the Podman socket, e.g. tcp://localhost[:port]. This needs to be accessible from the container.

  • IMAGE_NAME - name of the image you built for agents.

The controller container expects CNI as the network backend for Podman.

Using baremetal agents

First, if you decide to run the agents on baremetal machines, you need to prepare machine XMLs (see Creating a simple machine pool). Instead of putting them into the ~/.lnst/pool directory, put them into the container_files/controller/pool directory; the machine XMLs are copied into the container from there during the build process. Podman doesn’t support copying files located outside of the build context, so the pool files have to live in the LNST project directory.

Note

To avoid having to deal with pool files you can simply mount your ~/.lnst/pool directory to /root/.lnst/pool/ in the container (read-only access is sufficient).

Build and run controller

Build the controller image:

cd your_lnst_project_directory
podman build . -t lnst_controller -f container_files/controller/Dockerfile

This copies the pool files to /root/.lnst/pool/ in the container and LNST from your_lnst_project_directory to /lnst in the container.

Note

If you want to avoid rebuilding the image every time you change your LNST project (e.g. during development), you can mount your_lnst_project_directory to /lnst in the container. LNST’s virtual environment is located outside of the /lnst/ directory, so if your changes require reinstallation of LNST and/or its dependencies, you still need to rebuild the image.

Before running the container, you need to provide environment variables:

  • RECIPE - name of the recipe class; recipes are loaded from lnst.Recipes.ENRT as a wildcard import

  • RECIPE_PARAMS - semicolon-separated list of parameters for the recipe, each in key=value format

  • FORMATTERS - semicolon-separated list of formatters; these are loaded from lnst.Formatters as a wildcard import

  • DEBUG - enables/disables LNST’s debug mode

Warning

RECIPE, RECIPE_PARAMS and FORMATTERS are parsed using Python’s eval function, which is a security risk. Make sure you trust the source of these variables.
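
To illustrate the risk, the snippet below mimics this kind of eval-based parsing. It is a simplified stand-in, not ContainerRunner's actual code:

raw = "perf_iterations=1;perf_duration=10"
params = {
    key: eval(value)  # evaluates an arbitrary Python expression!
    for key, value in (item.split("=", 1) for item in raw.split(";"))
}
print(params)  # {'perf_iterations': 1, 'perf_duration': 10}
# A hostile value such as "__import__('os').system(...)" would be executed
# just as happily, which is why these variables must come from a trusted source.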

Now, you can run the controller:

podman run -e RECIPE=SimpleNetworkRecipe -e RECIPE_PARAMS="perf_iterations=1;perf_duration=10" -e DEBUG=1 --rm --name lnst_controller lnst_controller

Note

Podman containers are NATed by default, so you may need to use a different --network mode to make the agent machines reachable from the controller container. If the agent machines are reachable from your host machine, --network=host should do the job. Read Podman’s documentation first.

Or you can run more complex recipes:

podman run -e RECIPE=XDPDropRecipe -e RECIPE_PARAMS="perf_iterations=1;perf_tool_cpu=[0,1];multi_dev_interrupt_config={'host1':{'eth0':{'cpus':[0],'policy':'round-robin'}}}" --rm --name lnst_controller lnst_controller

Classes documentation

class container_files.controller.container_runner.ContainerRunner

This class is responsible for running the LNST controller in a container.

Environment variables:

  • DEBUG: Set to 1 to enable debug mode

  • RECIPE: Name of the recipe class to run

  • RECIPE_PARAMS: Parameters to pass to the recipe class

  • FORMATTERS: List of formatters to use

  • MULTIMATCH: Set to 1 to enable multimatch mode

Environment variables specific to running agents in containers:

  • PODMAN_URI: URI of the Podman socket

  • IMAGE_NAME: Name of the container image

run() → ResultType

Initializes the recipe class with the parameters provided in RECIPE_PARAMS and executes it. Returns the overall result.
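
For reference, resolving the RECIPE name through the wildcard import described above can be pictured as follows. This is an illustrative sketch only; the real ContainerRunner may differ in detail:

import os
import lnst.Recipes.ENRT as enrt_recipes

# RECIPE must name a recipe class exported by lnst.Recipes.ENRT,
# e.g. SimpleNetworkRecipe or XDPDropRecipe.
recipe_cls = getattr(enrt_recipes, os.environ["RECIPE"])

# The values parsed from RECIPE_PARAMS become keyword arguments, roughly:
recipe = recipe_cls(perf_iterations=1, perf_duration=10)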