This article explains how to install Mistika Workflows inside a docker container on a Linux headless server, with the main purpose of executing workflows via CLI commands (for example, on render nodes, or for unattended processing of workflows in general).


Note: if you plan to install the Workflows container on a Linux desktop (with access to the GUI) rather than on a headless server, then go to this other article about Mistika docker on Linux desktops. And if you only want to configure a license server, without running workflows on the server, then follow this article about using Mistika docker on license servers.


This particular example is for Ubuntu Server, but it should be very similar in other Linux "server" distributions.


PREREQUISITES:


Install an NVIDIA GPU in your server, if you have not already (Mistika Workflows on Linux cannot work without one). In general it can be a small model, but if it is old, check that modern drivers still support it.


Install Ubuntu Server: for this example, let's assume you have already installed it using the default settings, created a user named "mistika", and logged in via ssh. (All commands below that appear without sudo are executed as the user mistika.)


CONFIGURATION INSTRUCTIONS:


1. Install the NVIDIA driver (if not already installed):


sudo ubuntu-drivers install

sudo reboot


Check that the GPU is recognized:


nvidia-smi


(it should show your GPU model)


2. Install Docker and give your user access to it


curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

sudo usermod -aG docker $USER

sudo reboot



3. Install nvidia-container-toolkit for docker


(Note: this is based on the official NVIDIA instructions here; you may want to follow that link to get newer versions.)

 

sudo apt-get update && sudo apt-get install -y --no-install-recommends ca-certificates curl gnupg2


curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' |  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list


sudo apt-get update

export NVIDIA_CONTAINER_TOOLKIT_VERSION=1.18.2-1


sudo apt-get install -y nvidia-container-toolkit=${NVIDIA_CONTAINER_TOOLKIT_VERSION} nvidia-container-toolkit-base=${NVIDIA_CONTAINER_TOOLKIT_VERSION} libnvidia-container-tools=${NVIDIA_CONTAINER_TOOLKIT_VERSION} libnvidia-container1=${NVIDIA_CONTAINER_TOOLKIT_VERSION}


sudo nvidia-ctk runtime configure --runtime=docker 

sudo systemctl restart docker 

sudo nvidia-ctk runtime configure --runtime=containerd 

sudo systemctl restart containerd



4. Install Mistika Workflows docker image


Create a text file named compose.yml, and copy/paste the compose.yml content provided in the mistikaworkflows Docker Hub repository: https://hub.docker.com/r/sgomistika/mistikaworkflows


Note that the default compose.yml file only configures a few internal volumes (these make changes to the configuration files persistent). In this file you will also need to change some settings, mainly regarding licensing and access to your media volumes (please follow the instructions in the repository overview).
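As an orientation only, such a compose.yml typically has the following shape. This is a hedged sketch, not the real file: the media path /mnt/media and the GPU reservation section are illustrative assumptions, so always start from the actual compose.yml published in the repository:

```yaml
# Hypothetical sketch only: start from the real compose.yml on Docker Hub.
# The media bind mount (/mnt/media) and GPU reservation are examples.
services:
  mistikaworkflows:
    image: sgomistika/mistikaworkflows
    container_name: MistikaWorkflows
    volumes:
      - /mnt/media:/mnt/media     # give the container access to your media
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia      # expose the NVIDIA GPU to the container
              count: all
              capabilities: [gpu]
```

The deploy/resources/reservations/devices section is the standard Docker Compose way to request GPU access through the nvidia-container-toolkit installed in step 3.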


Once you have configured compose.yml, we will create its containers and volumes, which also downloads the Mistika images:


docker compose up -d


The first time you execute this it will take a long while downloading the docker image, but subsequent executions should only take an instant.


That action will create two containers, sgoLicenseServer and MistikaWorkflows. The license server process (sgoLicenseServer.bin) is started automatically in the first container, but the workflows application is not started on its own just yet (we will do that in a later step).


If the local host is going to be the license server, then go to point 3 of this article about the docker license server. Once you have activated your license, you can continue with the steps below.



5. Install X11 software 


sudo apt-get update

sudo apt install -y mesa-utils xorg openbox x11vnc dolphin breeze-icon-theme

sudo nvidia-xconfig --allow-empty-initial-configuration


A headless server does not have a monitor, so we have to create a virtual one. Edit the xorg.conf file:


sudo nano /etc/X11/xorg.conf


In the Device section, add the option "ConnectedMonitor" "DFP-0", so it should look like this (keep the BusID value generated for your own system):


Section "Device"

Identifier "Device0"
Driver "nvidia"
Option "ConnectedMonitor" "DFP-0"
VendorName "NVIDIA Corporation"
BusID "PCI:1:0:0"

EndSection


We need to start the Xorg and window manager software that we have just installed (this is needed in order to obtain an OpenGL context). We will execute it without listening on any network ports (for security reasons):


sudo -b Xorg -nolisten tcp 

export DISPLAY=:0

sudo -b openbox


(Later we will see how to automate this at startup, but first we are going to test that it works.)



6. Rendering a first example workflow in the container


At this point we should be ready to execute workflow renders via the CLI, so we are going to render a small example workflow that comes with the docker image (TestWorkflow-denoise.mwf), which uses the NVIDIA GPU to denoise an example clip. This makes it a good test to confirm that our GPU configuration is ready.


Let's check that we have the sample clip (testclip_h264.mp4) inside the container's internal volume (/opt/sgo/sgodata ...):


docker exec MistikaWorkflows ls  /opt/sgo/sgodata/Media
testclip_h264.mp4


Let's render the workflow:


docker exec MistikaWorkflows workflows -r  /opt/sgo/sgodata/Projects/TestWorkflow-denoise.mwf


Finally let's check that the rendered clip (testclip_h264_denoised.mp4) has appeared in the same folder as the original:


docker exec MistikaWorkflows ls  /opt/sgo/sgodata/Media

testclip_h264.mp4

testclip_h264_denoised.mp4


That's it. If you want to get the rendered file to check it, you can copy it from the container volume elsewhere:


 docker cp MistikaWorkflows:/opt/sgo/sgodata/Media/testclip_h264_denoised.mp4 /tmp/.



7. Automate startup (xorg & openbox services)


Once tested, we can automate Xorg & openbox startup at boot time by creating a couple of systemd services:


File:  /etc/systemd/system/xorg.service 


Content:


[Unit]
Description=Start Xorg Server
After=network.target

After=systemd-modules-load.service
After=sys-bus-pci-drivers-nvidia.device
Requires=sys-bus-pci-drivers-nvidia.device


[Service]
ExecStart=/usr/bin/Xorg :0 -nolisten tcp -logfile /var/log/Xorg.0.log
User=root

[Install]
WantedBy=multi-user.target


File:  /etc/systemd/system/openbox.service 


Content:


[Unit]
Description=Start openbox
After=xorg.service


[Service]

Environment="DISPLAY=:0"

ExecStartPre=/usr/bin/sleep 6
ExecStart=/usr/bin/openbox
User=root

[Install]
WantedBy=multi-user.target


Save both files and then execute these commands:


sudo systemctl daemon-reload

sudo systemctl enable xorg

sudo systemctl enable openbox

sudo reboot


After the reboot, repeat the render test as we did before. If it works, the configuration is finished.



8. The workflows script

For further automation, we are going to create a "workflows" script emulating the native workflows syntax (as if we did not have docker). Create a text file named "workflows" with this content:


File: workflows


Content:

#!/bin/bash
xhost +
docker exec MistikaWorkflows workflows "$@"


Install the new script:

chmod a+x workflows

sudo cp -f workflows /usr/bin/.


Now you can execute workflows just as you would on a native installation without docker, even after reboots.


9. Typical use cases for the workflows CLI


Use case 1: To render a mwf file:


It should be trivial after the steps above. Example:


workflows -r  /opt/sgo/sgodata/Projects/TestWorkflow-denoise.mwf
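Beyond a single file, a common unattended-processing pattern is to render every .mwf file found in a drop folder. The render_all helper below is a hypothetical sketch: it assumes the "workflows" wrapper script from step 8 is on the PATH, and that the folder is reachable at the same path on the host and inside the container (for example a bind-mounted media volume):

```shell
#!/bin/bash
# Hypothetical helper: render every .mwf workflow found in a given folder.
# Assumes the "workflows" wrapper script from step 8 is installed on the PATH,
# and that the folder path is identical on the host and in the container.
render_all() {
  local dir=${1:?usage: render_all <folder>}
  local mwf
  for mwf in "$dir"/*.mwf; do
    [ -e "$mwf" ] || continue        # empty folder: the glob stays literal
    echo "Rendering $mwf"
    workflows -r "$mwf" || echo "Render failed: $mwf" >&2
  done
}
```

For example, render_all /mnt/media/Projects would render every workflow in that (hypothetical) bind-mounted project folder, one after another.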



Use case 2:  Launch workflows as a server waiting for REST-API commands (or acting as a Runner server):


First, check that the server mode is working. Just execute workflows in quiet mode, as we do not need the GUI:


workflows -q  


And try to connect from any other client. If it does not work, you may need to open firewall ports.


Once it works, create a daemon to execute it at startup. In our previous instructions we used openbox as the window manager, so we have to wait for it. In this example we will run it as the "mistika" user (change it to suit your needs):


File:  /etc/systemd/system/workflows.service 


Content:


[Unit]
Description=Start mistika workflows
After=openbox.service
Requires=sys-bus-pci-drivers-nvidia.device

[Service]
ExecStart=/usr/bin/docker exec MistikaWorkflows workflows -q
User=mistika

[Install]
WantedBy=multi-user.target

And after saving it:


sudo systemctl daemon-reload

sudo systemctl enable workflows

sudo reboot


Troubleshooting:


If the xorg service that we have created fails to start, it could be that the nvidia device name is different in your particular configuration. You can query the exact name with this command:


systemctl list-units --type=device | grep nvidia


Once you have Xorg working: given that this particular case uses a headless server, you cannot see the workflows GUI or potential error dialogs directly. If you want to start a remote desktop session temporarily (to run a workflows session with the GUI), this article explains how to do it via x11vnc: https://sgosupport.freshdesk.com/a/solutions/articles/1000333627