SGO Mistika 8.6 Totem Configuration.
Totem Configuration Procedure.
Totem is a cluster render technology that allows several render nodes to collaborate on rendering the same clip.
It is only available on Linux platforms.
The main purpose is to control all available systems from a hero suite during client-attended sessions, rendering one clip as fast as possible for real-time playback purposes by using all the available resources at the same time. Totem's special design also allows the clip to be played back while it is still being rendered.
Please note that this is different from Mistika BatchManager, which is a render dispatcher:
- BatchManager takes independent render jobs from render queues and sends each one to the next available node. Unlike Totem, it supports all render formats and is optimised for the best global render speed (the total render time for all clips). To achieve this, each render job is sent to only one node. BatchManager works as an unattended background service.
- Totem, on the other hand, only supports the Mistika .js format and a few other formats, but it can use all nodes to collaborate on the same clip. Totem is launched on user request from the Mistika interface:
- Render->Totem (Background mode uses all GPUs from all systems, while Foreground mode excludes the GPU currently used by your Mistika session).
- Edit->PlaybackCache->RenderWithTotem (provides options for all combinations).
Note: A "totem render node" is not necessarily a computer, although that is the normal case for single-GPU systems. If one computer has more than one GPU, the Totem interface allows each GPU to be configured as a separate render node (recommended).
This document focuses on the Totem configuration, but it is recommended to configure BatchManager first. BatchManager will help you configure all the basic network settings (/etc/hosts, ssh service, path shares...), which makes it much easier to configure Totem later.
NOTE: Totem can accelerate render speed in a linear manner (for example, using 5 render nodes can render a clip up to 5 times faster), but only if the storage can provide enough bandwidth for all nodes. Storage access is the most important point when planning a Totem infrastructure. For example, rendering the Playback Cache with Totem can be much faster with a small SSD storage dedicated to the Playback Cache.
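As a rough sanity check, the aggregate bandwidth the storage must sustain can be estimated with a simple multiplication. All figures below are assumptions for illustration (frame size and per-node render speed depend entirely on your media and hardware), not Mistika values:

```shell
# rough storage bandwidth estimate for a Totem setup
# (all figures are hypothetical; adjust them to your media)
nodes=5            # number of totem render nodes
mb_per_frame=12    # approximate MB read per frame (depends on codec/resolution)
fps_per_node=10    # frames per second each node can render
bw=$(( nodes * mb_per_frame * fps_per_node ))
echo "storage must sustain roughly ${bw} MB/s of reads"
```

If the shared storage cannot sustain a figure in that range, adding more nodes will not scale render speed linearly.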
In general, mConfig will configure most aspects automatically, so we recommend simply trying it and seeing what happens. Then, if it does not work or if you need a customised configuration, check the following aspects:
Check the Totem license.
Since Mistika 8.6, the Totem license is installed as a separate line in the same license file as Mistika (/var/flexlm/sgoLicenseV5.dat), clearly identified with the "TOTEM" feature. If you do not have this line, it is the first thing to solve.
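A quick way to check is to grep the license file for the feature line. The sketch below runs against a hypothetical sample so it is self-contained (the exact FEATURE line format shown is illustrative, not a real license); on a real system, grep the actual file as shown in the last comment:

```shell
# hypothetical excerpt of a license file (line format is illustrative)
sample='FEATURE MISTIKA sgo ...
FEATURE TOTEM sgo ...'

if printf '%s\n' "$sample" | grep -q "TOTEM"; then
  echo "TOTEM feature present"
else
  echo "TOTEM feature missing"
fi
# on a real system: grep TOTEM /var/flexlm/sgoLicenseV5.dat
```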
Note: Totem licenses are floating. If you do not have a license for a render node, you can tell it to get one from the license server of another computer, as follows:
For example, if the IP of the computer serving the licenses is 192.168.1.1, then:
For render nodes with CentOS 7.2 or later, put this line in the .bashrc file:
export SGO_ELMHOST=192.168.1.1
For render nodes with Suse11 sp3, put this line in the .cshrc file:
setenv SGO_ELMHOST 192.168.1.1
Note: No action is required on the license server. All these licenses are floating licenses by default.
Check the hostname.
The hostname cannot have a domain in it (no dot "." characters in the hostname). Execute "hostname" in a console to see the current hostname and check this.
mistika-1 is a correct hostname
mistika-1.sgo.es is not a correct hostname, because it includes the domain (sgo.es) as part of the hostname.
Instead, if you need to resolve hostnames like this, let the hostname be mistika-1 and put both names on the same line in the /etc/hosts file:
192.168.1.10 mistika-1 mistika-1.sgo.es
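The no-domain rule above can be sketched as a small shell check (the `check_hostname` helper is our own name, not part of Mistika):

```shell
# returns success only if the given hostname has no domain part (no dots)
check_hostname() {
  case "$1" in
    *.*) return 1 ;;   # contains a dot -> domain included, not valid
    *)   return 0 ;;
  esac
}

check_hostname "mistika-1"        && echo "mistika-1: valid"
check_hostname "mistika-1.sgo.es" || echo "mistika-1.sgo.es: invalid (contains the domain)"
# to test the real machine: check_hostname "$(hostname)"
```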
Check Hosts file.
On all computers (both Mistika and Totem systems), edit the /etc/hosts file as follows:
Check the local hostname and local IP with the commands "hostname" and "hostname -i".
Edit the /etc/hosts file to ensure there is a line matching the IP with the hostname. For example, if the hostname is mistika1 and the IP is 192.168.1.10, all the systems will need to have this line in /etc/hosts:
192.168.1.10 mistika1
And make sure that mistika1 does not appear in any other line.
Add a new line for each remote machine that will be used as a Mistika or Totem node.
NOTE: The IP assignment should be fixed, not obtained by DHCP. This can be checked in Suse11 with yast (Network Devices, Network Settings), and with the network manager (network icon in the Linux bar) in the case of CentOS 7.x.
These steps have to be done on all systems, with exactly the same lines in /etc/hosts. (Each computer's hostname must appear with an identical name in the /etc/hosts of the other computers.)
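For example, with one Mistika suite and two render nodes, every machine would carry the same fragment (the IPs and the render node hostnames here are assumptions for illustration), and each hostname should appear on exactly one line:

```shell
# identical /etc/hosts fragment for every machine (example addresses)
hosts='192.168.1.10 mistika1
192.168.1.11 render1
192.168.1.12 render2'

# each hostname must appear on exactly one line
n=$(printf '%s\n' "$hosts" | grep -cw "mistika1")
[ "$n" -eq 1 ] && echo "mistika1 appears exactly once: OK"
```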
Check that all computers can "ping" the others. In the previous example, this command:
ping mistika1
must work from all computers (including mistika1).
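The check can be looped over all the example hostnames from any machine (`check_ping` is our own helper; the hostnames are from the example above, and the flags are Linux iputils ping flags):

```shell
# ping each node once with a 1-second timeout
check_ping() { ping -c 1 -W 1 "$1" >/dev/null 2>&1; }

for host in mistika1 render1 render2; do
  if check_ping "$host"; then
    echo "$host: reachable"
  else
    echo "$host: NOT reachable - check /etc/hosts and the network"
  fi
done
```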
Once the network is working at "ping" level and all the hostnames are resolved, you need to make sure that the "ssh" service allows the main Mistika user to connect to all the other computers, in order to log in on them and launch remote commands.
For example, if you want to use Totem in a Mistika session running on the mistika1 system, and you want to use the render1 and render2 nodes to render, these commands need to work without error (and without asking for a password) when executed from mistika1:
ssh render1 hostname
ssh render2 hostname
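These connectivity checks can be scripted. `BatchMode=yes` makes ssh fail immediately instead of prompting for a password, which is exactly the unattended behaviour Totem needs (the `check_ssh` helper name and the render node hostnames are ours, taken from the example):

```shell
# verify passwordless ssh to each render node
check_ssh() { ssh -o BatchMode=yes -o ConnectTimeout=3 "$1" hostname >/dev/null 2>&1; }

for host in render1 render2; do
  if check_ssh "$host"; then
    echo "$host: passwordless ssh OK"
  else
    echo "$host: ssh FAILED - keys not exchanged yet?"
  fi
done
```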
If they don't, the easiest way to fix it is to configure BatchManager on all the systems, which will create ssh keys for all of them and pass the keys between the systems, so they can send render commands to each other.
To do that, open the BatchManager tab in mConfig and activate the Use Batch Manager checkbox. This triggers an automatic process to enable communication between the render machines. You need to do it on all computers, indicating the same folder as the render queues root folder. This prepares all the computers to act as render nodes and creates ssh keys for them.
Once this is finished, open mConfig a second time on all systems. In this second round, mConfig will detect that new render nodes are available (when opening mConfig, a dialog will pop up indicating the new render nodes that are found), and it will get the ssh keys from them.
Note: For more details check BatchManager or ssh documentation.
Totem settings in mConfig.
Open the TOTEM tab and activate the Use Totem checkbox.
Click on Manage Totem Nodes
Delete any lines with errors, in case they exist. (On a new system, these are typically example lines or factory tests.)
Add the new nodes by typing the hostname, choosing the GPU unit and the parallel Instances, activating Use, and pressing Add.
Note: If a system has two GPUs, add one totem node for each one, both with the same hostname.
The system will check the operability of that node. In case of errors, a See log button allows you to analyse what happened.
Note: There are two parameters affecting render performance.
- ClusterRenderUnits is the number of frames ordered from each node at a time (default is 4). Grouping a few frames together helps to optimise disk IO performance and optical flow effects. But setting it too high is not efficient, as each node may receive a different load (especially with small clips), and because you need more time to start a safe playback of the clip (Totem permits playback while the clip is still rendering).
- Instances is the number of independent render nodes created on the computer. In a sense, it is like having several virtual render nodes on the same computer. Using a few of them may help on systems with many CPU cores if the codec used by the source images is not well parallelised. However, parallel instances need to share the GPU at the same time, so this value should not be increased without a reason.
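To see why a very high ClusterRenderUnits hurts load balancing, consider how a short clip splits into chunks. This is illustrative arithmetic only, not Mistika's actual scheduler:

```shell
# a 100-frame clip with ClusterRenderUnits=4 yields 25 chunks, which is
# plenty to keep 5 nodes busy; with units=50 there would be only 2 chunks,
# leaving most nodes idle
frames=100
units=4
chunks=$(( (frames + units - 1) / units ))
echo "$chunks chunks of up to $units frames to distribute among the nodes"
```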
Using Totem in Mistika.
In the render panel, choose Mistika .js as the render format
The Totem tab on the left side of the Output panel shows the list of nodes and their status (one per instance). They should appear as "Ready".
To use the available Totem nodes, activate the Totem button on the right side of the Output panel. While it is active, subsequent renders will try to use all the active Totem nodes in this way:
- When doing a foreground render, all the Totem nodes will be used, including the local node where Mistika is running.
- When doing a background render, all nodes are used except the local node (the one where mistika is running)
In addition to render processes launched with the render panel, Totem can also be used for rendering PlaybackCache files (using the Totem submenu options under the Edit->PlaybackCache menu). However, it is recommended to test Totem with the render panel before trying to make it work with the playback cache, as it is easier and more interactive to see what is going on.