The optimal hardware for Mistika Workflows depends mainly on the input and output media (in particular, codecs and resolution), so it is not possible to provide a single configuration that is optimal for all cases.

 

The following points will help you select optimal hardware for specific projects. This is especially important when using multiple systems, where it is critical to fine-tune component selection before replicating the hardware configuration.


First of all, note that at any moment in time everything will run at the speed of the slowest component for that task, so the key is to achieve a good balance: improve the slowest components, or at least avoid paying for high-end components that will be bottlenecked by others anyway. There are two main aspects to consider:


- Computer specifications


- Storage and connectivity speed.


Now we will go into more detail about both aspects.



A) Computer specifications


A.1) Generic recommendations


GPU: A first requirement for Mistika Workflows is an NVIDIA GPU; other GPUs are not yet supported. We recommend the Pascal series and later generations (the more modern the better, either GeForce or professional models depending on your needs). The GPU will take care of:


- Hardware decoding of R3D and ARRI files


- Hardware encoding of supported formats: H264 & H265 (encoding only; in Mistika, H264/H265 decoding is done by the CPU, in parallel with the GPU encoding of the previous frames)


- Image processing (Rescaling, Color transformations, Display filters & effects, compositing, optical flow, stitching, etc)


CPU:  AMD or Intel (all models supported). The CPU will take care of:


- Image decoding and encoding (except for the NVIDIA hardware codecs from the previous point). The most CPU-intensive formats are compressed EXR (many variants), J2K, and XAVC; those formats can really take advantage of high-end CPUs. ProRes and most other codecs are also CPU intensive, but less so. In any case, always look for CPUs with the best "all-core" frequency, as that is better than having many slow cores.


- Metadata management. This can be a lot of work when using enumerated sequences and large collections of files. For this purpose "single-core" frequency is also important, not only all-core.


RAM: In general Mistika Workflows does not require much RAM, but note that storage access may benefit from extra RAM for local caching, depending on the filesystem (or NAS settings) in use.


A.2) Taking practical benchmarks for a reference workflow


For this point we recommend doing your initial benchmarks using local NVMe drives only (for both input and output media), to avoid the computer benchmarks being bottlenecked by storage speed or connectivity bandwidth, which will be studied in point B.


Run your reference workflows at full load and open Task Manager (or Activity Monitor) to observe the behaviour:


- Identify which component is closer to 100% usage most of the time, CPU or GPU, as that will tell you which component is more important to invest in. (If neither of them shows high usage, then your bottleneck is elsewhere, probably disk or connectivity.)


Note that "latency" is also a common speed limiter, but it does not show up as high activity on any component. (We will talk more about this in point B.)


- Check RAM usage in the worst-case scenario. Make sure that RAM usage does not exceed physical memory, as page swapping produces a significant impact on performance. On the contrary, if you see a lot of free memory all the time, you can reduce costs on this component.


- Check GPU usage of "shared memory". The "shared" memory should stay very low all the time, as using it (falling back to RAM when GPU graphics memory is exhausted) also produces a significant impact on performance.


- Check GPU usage of "dedicated memory". If it never goes high, you can reduce costs by using a GPU with less memory.


- In the Task Manager, find the fastest disk speed reached while rendering from/to your NVMe drives (for the most significant processes in terms of rendering time), as that is the target storage and network speed that you will want to get from the final storage in the next point.

 

B) Storage and connectivity


Once you have the necessary reference speed (from point A), it is time to compare the render times obtained in point A (on local NVMe drives) with those from your intended storage (if you plan to use slower local disks or NAS/SAN storage). That will tell you how much speed is lost due to connectivity and storage limitations.


Unlike in point A, what we need to identify here is not the slowest process in the workflow but the fastest one, that is, which nodes are reading or writing the most frames per second at render time, as we will have to feed them from the network fast enough.


When using multiple systems, a NAS at 10Gb Ethernet speeds is generally the sweet spot in terms of cost, as Mistika can easily use the full 10Gb nominal bandwidth. But if you still see a big render-time difference compared with local storage, a lot of computer resources will be wasted. Either reduce computer specs to save costs (and use more computers if needed), or improve network and storage performance (sometimes a SAN can be worth it; ask SGO support about the options).


Alternatively, you can also calculate the storage speed needs for the most significant node in your workflow with this formula: 


For storage drives you need this:


Max fps render speed (obtained from point A) * ( Filesize per frame of all input layers + Filesize per frame of output media )


For network connectivity  (full duplex):


Max fps render speed (obtained from point A) * Max ( Filesize per frame of all input layers , Filesize per frame of output media )
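As a quick sanity check, the two formulas above can be turned into a small Python calculation. The frame sizes below are purely illustrative, not measured values, and the full-duplex formula is read here as "the larger of the read and write directions", since a full-duplex link carries each direction on a separate channel:

```python
def storage_bandwidth_mb_s(fps, input_mb_per_frame, output_mb_per_frame):
    # A single storage volume must sustain reads of every input layer
    # plus the write of the output media for each rendered frame.
    return fps * (sum(input_mb_per_frame) + output_mb_per_frame)

def network_bandwidth_mb_s(fps, input_mb_per_frame, output_mb_per_frame):
    # On a full-duplex link, reads and writes travel in opposite
    # directions, so the limit is whichever direction moves more data.
    return fps * max(sum(input_mb_per_frame), output_mb_per_frame)

# Illustrative example: one 48 MB/frame input layer, a 12 MB/frame
# output, rendered at 24 fps.
print(storage_bandwidth_mb_s(24, [48], 12))   # 1440 MB/s
print(network_bandwidth_mb_s(24, [48], 12))   # 1152 MB/s
```

For reference, a 10Gb Ethernet link tops out around 1.25 GB/s nominal per direction, so the illustrative figures above would already push it close to saturation.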


Finally, there is another key aspect to consider: "latency". In general, network latency is much higher than when using local storage (especially SSD storage), and it does not matter if you use 1Gb, 10Gb, or 100Gb Ethernet (they all have similar latency). Latency does not matter at all when accessing relatively big media files (over a hundred megabytes per file or so), but it can be a serious problem if many thousands of much smaller files are involved in your workflow. In those cases latency can really become the main bottleneck (most of the render time is wasted requesting a small amount of data from a small file on the server and waiting for it to answer). Sometimes this can be improved by creating local caches, for example by using Mistika Workflows to prefetch all those little files locally and then deleting them automatically after workflow completion, or by using other 3rd party tools with caching capabilities.
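The prefetching idea can be sketched as follows. This is a generic illustration, not the Mistika Workflows prefetch node; the directory and file pattern are placeholders for your own media. The point is to pay the high-latency cost once per file up front, so the render then reads everything at local-disk latency:

```python
import shutil
import tempfile
from pathlib import Path

def prefetch_to_local_cache(remote_dir, pattern="*.dpx"):
    """Copy many small files from high-latency storage into a local cache.

    Returns the cache directory; render from it, then remove it with
    shutil.rmtree() once the workflow completes.
    """
    cache = Path(tempfile.mkdtemp(prefix="wf_cache_"))
    for src in sorted(Path(remote_dir).glob(pattern)):
        shutil.copy2(src, cache / src.name)
    return cache
```

Deleting the cache after workflow completion mirrors the automatic cleanup described above and keeps local disks from filling up across runs.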