Note: This document only covers questions that do not have a dedicated section. For example, questions about licenses are not included here; they are covered in the license articles.

Recent questions from new users:

QUESTION:  Why is my render slow?

Answer:  MistikaVR needs a lot of system resources, and it can take a long time to render at high resolutions (rendering at a few frames per second is normal in complex cases). But if you think your render is abnormally slow, this is a recommended approach to study the problem (at least to start with):

Simplifying a bit, rendering involves 3 processes happening at the same time: decoding the camera rushes, stitching them, and encoding the render format. So the first thing you need to know is the contribution of each one to the render time.

First, reboot the system to start from a clean situation. Then:

1 - Deactivate OF (Optical Flow) and measure how much time it takes to just play back the scene. Most of that time is reading the cameras and decoding their particular format, as the actual stitching time without OF is normally irrelevant compared with decoding the camera images.

2 - Now activate OF and repeat the playback. The time difference with step 1 is the OF contribution to the total time, and it depends only on your GPU.

3 - Now do the render to your desired format (ideally try a few formats). The time difference with step 2 is the time contribution of encoding and writing to disk.

Also, if you compare each of the 3 processes in the task manager (or similar performance monitoring tools) you will know what resources are taken by each of those processes. 

Please note that to optimize the render, Mistika will always try to do those three processes at the same time (with each one working on different frames at the same time). But logically that will only work well if the system has enough resources for that (enough CPU cores, RAM, GPU memory, and disk speed).

Looking at those numbers, you should now have a clear idea of which component is slowing down your system, and they will also help to diagnose abnormal problems. For example:

- In normal cases the render should only take a bit more time than step 2 (OF playback). If it takes much more time then it usually means that you need more RAM or more CPU cores (check that in the task manager).

- If you find that the bottleneck is in step 2, then it means your GPU is not powerful enough. Try to upgrade it if possible. It may also help to activate File->Options->Minimize GPU RAM usage.

- If you find no clear bottleneck (no single resource at 75% or more), then you may need to optimize some settings manually. Normally Mistika will find optimal settings automatically, rendering as fast as possible without compromising stability, but in some cases it can be too conservative or too aggressive. Advanced manual optimizations are described in this document.
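
The three measurements above can be turned into per-stage contributions by simple subtraction. A minimal sketch, using hypothetical timings (the variable names and numbers are illustrative, not MistikaVR output):

```python
# Hypothetical timings for the same clip, in seconds (illustrative only).
t_step1 = 60.0   # playback with OF deactivated: mostly decoding
t_step2 = 95.0   # playback with OF activated: decoding + optical flow
t_step3 = 110.0  # full render: decoding + optical flow + encoding/writing

decode = t_step1
optical_flow = t_step2 - t_step1
encode = t_step3 - t_step2

for name, t in [("decode", decode), ("optical flow", optical_flow),
                ("encode + write", encode)]:
    print(f"{name}: {t:.1f} s ({100 * t / t_step3:.0f}% of total)")
```

In this made-up case the render is only 15 seconds slower than the OF playback, which would be normal; a much larger difference would point at the encoder, the destination disk, or a shortage of RAM/CPU cores.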

QUESTION:  Moving the edge points does not seem to affect the image.

Answer:  This can happen if you have accidentally activated the stereo view for one of the eyes but your camera is 2D. Check your stereo view setting and select No Stereo if you are not working in Stereo 3D (otherwise select Left Eye or Right Eye accordingly).

QUESTION:  The rendered clip has no audio.

Answer:  You need to activate the speaker icon of one of the clips (in the clip stack at the left). Some of the audio tracks can be muted by default to avoid undesired audio overlaps. But if all the clips containing audio are muted (or the active ones have silent tracks) then you will have no audio.

Alternatively, after the stitch you can load another clip containing the audio you want, for example to substitute the audio from the cameras. First disable the input camera of the new clip, as in this case you only want its audio track. Mute the audio of all the other clips, and activate the speaker icon of the new clip.

As an example, a particular case is the Insta Pro 2 camera, which saves the audio in a separate file called origin_6_lrv.mp4. If you need to export high-resolution files with this audio track you can proceed as follows:

1. Import the high-resolution files.
2. Do the stitching jobs.
3. Once the stitching is ready, import the origin_6_lrv.mp4 file as camera number 7.
4. Select in the clip stack the origin_6_lrv.mp4 file.
5. Go to the 'Input camera' control tab and click on 'enable' to turn off the camera. That camera will not be used for images, but its audio speaker icon can be selected in the clip stack for rendering purposes.

QUESTION:  Why is there no preset for the Insta Titan Pro camera rig?

Answer:  There is no need for it; this camera provides a pro.prj file with all the calibration data. To use it, load your camera images and then drag & drop the pro.prj file into MistikaVR.

QUESTION:  The rendered files from MistikaVR are not working when I load them in other applications (Premiere...)

Answer:  When selecting the render codec in MistikaVR use the default settings, as they are set for maximum compatibility and tested in most common applications. If it works that way, then you can complicate things later.

QUESTION:  How to select the project resolution? /  How to create a new render resolution?

Answer:  It is explained in this article

QUESTION:  How to work with 4K+ GUI monitors

Answer:  This is already automated since Mistika VR 8.8.2, which provides dedicated scaling menus to configure your monitors in the preferences menu.

QUESTION:  How to sync the clips by using their audio tracks?

Answer: The Align by audio tool has these parameters to adjust (in case it does not work well on the first attempt):

- Search length is the length of the sound sample to compare, centered at the current frame position. 

- Maximum offset is how far the cameras may be out of sync, used to avoid false matches at unlikely large offsets. If you turn on the cameras one by one, think how many seconds they may be out of sync and fill in that number as the maximum. Set the current frame at a zone where there is some identifiable noise (a clap, people talking), and run the match.

- Sample size - We recommend leaving it at the default. The audio needs to be split into small chunks ("windows") that can be matched, and this is the size of those chunks. Smaller means more precision on transients, but lower frequencies may get ignored with very small windows. Keep it in the 2 - 4 range; these values work best. You can probably ignore this field: tweaking it usually does not change the result or makes it worse, and it only helps in very rare cases.
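
The idea behind the tool can be sketched as a windowed cross-correlation: take a sound sample around the current frame and search for the best-matching shift within Maximum offset. A toy sketch (all names and numbers are illustrative, not MistikaVR internals):

```python
import numpy as np

def estimate_offset(ref, other, max_offset):
    """Return the shift of `other` (in samples) that best matches `ref`,
    searching only within +/- max_offset to avoid false distant matches."""
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_offset, max_offset + 1):
        score = float(np.dot(ref, np.roll(other, lag)))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

rng = np.random.default_rng(0)
ref = rng.standard_normal(500)   # stand-in for the reference audio window
other = np.roll(ref, 130)        # same audio, 130 samples late
print(estimate_offset(ref, other, max_offset=200))  # prints -130
```

This also shows why Maximum offset matters: a larger search range means more chances of a spurious high score far from the true offset.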

QUESTION:  What does the "Can not open video codec" error mean when rendering?

Answer: It means that Mistika has sent the rendered images to a third party encoder and it returned an error. The exact cause is not defined, but normally this means one of these:

- An impossible setting, like too high a bitrate for the selected codec (the typical absolute maximum is 135000, except for lossless codecs), or too high a resolution for the selected codec (4K for h264, 8K for h265, also depending on the GPU generation). In the case of h264/h265 (HEVC), the NVidia supported formats for each GPU model are documented here:

- In the case of NVidia h264/h265 (HEVC) hardware codecs, please note that the hardware encoder is a limited resource that can not be multi-tasked (most GPUs only have one encoder). So this error can also happen if another application that can use the NVidia encoder is open at the same time.

- Another possibility is that you have defined an invalid destination path (non-existent, no write permissions, disk already full, or other similar reasons).
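
The first cause can be checked before rendering with a quick sanity test. A sketch using the rough limits mentioned above (the numbers and function names are illustrative, not an authoritative codec table):

```python
# Rough limits taken from this answer; real limits also depend on the
# codec build and on the GPU generation.
LIMITS = {
    "h264": {"max_width": 4096, "max_bitrate": 135000},
    "h265": {"max_width": 8192, "max_bitrate": 135000},
}

def check_render_settings(codec, width, bitrate):
    """Return a list of likely 'Can not open video codec' causes."""
    limits = LIMITS.get(codec)
    if limits is None:
        return [f"unknown codec: {codec}"]
    problems = []
    if width > limits["max_width"]:
        problems.append(f"{width} px wide exceeds the {codec} limit "
                        f"of {limits['max_width']} px")
    if bitrate > limits["max_bitrate"]:
        problems.append(f"bitrate {bitrate} exceeds {limits['max_bitrate']}")
    return problems

print(check_render_settings("h264", 7680, 100000))  # an 8K-wide h264 render
```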

QUESTION: Why does my render in the Mistika VR Evaluation version have a watermark? 

Answer:  Starting from version 10.8.3, the Mistika VR Evaluation version includes a watermark on rendered content. This watermark is a feature limitation specific to the Evaluation version, allowing users to explore the software's capabilities before making a purchase. To access watermark-free renders, consider upgrading to the full version of Mistika VR.

If you have acquired and activated a paid edition of Mistika VR, but your renders still display a watermark, it indicates that you are still using the previous activation code from the evaluation period. To resolve this, please deactivate the old code and activate the new one that you received via email upon completing your purchase. For additional information, please refer to the licensing articles.


Note: The other difference between evaluations and purchased subscriptions is that only the latter permit using the new features of "Beta" versions still in development (open beta program).

QUESTION:  How to animate parameters (edge points and others)

Answer:  Since version 8.7.7 most parameters can be animated. To control the animation, open the contextual menu for the parameter by right-clicking on it. You will see these options:

  • Default Value: resets the parameter. If it was animated, animation will be disabled.

  • Add Key Frame: A keyframe will be added at the current frame position. Animation is enabled for the parameter if it wasn't enabled already. From now on, any change to this parameter will automatically insert a new keyframe at the current time (if there wasn't a keyframe there already).

  • Remove Key Frame: The keyframe at the current time will be removed. If it was the last keyframe left, then the parameter will become non-animated, going back to the default value.

  • Remove Animation: Animation will be disabled and all keyframes removed. But in this case the current value of the parameter will be kept as the non-animated value.
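
The behaviour of these commands can be modelled in a few lines of code. A sketch of the semantics described above (an illustrative model with linear interpolation, not MistikaVR's actual implementation):

```python
import bisect

class AnimatedParam:
    def __init__(self, default):
        self.default = default
        self.static = default   # value used while not animated
        self.keys = {}          # frame -> keyframed value

    def is_animated(self):
        return bool(self.keys)

    def add_key(self, frame, value):
        self.keys[frame] = value            # enables animation if needed

    def remove_key(self, frame):
        self.keys.pop(frame, None)
        if not self.keys:                   # last keyframe removed:
            self.static = self.default      # back to default, non-animated

    def remove_animation(self, frame):
        self.static = self.value(frame)     # keep current value as static
        self.keys.clear()

    def value(self, frame):
        if not self.keys:
            return self.static
        frames = sorted(self.keys)
        if frame <= frames[0]:
            return self.keys[frames[0]]
        if frame >= frames[-1]:
            return self.keys[frames[-1]]
        i = bisect.bisect_left(frames, frame)
        f0, f1 = frames[i - 1], frames[i]
        t = (frame - f0) / (f1 - f0)        # interpolate between keyframes
        return self.keys[f0] * (1 - t) + self.keys[f1] * t
```

For example, keying 0 at frame 0 and 10 at frame 10 yields an interpolated 5 at frame 5; using Remove Animation at frame 5 would then keep 5 as the static, non-animated value.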

The numerical value of the parameters is color coded:

  • Gray number means the default, unmodified value.

  • White number means the current value was set by the user, but it is not animated. Please note that if it is set manually to the default value it will still be white, as it was set by the user. To completely remove the user actions use the “Default Value” command.

  • Green number is a keyframe value set by the user.

  • Light Blue number is an interpolated value between keyframes.

In the time bar, the keyframes will be shown for the selected parameters as green marks, while the animated segments will be drawn in light blue, matching the same color hints.

QUESTION:  When I render a movie in MistikaVR and watch it in another application it is not shown as a 360 video

Answer:  You may need to inject the 360 spatial metadata in the movie file. Just select 'Inject Spatial Media Metadata' in the render panel (available for render formats supporting it; not all formats do).

QUESTION: What is the Bake In Output Camera?

Answer: Whenever you move the horizon, MistikaVR does it by changing the yaw, pitch and roll parameters of the “Output Camera”. Bake in Output Camera clears these parameters by adding them to the control parameters of the Input cameras instead. The rendered images will be the same, but it can be useful in order to establish your new preferred horizon settings "by default". For example, if you want to experiment and easily come back later to those settings, or if you plan to animate them or do scripting with the metadata files.

QUESTION: How to add a logo or CG  overlay clip in the scene?

Answer: Please read this article

QUESTION: How to stitch a cube rig?

Answer:  The cube configuration requires a bit of puzzle solving, as the files do not come in any specific order. If you don't have a preset for your particular cube rig, start by applying the Omni or Freedom360 preset, and try to join the three views of the rig at one corner. The other three views tend to be simpler. Depending on the camera configuration, either everything fits into place, or one camera will remain flopped (180 degrees rotated). In such a case, next time use the OTHER one of the above mentioned presets (Omni or Freedom360), or simply add 180 degrees to the last camera's roll value. Once you have everything adjusted, save a preset for your rig.

Here is a good tutorial using a cube rig: https//

QUESTION: How to export the audio track with the Insta Pro 2 cameras?

Answer:  The Insta Pro 2 saves the audio directly in a file called origin_6_lrv.mp4. If you need to export the audio track with the high-resolution files, proceed as follows:
1. Import the high-resolution files. 
2. Do the stitching jobs. 
3. Once the stitching is ready, import the origin_6_lrv.mp4 file as camera number 7. 
4. Select in the clip stack the origin_6_lrv.mp4 file. 
5. Go to the 'Input camera' control tab and click on 'enable' to turn off the camera. The audio speaker can now be selected in the clip stack, so you are ready to export your sequence with audio. Ensure the speaker icon in the clip stack is ON for the origin_6_lrv.mp4 file.

QUESTION: How to apply Flow State Stabilization to Insta Pro2 and Titan cameras?

Answer: To make this feature work properly, it is compulsory that the project is set to exactly the same frame rate as the media. Otherwise, it will not work. Secondly, it is recommended to respect the folder structure provided by the Insta360 Pro 2 and Titan cameras. Then, in Mistika, after the media is imported and the stitching is finished, the stabilization is imported by clicking on Stabilize > Import Stabilize Metadata.

Recent questions from advanced users:

QUESTION: How to do a "vertical flip" on a camera

Answer: There is a parameter for a horizontal flip in the "Source Camera" tab. The need for a vertical flip is very rare on VR rigs, so it does not have a specific button. However, it is still possible: to do a vertical flip, rotate the camera 180 degrees by adding 180 degrees to its roll value, and then use the horizontal flip.
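
This identity is easy to verify numerically. A minimal NumPy sketch, using an array as a stand-in for a camera image:

```python
import numpy as np

img = np.arange(12).reshape(3, 4)   # stand-in for a camera image
rolled = np.rot90(img, 2)           # add 180 degrees to the roll value
result = np.fliplr(rolled)          # then apply the horizontal flip

# The combination equals a vertical flip of the original image.
assert np.array_equal(result, np.flipud(img))
```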

QUESTION: What to do if there is camera "drifting" after stabilization?

Answer: After stabilization, apply a few keyframes to keep it centered. Drifting is usually smooth and consistent, so it is easy to counteract by adding a few keyframes to the output camera orientation.
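
Because the drift is roughly linear, a handful of evenly spaced counter-keyframes is enough. A sketch with illustrative numbers (the drift rate, clip length and keyframe spacing are hypothetical, not values MistikaVR reports):

```python
drift_per_frame = 0.02   # measured yaw drift in degrees per frame (hypothetical)
clip_length = 500        # frames
step = 100               # one counter-keyframe every 100 frames

# Yaw corrections to keyframe on the output camera orientation:
# each keyframe cancels the drift accumulated up to that frame.
keyframes = {frame: -drift_per_frame * frame
             for frame in range(0, clip_length + 1, step)}
print(keyframes)
```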

QUESTION: After stabilization the front camera looks stabilized, but the others are still shaky

Answer: In the "prioritize front" mode, MistikaVR concentrates on stabilizing the front of the view, at the cost of leaving the rest of the shot less steady. The rationale is that, as in a rover shot for example, the viewer will typically look forward, so priority should be there.

The impression that only the front-looking camera is stabilized is likely produced by "rolling shutter" effects in some camera rigs: the sensors, even if pixel-level synchronized, scan the scene looking in different directions, so the same point of the scene will be seen at a slightly different time by different cameras. MistikaVR will try to warp together the views distorted by the rolling shutter effect. In a shaky shot this is not apparent; however, once you stabilize the shot while concentrating on one part of the scene, the rolling shutter distortions will make the other cameras "dance around".

In a situation like this it can be recommended to switch off "prioritize front", as the errors will be distributed more evenly, with a similar amount of shake left in the front as well as in the back.


QUESTION:  Why are there no "render only" licenses for MistikaVR?

Answer:  In our current license model, the main difference between using the evaluation version and a subscription is the capability to render. So rendering is the main capability that you really pay for, not the GUI, which means that it does not make sense to have a "render only" license. However, if you need to feed a big render farm with many render nodes, you can ask our sales representatives for a volume discount.

A typical case is this one: let's suppose that you have a big render farm used for many products, and you only need to use some of the nodes with MistikaVR, but you don't know which ones will be available each time. In this case, what you can do is activate all your MistikaVR activation codes on the same system (same license server), then tell all the render nodes to use that license server, and set the policies of your render manager (Smedge, Deadline...) to retry periodically until the renders are complete. That way, even if all licenses are busy for a while, when one license becomes available the next render node to try a MistikaVR job will succeed in getting it, so all the render jobs will be done at some point.
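
The retry policy amounts to a simple loop on each render node. A sketch of the idea (the run_job interface, attempt count and wait time are illustrative; in practice Smedge or Deadline implement this for you):

```python
import time

def render_with_retries(run_job, max_attempts=100, wait_seconds=60):
    """Keep retrying a render job until a license becomes available.

    run_job() is a hypothetical callable that returns True on success
    and False when no MistikaVR license could be acquired.
    """
    for attempt in range(max_attempts):
        if run_job():
            return True                # license acquired, render done
        time.sleep(wait_seconds)       # licenses busy; try again later
    return False                       # gave up after max_attempts
```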

QUESTION:  What is the difference between "Camera Default Offset-X / Offset-Y" and "Camera Input Offset-X / Offset-Y" ?

Answer:  The default offsets, focal length and lens distortion values are used only for the cameras where the corresponding values are left unmodified. That means that once you set these values manually, or by using the "improve offsets" tool, the Default Offset-X/Y values are ignored from that point.

Normally, the only common situation where the default offset values are used is when you import from PTGui and you did not enable the "individual cameras offsets" control there. That is normally a good idea only if you are not really using a rig with multiple cameras, but rotating a single camera, panorama-still style.

QUESTION:  How to hide (or show) the color picker tool?
Answer:  This is a standard color picker tool present in all Mistika applications to get the exact color values of a pixel. It does not have an icon because it is rarely used in MistikaVR, but it may be activated inadvertently. The hot key is Ctrl + Alt + Right Mouse button; use the hotkey with the pointer outside the image area to hide it.

QUESTION:  How to use <Vertical Offset> for adjusting the vertical balance

Answer: When you shoot in a common room, the floor and the ceiling are at approximately the same distance, and the scene is “vertically balanced”. However, in most scenes, the floor is much nearer than the ceiling (or, directly, the sky). 

With such a “vertically unbalanced” scene, most, if not all, calibration methods (APG, PTGui, cameras' built-in auto-calibration, Mistika's “Improve” functions) will try to converge the cameras at both the ceiling and the floor at the same time. To achieve this, the calibration will result in all cameras pointing at the horizon with a slight pitch downwards, with the horizon ending up 1 or 2 degrees above where it should be (depending on the image content).

You can compensate for this typical calibration imprecision by using the “Options->Vertical Offset” parameter to place the horizon where it should be: at the center grid line of the 360x180 degrees LatLong stitched result.

Recent questions about Stereo3D:

QUESTION: My mono edge points don't work in stereo, and stereo edge points don't work in mono.
Answer: Edge points, when created, can be either stereo or mono, but the stereo render does not see the mono points and the mono render does not see the stereo points.

QUESTION:  What parts of the image can have a good stereo effect?
Answer:  Stereo can only work when the ring of lenses is flat; pitching the camera would make stereo impossible. The ceiling and nadir are 2D, and the central part is where the Stereo3D is at its best. Also, stereo pixels need to be visible in 2 cameras, so, unlike mono images, the sweet spot for stereo shots is typically near the stitching lines. Another key aspect of obtaining a good stereo stitch is to be sure that the horizon is properly aligned.

QUESTION:  How to reduce parallax crossover or excessive parallax due to objects in close proximity to the camera
Answer:  Parallax crossover is visible in black & white anaglyph as red / cyan lines crossing each other. When everything else fails, a way to try to save the shot is to increase the feather parameter. This has the effect of reducing all parallaxes; you can use it to try to bring the parallax down to an acceptable level.

QUESTION:  How to use the AM tool to align vertical parallax (VR180 Stereo3D)
Answer:  The Alignment Mode tool (the AM icon, now represented with cyan/magenta arrows) permits fixing vertical parallax differences, and it is mainly useful for VR180 video. It is explained in this article.

QUESTION:  Where to find tutorials about Stereo3D adjustments

Answer:  Here are some tutorials:

Balancing the Convergence

Stereo Edge Points

Stereo Edge point example