Note: This document only covers questions that do not have a dedicated section. For example, questions about licenses are not included here; they are covered in the license articles
Recent questions from new users:
QUESTION: The rendered files from MistikaVR do not work when I load them in another application (Premiere...)
Answer: When selecting the render codec in MistikaVR, use the default settings, as they are set for maximum compatibility and have been tested in the most common applications. If it works that way, you can experiment with more complex settings later
QUESTION: How to select the project resolution? / How to create a new render resolution?
Answer: It is explained in this article
QUESTION: How to reduce the text size in parameter boxes? It does not fit the field size, and the values appear cropped
Answer: MistikaVR uses the system font. In Windows 10 it is configured in the Display Settings. Scaling values over 150% are too big for the MistikaVR interface. Also check the next point
QUESTION: How to work with 4K+ GUI monitors
Answer: 4K GUI monitors are not yet fully supported. However, there is an experimental technique that may work (no warranty is given)
The main problem with these monitors is that the parameter fields look too small. To solve this you can define an environment variable with this value:
This is planned to be automated in future versions, once the GUI is better tuned for 4K monitors, but meanwhile you can try that.
QUESTION: How to sync the clips by using their audio tracks?
Answer: The Align by audio tool has these parameters to adjust (in case it does not work well at the first attempt):
- Search length is the length of the sound sample to compare, centered at the current frame position.
- Maximum offset is how far out of sync the cameras may be; limiting it avoids false matches at unlikely large offsets. If you turn on the cameras one by one, estimate how many seconds they may be out of sync and enter that number as the maximum. Set the current frame in a zone with some identifiable noise (a clap, people talking), and run the match.
- Sample size - We recommend leaving it at the default. The audio needs to be split into small chunks ("windows") that can be matched, and this is the size of those chunks. Smaller windows are more precise on transients, but very small windows may ignore lower frequencies. Keep it in the 2 - 4 range; these values work best. You can probably ignore this field: tweaking it usually does not change the result or makes it worse, and only helps in very rare cases.
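The general idea behind this kind of audio alignment can be sketched in a few lines: slide one track against the other within the maximum offset and keep the shift with the strongest correlation. This is only an illustration of the technique, not MistikaVR's actual implementation:

```python
# Illustrative sketch of audio-offset matching (not MistikaVR's actual code):
# slide `other` against `ref` within a maximum offset and pick the shift
# with the highest correlation.

def best_offset(ref, other, max_offset):
    """Return the lag (in samples) by which `other` trails `ref`."""
    def corr(a, b):
        return sum(x * y for x, y in zip(a, b))  # zip truncates to shortest
    scores = {}
    for lag in range(-max_offset, max_offset + 1):
        if lag >= 0:
            scores[lag] = corr(ref, other[lag:])
        else:
            scores[lag] = corr(ref[-lag:], other)
    return max(scores, key=scores.get)

# A clap appears 3 samples later in cam_b than in cam_a:
cam_a = [0, 0, 5, 9, 5, 0, 0, 0, 0, 0]
cam_b = [0, 0, 0, 0, 0, 5, 9, 5, 0, 0]
print(best_offset(cam_a, cam_b, 4))  # 3
```

This also shows why the Maximum offset parameter matters: a smaller search range means fewer candidate lags, so fewer chances of a false match.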
QUESTION: What does the "Can not open video codec" error mean when rendering?
Answer: It means that Mistika sent the rendered images to a third-party encoder and it returned an error. The exact cause is not reported, but normally it is one of these:
- An impossible setting, like a bitrate that is too high for the selected codec (the typical absolute maximum is 135000, except for lossless codecs), or a resolution that is too high for the selected codec (4K for h264, 8K for h265, also depending on the GPU generation). In the case of h264/h265 (HEVC), the Nvidia supported formats for each GPU model are documented here:
- In the case of NVidia h264/h265 (HEVC) hardware codecs, please note that the hardware encoder is a limited resource that cannot be multi-tasked (most GPUs only have one encoder). So this error can also happen if another application that uses the NVidia encoder is open at the same time
- Another possibility is an invalid destination path (it does not exist, there are no write permissions for it, the disk is already full, or other similar reasons)
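The destination-path causes in the last point can be checked up front with a small standalone script. This is just a generic sanity check you can run outside MistikaVR, not a tool shipped with it:

```python
# Quick standalone sanity check for a render destination (not part of
# MistikaVR): verifies the folder exists, is writable, and has free space.
import os
import shutil

def check_destination(path, needed_bytes=0):
    if not os.path.isdir(path):
        return "destination folder does not exist"
    if not os.access(path, os.W_OK):
        return "no write permission for destination"
    if shutil.disk_usage(path).free < needed_bytes:
        return "not enough free disk space"
    return "ok"

print(check_destination(os.getcwd()))  # 'ok' on a writable folder
```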
QUESTION: Why is the render option disabled?
Answer: The evaluation licenses do not permit rendering. But if you have purchased (and installed) an activation code and rendering is still disabled, it means that you are still using the old evaluation activation code. Deactivate it and activate the new activation code received by email after purchasing the subscription. More details are in the license articles
Note: The other difference between evaluations and purchased subscriptions is that only the latter permit using the new features of "Beta" versions still in development (the open beta program)
QUESTION: How to animate parameters (edge points and others)?
Answer: Since version 8.7.7, most parameters can be animated. To control the animation, open the contextual menu for the parameter by right-clicking on it. You will see these options:
Default Value: resets the parameter. If it was animated, animation will be disabled.
Add Key Frame: a keyframe will be added at the current frame position. Animation is enabled for the parameter if it was not enabled already. From then on, any change to this parameter will automatically insert a new keyframe at the current time (if there was not a keyframe there already).
Remove Key Frame: the keyframe at the current time will be removed. If it was the last keyframe left, the parameter will become non-animated, going back to the default value.
Remove Animation: animation will be disabled and all keyframes removed. In this case the current value of the parameter will be kept as the non-animated value.
The numerical value of the parameters is color coded:
Gray number means the default, unmodified value.
White number means the current value was set by the user, but it is not animated. Please note that if it is manually set to the default value it will still be white, because it was set by the user. To completely remove the user actions, use the "Default Value" command.
Green number is a keyframe value set by the user
Light Blue number is an interpolated value between keyframes.
In the time bar, keyframes are shown as green marks for the selected parameters, while the animated segments are drawn in light blue, matching the same color hints.
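The relationship between keyframe values (green) and interpolated values (light blue) can be sketched with a tiny interpolation function. This assumes simple linear interpolation; MistikaVR's actual animation curves may differ:

```python
# Sketch of keyframe interpolation (assumes linear interpolation between
# keyframes; MistikaVR's real curves may differ).
# Green numbers = keyframe values; light blue = interpolated in between.

def value_at(keyframes, frame):
    """keyframes: sorted list of (frame, value) pairs set by the user."""
    frames = [f for f, _ in keyframes]
    if frame <= frames[0]:
        return keyframes[0][1]          # before first keyframe: hold value
    if frame >= frames[-1]:
        return keyframes[-1][1]         # after last keyframe: hold value
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)   # light blue interpolated value

print(value_at([(0, 0.0), (10, 1.0)], 5))  # 0.5
```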
QUESTION: When I render a movie in MistikaVR and watch it in another application it is not shown as 360 video
Answer: You may need to inject the 360 spatial metadata into the movie file. Just select 'Inject Spatial Media Metadata' in the render panel (available for render formats that support it; not all formats do).
QUESTION: What is the Bake In Output Camera?
Answer: Whenever you move the horizon, MistikaVR does it by changing the yaw, pitch and roll parameters of the "Output Camera". Bake In Output Camera clears these parameters by adding them to the control parameters of the Input cameras instead. The rendered images will be the same, but it can be useful to establish your new preferred horizon settings "by default". For example, if you want to experiment and easily come back to those settings later, or if you plan to animate them or do scripting with the metadata files.
QUESTION: How to add a logo or CG overlay clip in the scene?
Answer: Please read this article
QUESTION: How to stitch a cube rig?
Answer: The cube configuration requires a bit of puzzle solving, as the files do not come in any specific order. If you don't have a preset for your particular cube rig, start by applying the Omni or Freedom360 preset, and try to join the three views that meet in one corner. The other three views tend to be simpler. Depending on the camera configuration, either everything fits into place, or one camera will remain flipped (rotated 180 degrees). In that case, next time use the OTHER one of the presets mentioned above (Omni or Freedom360), or simply add 180 degrees to the last camera's roll value. Once you have everything adjusted, save a preset for your rig.
Here is a good tutorial using a cube rig: https://vimeo.com/216736585
Recent questions from advanced users:
QUESTION: How to flip a camera horizontally or vertically?
Answer: There is a parameter for a horizontal flip in the "Source Camera" tab. The need for a vertical flip is very rare on VR rigs, so it does not have a specific button. However, it is still possible: to do a vertical flip, rotate the camera 180 degrees by adding 180 degrees to its roll value, and then use the horizontal flip.
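The equivalence behind that trick (a 180-degree roll followed by a horizontal flip gives a vertical flip) can be verified on a small grid of "pixels":

```python
# Demonstrates that a 180-degree rotation followed by a horizontal flip
# equals a vertical flip, using a small grid of pixel values.
grid = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]

rot180 = [row[::-1] for row in grid[::-1]]   # roll the image 180 degrees
hflip = [row[::-1] for row in rot180]        # then flip horizontally
vflip = grid[::-1]                           # a plain vertical flip

print(hflip == vflip)  # True
```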
QUESTION: Insta360 Pro calibration data: what has happened to it in Mistika 8.8.7 and later?
Answer: The calibration data normally generated with Insta360 Pro Studio (the .prj file that accompanies the media files as a sidecar) can now be generated automatically inside MistikaVR (similar to what was already possible for Kandao). This simplifies the workflow: there is no longer any need to use an extra application and then use Import Stitch to read the .prj file. Now, when using the tools for improving transformations, if "Use Insta360 Pro Calibration" is selected, MistikaVR will automatically use an integrated Insta360 plugin to create identical calibration data.
However, MistikaVR's own tools are still available as usual, and some users may still want to improve the calibration on their own, so here is the old document with the tricks and tips:
Note: Some other cameras still come with calibration data in sidecar files: Nokia Ozo (a .txt file) and Facebook Surround (a .json file). They can be imported with Stitch > Import Stitch (or just drag & drop the file)
QUESTION: Why are there no "render only" licenses for MistikaVR?
Answer: In our current license model, the main difference between using the evaluation version and a subscription is the capability to render. Rendering is the main capability that you pay for, not the GUI, which means that a "render only" license does not make sense. However, if you need to feed a big render farm with many render nodes, you can ask our sales representatives for a volume discount.
A typical case is this one: suppose you have a big render farm used for many products, and you only need to use some of the nodes with MistikaVR, but you don't know which ones will be available each time. In this case you can activate all your MistikaVR activation codes on the same system (the same license server), tell all the render nodes to use that license server, and set the policies of your render manager (Smedge, Deadline...) to retry periodically until the renders are complete. That way, even if all licenses are busy for a while, when a license becomes available the next render node to try a MistikaVR job will get it, so all the render jobs will be completed at some point.
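The retry policy described above can be sketched schematically. The `try_render` callable here is a hypothetical stand-in for "submit a MistikaVR job and see whether a license was obtained"; in practice this logic lives in the render manager's (Smedge, Deadline...) own retry settings, not in custom code:

```python
# Schematic retry loop for a render node when all licenses may be busy.
# `try_render` is a hypothetical stand-in: a real farm configures this
# retry policy in the render manager itself.
import time

def run_with_retries(try_render, retry_delay_s=60, max_attempts=10):
    """Keep retrying a render job until a license becomes available."""
    for attempt in range(1, max_attempts + 1):
        if try_render():           # True = got a license and rendered
            return attempt         # report which attempt succeeded
        time.sleep(retry_delay_s)  # all licenses busy: wait and retry
    raise RuntimeError("no license became available")

# Simulated run: licenses are busy for the first two attempts.
results = iter([False, False, True])
print(run_with_retries(lambda: next(results), retry_delay_s=0))  # 3
```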
QUESTION: What is the difference between "Camera Default Offset-X / Offset-Y" and "Camera Input Offset-X / Offset-Y" ?
QUESTION: How to increase the RAM cache for more interactivity and faster playback?
Answer: MistikaVR caches the most recently accessed image files in RAM. The cached images are represented in the timeline bar with a thin highlight bar. If you scrub through cached images or play them back, the cached versions are used, providing a faster response. The number of cached images depends on the amount of RAM reserved for this purpose, which is calculated automatically to a safe setting based on your computer configuration. However, if you want to tweak it manually, you can do so in the localPreferences.xml file:
(Default location: C:\Users\<user>\SGO\AppData\VR\localPreferences.xml )
Two parameters are involved (they are risky to modify and can destabilise the system, so please make a backup of the file first!):
- cpuFrames (maximum number of frames to cache. Leave it at 0 for automatic adjustment, or set another value to force it)
- cpuMemKb (RAM amount, in kilobytes)
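As an orientation only, the relevant entries might look something like this. Only the two parameter names come from the documentation; the surrounding XML structure shown here is an assumption, so check your own localPreferences.xml before editing it (and keep the backup mentioned above):

```xml
<!-- Hypothetical sketch of the cache entries in localPreferences.xml.
     Only cpuFrames and cpuMemKb are documented; the surrounding structure
     is illustrative, so verify it against your own file first. -->
<localPreferences>
  <cpuFrames>0</cpuFrames>        <!-- 0 = automatic; another value forces a frame count -->
  <cpuMemKb>16777216</cpuMemKb>   <!-- RAM for the cache, in kilobytes (here 16 GB) -->
</localPreferences>
```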
Recent questions about Stereo3D:
Answer: Here they are: