Note: This document only covers questions that do not have a dedicated document in this section. For example, questions about licenses are not included here; they are covered in the license articles.


Recent questions from new users:



QUESTION:  How to reduce the text size in parameter boxes? It does not fit the field size, and the values appear cropped.

Answer:  MistikaVR uses the system font. In Windows 10 it is configured in the Display Settings; scaling values over 150% are too big for the MistikaVR interface.



QUESTION:  How to sync the clips by using their audio tracks ? 

Answer: The Align by audio tool has these parameters to adjust, in case it does not work well on the first attempt (a conceptual sketch of how they interact follows the list):


- Search length is the length of the sound sample to compare, centered at the current frame position. 


- Maximum offset is how far the cameras may be out of sync; it limits the search to avoid false matches at unlikely large offsets. If you turned on the cameras one by one, estimate how many seconds they may be out of sync and enter that number as the maximum. Set the current frame in a zone with some identifiable noise (a clap, people talking) and run the match.


- Sample size - We recommend leaving it at the default. The audio needs to be split into small chunks ("windows") that can be matched, and this is the size of those chunks. Smaller means more precision on transients, but lower frequencies may get ignored with very small windows. Keep it in the 2 - 4 range; these values work best. You can usually ignore this field: tweaking it rarely changes the result, and may even make it worse, except in very rare cases.
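
For the curious, here is a minimal, hypothetical Python sketch of how the first two controls interact (this is not MistikaVR's actual implementation; the function name, the 10 ms search step and the normalized-correlation score are assumptions made for the illustration):

import numpy as np

def find_audio_offset(ref, other, sample_rate, center_s,
                      search_length_s=4.0, max_offset_s=10.0):
    """Return the offset (in seconds) that best aligns `other` to `ref`.
    Assumes both tracks are mono, share the sample rate, and fully contain
    the search window."""
    center = int(center_s * sample_rate)
    half = int(search_length_s * sample_rate / 2)
    max_off = int(max_offset_s * sample_rate)

    # "Search length": the sound sample of the reference track to compare,
    # centered at the current frame position.
    ref_win = ref[center - half:center + half]

    best_off, best_score = 0, -np.inf
    # "Maximum offset": only offsets within +/- max_offset_s are considered,
    # which avoids false matches at unlikely large offsets.
    step = max(1, sample_rate // 100)            # coarse 10 ms search steps
    for off in range(-max_off, max_off + 1, step):
        start = center - half + off
        seg = other[start:start + len(ref_win)]
        if start < 0 or len(seg) != len(ref_win):
            continue
        # Normalized correlation: higher means a better match.
        score = np.dot(ref_win, seg) / (np.linalg.norm(ref_win) * np.linalg.norm(seg) + 1e-9)
        if score > best_score:
            best_off, best_score = off, score
    return best_off / sample_rate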




QUESTION:  What does the "Can not open video codec" error mean when rendering?

Answer: It means that Mistika sent the rendered images to a third-party encoder and the encoder returned an error. The exact cause is not reported, but normally this means an impossible setting, such as a bitrate that is too high for the selected codec (the typical absolute maximum is 135000, except for lossless codecs), or a resolution that is too high for the selected codec (4K for h264; 8K or 4K for h265, also depending on the GPU generation). Please note that these values are continuously evolving, so they may be different when you read this document.


In the case of NVidia hardware codecs, please note that the hardware encoder is a limited resource that cannot be multi-tasked (most GPUs only have one encoder), so the error can also happen if another application that uses the NVidia encoder is open at the same time.
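
As a rough illustration only (the function is hypothetical, the way the limits are applied is an assumption, and the numbers above will keep changing), a settings check along these lines catches the two usual culprits:

# A tiny illustration of the limits mentioned above. The exact numbers evolve
# with codec and GPU generations, so treat them as examples, not as a reference.
H264_MAX_DIMENSION = 4096     # "4k for h264"
H265_MAX_DIMENSION = 8192     # "8k or 4k for h265", depending on the GPU generation
MAX_BITRATE = 135000          # typical absolute maximum, except for lossless codecs

def likely_codec_problems(codec, width, height, bitrate, lossless=False):
    """Return a list of settings that typically trigger 'Can not open video codec'."""
    problems = []
    if not lossless and bitrate > MAX_BITRATE:
        problems.append(f"bitrate {bitrate} exceeds the typical maximum of {MAX_BITRATE}")
    limit = H264_MAX_DIMENSION if codec == "h264" else H265_MAX_DIMENSION
    if codec in ("h264", "h265") and max(width, height) > limit:
        problems.append(f"{width}x{height} exceeds the {codec} limit of {limit}")
    return problems

# Example: an 8K render is rejected for h264, while the same size may pass for h265.
print(likely_codec_problems("h264", 7680, 3840, 120000))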




QUESTION:  How to animate parameters (edge points and others)?

Answer:  Since version 8.7.7 most parameters can be animated. To control the animation, open the contextual menu for the parameter by right-clicking on it. It offers these new options:


  • Default Value: Resets the parameter completely. If it was animated, animation will be disabled.

  • Add Key Frame: A keyframe will be added at the current frame position. Animation is enabled for the parameter if it was not enabled already. From then on, any change to this parameter will automatically insert a new keyframe at the current time if there was not a keyframe there already.

  • Remove Key Frame: The keyframe at the current time will be removed. If it was the last keyframe left, the curve will become non-animated.

  • Remove Animation: Animation will be disabled and all keyframes removed, but the current value of the parameter will be kept as the non-animated value.


The numerical values of the parameters are color coded:

  • Gray number means the default, unmodified value.

  • White number means the value was set by the user. It may be set to the default value and will still show white, because it was set by the user. To completely remove the set value, use the “Default Value” command.

  • Green number is a keyframe value set by the user.

  • Light Blue number is an interpolated value between keyframes.


Keyframes are shown for the selected parameters as green marks, while animated segments are drawn in light blue, matching the color hints above.
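
To make the color codes concrete, here is a small hypothetical sketch of how an animated parameter could be evaluated (the evaluate function and the hold-value behaviour outside the keyframe range are assumptions for illustration, not MistikaVR's actual code):

# Hypothetical sketch of evaluating an animated parameter and its color hint.
def evaluate(keyframes, frame):
    """keyframes: sorted list of (frame, value) pairs. Returns (value, color)."""
    if not keyframes:
        return None, "gray"                      # default, unmodified value
    for kf_frame, kf_value in keyframes:
        if kf_frame == frame:
            return kf_value, "green"             # keyframe set by the user
    before = [kf for kf in keyframes if kf[0] < frame]
    after = [kf for kf in keyframes if kf[0] > frame]
    if not before:                               # before the first keyframe: hold its value
        return after[0][1], "light blue"
    if not after:                                # after the last keyframe: hold its value
        return before[-1][1], "light blue"
    (f0, v0), (f1, v1) = before[-1], after[0]
    t = (frame - f0) / (f1 - f0)
    return v0 + t * (v1 - v0), "light blue"      # interpolated between keyframes

For instance, evaluate([(0, 10.0), (100, 20.0)], 50) returns (15.0, "light blue"): an interpolated value halfway between the two green keyframes.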



QUESTION: What is the Bake In Output Camera ?

Answer: Whenever you move the horizon, MistikaVR does it by changing the yaw, pitch and roll parameters of the “Output Camera”. Bake In Output Camera clears these parameters by adding them to the corresponding parameters of the input cameras instead. The rendered images will be the same, but it is useful in order to establish your preferred horizon settings "by default". For example, if you want to experiment and easily come back to it later, or if you plan to animate it or do scripting with the metadata files.
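
Conceptually, baking just folds one rotation into others so that the composed result does not change. Below is a minimal, hypothetical sketch of that idea in Python (the function bake_output_camera, the use of SciPy and the "ZXY" yaw/pitch/roll convention are all assumptions for the illustration; MistikaVR's own conventions may differ):

# Minimal sketch of the baking idea (illustrative only, not MistikaVR code).
from scipy.spatial.transform import Rotation as R

def bake_output_camera(output_ypr, input_cameras_ypr):
    """Fold the Output Camera yaw/pitch/roll into every input camera and
    clear the Output Camera. The composed rotation per camera is unchanged,
    so the rendered images stay the same."""
    out_rot = R.from_euler("ZXY", output_ypr, degrees=True)   # assumed axis order
    baked = [(out_rot * R.from_euler("ZXY", ypr, degrees=True)).as_euler("ZXY", degrees=True)
             for ypr in input_cameras_ypr]
    return (0.0, 0.0, 0.0), baked        # Output Camera parameters are now cleared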



Recent questions from advanced users:



QUESTION:  Insta360 Pro calibration data: what has happened to it in Mistika 8.8.7?

Answer:  The calibration data normally generated with Insta360 Pro Studio (the .prj file that goes as a sidecar with the media files) can now be generated automatically inside MistikaVR (similar to what was already possible for Kandao). This simplifies the workflow by removing the need to use an extra application and then use Import Stitch to load the .prj file, which is no longer necessary. Now, when improving transformations, if "Use Insta360 Pro Calibration" is selected, MistikaVR will automatically use an integrated Insta360 plugin to create identical calibration data.


However, MistikaVR's own tools are still available as usual, and some users may still want to improve the calibration on their own, so here is the old document with the tricks and tips:

https://docs.google.com/document/d/1s-Rf4FlXANfyl0ng0r6dUVRTUfUmdnOu2RnUXRFyccE/edit


Note: Some other cameras still come with calibration data in sidecar files: Nokia Ozo (a .txt file) and Facebook Surround (a .json file). They can be imported with Stitch > Import Stitch (or by just dragging & dropping the file).




QUESTION:  What is the difference between Camera Default Offset-X & Offset-Y and Camera Input Offset-X & Offset-Y ?

Answer:  The default offsets, focal length and lens distortion values are used only for the cameras where the corresponding per-camera values are left unmodified. That means that once you set these values manually, or by using the "improve offsets" tool, the Default Offset-X/Y are ignored from that point on. The only common situation where the default offset values are used is when you import from PTGui and did not enable the "individual cameras offsets" control there. That is normally a good idea only if you are not really using a rig with multiple cameras, but rotating a single camera, panorama-still style.
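
As a minimal sketch of that fallback logic (the dictionary fields and the use of None to mean "unmodified" are hypothetical, chosen just for the illustration):

# Per-camera Offset-X/Y win once modified; otherwise the Default Offset-X/Y apply.
def effective_offsets(camera, defaults):
    x = camera["offset_x"] if camera["offset_x"] is not None else defaults["offset_x"]
    y = camera["offset_y"] if camera["offset_y"] is not None else defaults["offset_y"]
    return x, y

# Camera 1 was adjusted with "improve offsets", camera 2 still uses the defaults.
defaults = {"offset_x": 0.0, "offset_y": 0.0}
print(effective_offsets({"offset_x": 1.8, "offset_y": -0.4}, defaults))   # (1.8, -0.4)
print(effective_offsets({"offset_x": None, "offset_y": None}, defaults))  # (0.0, 0.0)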




QUESTION:  How to increase the RAM cache for more interactivity and realtime playback?

Answer:  MistikaVR caches the most recently accessed files in RAM. The cached images are represented in the timeline bar with a thin highlight bar. If you scrub through cached images or play them back, the cached versions are used, thus providing a faster response. The number of cached images depends on the amount of RAM reserved for this purpose, which is calculated automatically to a safe setting based on your computer configuration. However, if you want to tweak it manually you can do it in the localPreferences.xml file:

 (Default location:  C:\Users\<user>\SGO/AppData/VR/localPreferences.xml )


Two parameters are involved (they are risky to modify and they can destabilise the system, so please make a backup of the file first!). A rough helper for estimating sensible values follows the list:

-  cpuFrames (the maximum number of frames to cache. Leave it at 0 for automatic adjustment, or set another value to force it)

-  cpuMemKb (the amount of RAM to use, in kilobytes)
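
If you do tweak them by hand, a rough frame budget helps to pick sensible numbers. The calculation below is purely illustrative (the 16-bit RGB frame size and the helper itself are assumptions, not MistikaVR internals):

# Back-of-the-envelope helper for choosing cpuFrames / cpuMemKb by hand.
def frames_that_fit(cpu_mem_kb, width, height, bytes_per_pixel=6):
    """How many uncompressed frames fit in cpuMemKb kilobytes of cache.
    bytes_per_pixel=6 assumes a 16-bit RGB working format (an assumption)."""
    frame_kb = width * height * bytes_per_pixel / 1024
    return int(cpu_mem_kb // frame_kb)

# Example: a 16 GB cache with 7680x3840 frames holds roughly 97 frames.
print(frames_that_fit(16 * 1024 * 1024, 7680, 3840))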