Additional considerations about RGB / YUV conversions and level ranges

This is an important matter that every user should understand. There is no magic button to avoid these issues, and almost every user will have to deal with a problematic project at some point. Only a good understanding of what is going on will get you out of those situations.

Problem symptoms: "My image levels are clamped in the video out...", "There is a loss of contrast in the video out...", or "I have delivered illegal levels..."


To start with, RGB <-> YUV conversions happen more often than people think. For example, even if all your footage is RGB, you only render to RGB, and the projector or display is RGB, a conversion will still be involved if your SDI transmission signal is YUV. And there are many other cases like this. So there are several considerations to take into account:

Note: The following explanations use 10-bit notation. A 10-bit signal can represent values from 0 to 1023, but not all of them are used for image data, and there can be different interpretations about what they mean.

- In the case of YUV video signals, only one standard exists. The "legal range" goes from 16 to 940. In some cases the remaining values in the 0..1023 range can still be used for image data (except values 0..3, which are reserved for video signalling), but with regard to the video standards those extra ranges are always considered negative values (superblacks) and values over 100% (superwhites). In general, those extra ranges will be clamped on television broadcasts (as broadcasters typically reserve these ranges for their own use), but they are still often used in post-production workflows depending on each particular need.

- In the case of RGB video signals, two "standards" have come to exist. Originally, RGB video signals use the complete 0..1023 range, which is known as "Data levels". But once VTRs and broadcasters started to support RGB signals, they also defined an "RGB Video Levels" pseudo-standard, with a legal range and extra ranges similar to YUV.

But please note that the RGB signal itself does not change at all; only the interpretation of what is legal and what is not changes. To make it more confusing, in file-based workflows a particular media file (DPX, .mov, ...) does not necessarily know which standard was used to produce it. Only the person (typically "the client") who made it knows whether the extra ranges were meant to be displayed, or were left there just to provide some extra headroom for post-production purposes. And to make it worse, sometimes not even that person knew what he was doing...
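As an illustration, this minimal Python sketch (the function and constant names are our own, not part of any standard or of Mistika) shows how the very same 10-bit code values read completely differently under each interpretation:

    # Interpreting the same 10-bit code values under both conventions.
    # In "Data levels" the full 0..1023 range is picture content;
    # in "Video levels" only 16..940 is legal picture, the rest is headroom.

    LEGAL_MIN, LEGAL_MAX = 16, 940

    def describe(code_value):
        data = f"{code_value / 1023 * 100:.1f}% picture"   # Data levels view
        if code_value < LEGAL_MIN:
            video = "superblack (below legal range)"
        elif code_value > LEGAL_MAX:
            video = "superwhite (above legal range)"
        else:
            video = f"{(code_value - LEGAL_MIN) / (LEGAL_MAX - LEGAL_MIN) * 100:.1f}% picture"
        return data, video

    for v in (0, 16, 512, 940, 1023):
        print(v, describe(v))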

For those reasons, the wrong type of conversion may produce either a perceived "loss of contrast" (when incorrectly mapping 1023->940 and 0->16 while data levels are required later) or "clamped levels" (when incorrectly mapping 16->0 and 940->1023 and later sending the result to a video format requiring legal levels). But please note that it all depends on the interpretation. The same type of conversion can be perfectly correct in some cases and totally wrong in others, depending subjectively on what was in the source images and which standard has been requested for delivery. These are human interpretations, and Mistika has no way to change the expected type of conversion automatically. What is important is to understand how Mistika works, as you may need to make the decisions when converting to YUV or RGB standards, and also from/to HDR internal processing.
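A minimal sketch of the two opposite linear mappings involved (function names are ours, chosen only for illustration):

    # The two opposite linear mappings between Data levels (0..1023)
    # and Video levels (16..940). Applying the wrong one for the
    # intended interpretation causes the symptoms described above.

    def data_to_video(v):    # 0..1023 -> 16..940 (compresses the range)
        return 16 + v * (940 - 16) / 1023

    def video_to_data(v):    # 16..940 -> 0..1023 (expands the range)
        return (v - 16) * 1023 / (940 - 16)

    # "Loss of contrast": data levels squeezed into 16..940 and then
    # displayed as if they were still data levels.
    print(data_to_video(0), data_to_video(1023))    # 16.0 940.0

    # "Clamped levels": video levels expanded to 0..1023 and then sent
    # to a destination that clips everything outside 16..940.
    print(video_to_data(16), video_to_data(940))    # 0.0 1023.0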

In all this process, an important tool is the Mistika vectorscope, which can also let you see the exact values of particular pixels (Ctrl+Right click to pick values in the image). A vectorscope representation only tells you that there are out-of-range values, not whether they are meaningful content, so you will need to use the colour picker to check individual pixels.
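Outside Mistika, the equivalent check on a rendered file can be scripted by scanning the decoded pixel values. A minimal sketch with numpy, assuming the frame has already been decoded to a 10-bit array (the dummy random frame is just a stand-in):

    import numpy as np

    # frame: decoded image as a 10-bit array (H x W x 3); a dummy one here.
    frame = np.random.randint(0, 1024, size=(2160, 3840, 3))

    LEGAL_MIN, LEGAL_MAX = 16, 940
    out_of_range = (frame < LEGAL_MIN) | (frame > LEGAL_MAX)
    print("out-of-range pixels:", out_of_range.any(axis=-1).sum())

    # Locate a few offenders so they can be inspected individually,
    # much like picking values with Ctrl+Right click in Mistika.
    ys, xs = np.where(out_of_range.any(axis=-1))
    for y, x in list(zip(ys, xs))[:5]:
        print(f"pixel ({x},{y}) =", frame[y, x])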


CONSIDERATIONS WHEN CONVERTING BETWEEN HDR, RGB, AND YUV


To start with, it is important to understand that Mistika always works in HDR space (RGB HDR, 32-bit floating point). HDR processing does not use any video standard; instead it supports all possible values from minus infinity to infinity by using 32-bit floating point representations. If the displayed black and white values are referred to as 0% and 100%, the HDR space can represent values like -9000% superblack, 160% superwhite, 10000% superwhite, and so on.
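In floating point those percentages simply become unbounded numbers. Assuming the common convention of 0% = 0.0 and 100% = 1.0 (our assumption for illustration):

    # Assumed convention: 0% -> 0.0, 100% -> 1.0 in 32-bit float HDR space.
    def percent_to_float(p):
        return p / 100.0

    for p in (-9000, 0, 100, 160, 10000):
        print(f"{p:>6}% -> {percent_to_float(p):g}")   # -90, 0, 1, 1.6, 100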

NOTE: In this text, the term HDR refers to Mistika internal processing and certain file formats; we are not talking about modern "HDR displays" at all.

HDR processing also means that all values coming from the source images are always preserved when applying effects, even if they are negative or over 100%. Inside Mistika, ranges are only range-converted or clamped when the user does it on purpose, by using specialised effects with those capabilities (ColorGrade, LUT, Legalise, RGBLevels, ACES...).

Once the internal HDR space is converted to non-HDR formats (either in the video output or in a render to non-HDR RGB or to YUV), the following will happen:

- When Mistika needs to go from internal HDR to YUV, it will always map the 0% to 100% HDR values into the YUV legal range (16..940), and if there are out-of-range HDR values, a part of them will be mapped into the superwhites and superblacks of the YUV format accordingly. (Please note that a YUV signal provides some "HDR" capabilities, but not the unlimited values that are possible in HDR space.)

If you do not want to lose extreme levels, some effects provide soft-clip capabilities or special HDR curves that can map all the out-of-range values into the limited out-of-range space of the YUV signal (a naive soft-clip is sketched after this list).

- In contrast, when Mistika needs to go from internal HDR to standard RGB, it will always map the 0% to 100% HDR range into the RGB data range (0..1023 for 10-bit RGB), and if there are out-of-range HDR values, they will be cropped by default. If what you want is a conversion to "video levels", then you can use the RGBLevels effect, which will map 0% to 100% into the 16..940 RGB levels, thus also mapping some of the HDR out-of-range values into 0..15 and 941..1023. In this way you can preserve some out-of-range values, as in the YUV case. But please note that RGBLevels only works from RGB to RGB, so it needs to be applied before converting from RGB to YUV, or after converting from YUV to RGB.

To summarise, the conversion between RGB and YUV will map the 0..1023 RGB range to/from the 16..940 YUV range (which is the most common need, by the way). If you want a different result, you will need to use the RGBLevels effect before going to YUV or after coming from YUV.
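The following minimal sketch illustrates these default mappings plus a naive soft-clip curve. The function names and the soft-clip formula are our own illustration, not the actual Mistika internals:

    # Default mappings from internal HDR (assuming 0.0 = 0%, 1.0 = 100%).

    def hdr_to_yuv10(v):
        # 0..100% -> 16..940; out-of-range HDR values land in the
        # superblack/superwhite ranges and are finally clipped to 0..1023.
        return min(max(round(16 + v * (940 - 16)), 0), 1023)

    def hdr_to_rgb10(v):
        # 0..100% -> 0..1023; out-of-range HDR values are cropped.
        return min(max(round(v * 1023), 0), 1023)

    def soft_clip(v, knee=0.9, limit=1.09):
        # Naive illustrative soft-clip of the top end only: linear up to
        # the knee, then an asymptotic rolloff towards the limit
        # (~1.09 = code value 1023 when mapped with hdr_to_yuv10).
        if v <= knee:
            return v
        return limit - (limit - knee) ** 2 / (v - 2 * knee + limit)

    for v in (-0.5, 0.0, 0.5, 1.0, 1.5, 20.0):
        print(f"{v:6} -> YUV {hdr_to_yuv10(v):4}  RGB {hdr_to_rgb10(v):4}"
              f"  softclip {soft_clip(v):.3f}")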

A particular case is the delivery format for broadcasters, which in general require all deliveries to be in legal range, with no pixels using the out-of-range values. For this purpose, the most typical tool is the Legalise effect (although there are others), both for rendering "legal" deliveries and as a display filter for the video output. In this way, you can continue working safely in HDR inside Mistika, and also render another master in HDR to preserve all the information that you may need in the future.
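Conceptually, legalising is just clamping everything to the legal range. A minimal sketch of the core idea (the real Legalise effect offers more refined options than this hard clamp):

    import numpy as np

    # Hard legalisation of a 10-bit frame: everything outside the
    # 16..940 legal range is clamped to it.
    def legalise(frame, lo=16, hi=940):
        return np.clip(frame, lo, hi)

    frame = np.array([0, 4, 16, 512, 940, 1000, 1023])
    print(legalise(frame))    # [ 16  16  16 512 940 940 940]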


CONSIDERATIONS WHEN RENDERING

The only formats supporting HDR are the Mistika .js HDR format and the EXR formats, and all of them will preserve all the out-of-range information. Any other render format will crop out-of-range values, at least to a certain extent.

The problem with HDR/EXR uncompressed formats is that they can be too big (16 bits per channel). So there are two alternatives that can be a better solution (a size comparison is sketched after this list):

- EXR DWA or ZIP/PIZ. These are compressed formats which will typically reduce the size to that of a 10-bit format or smaller, at the only cost of intensive CPU usage. ZIP/PIZ produce values identical to the original when decompressed, while DWA is faster and more efficient but may change some values in the least significant bits. However, in the case of images from real cameras those values are considered to be below the signal-to-noise ratio of the camera, so EXR DWA is considered safe. That may not be the case for synthetic images using every bit of information, or for cases where a QC workflow requires delivering numbers identical to those that were received.

- When CPU or disk bandwidth is an issue and EXR cannot provide realtime playback, a good compromise is to render to Mistika .js YUV444, which will keep a good part of the out-of-range information (normally enough for most real images), in contrast to RGB10, which will always crop all the out-of-range information.
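To put some numbers on the size argument, here is a back-of-the-envelope calculation for a UHD frame (exact sizes will vary with headers, padding and compression):

    # Rough per-frame sizes for a 3840x2160 image (ignoring headers/padding):
    w, h = 3840, 2160
    px = w * h

    exr_half = px * 3 * 16 / 8 / 2**20   # uncompressed EXR, 16-bit half float
    ten_bit  = px * 3 * 10 / 8 / 2**20   # 10-bit RGB (or .js YUV444 10-bit)

    print(f"EXR half uncompressed: {exr_half:.0f} MB/frame")   # ~47 MB
    print(f"10-bit RGB / YUV444:   {ten_bit:.0f} MB/frame")    # ~30 MB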

At this point, the user may also be tempted to use RGBLevels to convert RGB to video levels and render to RGB10, thus keeping out-of-range information similar to YUV. But this may cause very complicated workflows when loading the rendered clip later. This is because, if the video output is YUV, a different conversion will happen for the rendered file compared to how it was handled before rendering, thus producing a "loss of contrast" effect when displaying the rendered clips (a worked example follows below). This introduces the complicated need to convert back to data levels, but only for the clips that have already been rendered (so it cannot be done with a display filter). Therefore, instead of RGB10, what is recommended for intermediate renders is to always render to .js YUV444 (10-bit). This format has the same size as an RGB 10-bit file, and as it is a 444 format, it does not lose chroma information as was the case with YUV422. Rendering to the YUV444 format lets you forget about all the above considerations, as it will always work well by default.
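To see why, assume video-levels content (black at 16, white at 940) was stored in an RGB10 file, and on playback towards a YUV output the default RGB->YUV conversion squeezes the whole 0..1023 range into 16..940 a second time (hypothetical numbers, using the same linear mapping shown earlier):

    # Video-levels content stored in an RGB10 file gets squeezed a
    # second time by the default RGB->YUV conversion on playback.
    def data_to_video(v):
        return 16 + v * (940 - 16) / 1023

    print(data_to_video(16))    # ~30.5: black is no longer at 16
    print(data_to_video(940))   # ~865:  white is no longer at 940
    # Result: blacks lifted and whites lowered -> a washed-out image.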

So we recommend never doing intermediate renders to RGB10 unless there is a well-justified reason. Unfortunately, RGB10 has been commonly used for intermediate renders just because it was the only "high-end" format available 15 years ago, and it carries a lot of inertia, but with modern EXR and YUV444 formats there is absolutely no reason to continue using it for this purpose "by default".

A last consideration has to be made about renders to highly compressed codecs for final deliveries (ProRes, MPEG4, etc.). Those codecs can introduce illegal levels when interpolating values (they cannot represent abrupt transitions; for example, when interpolating pixels going from dark to bright and then back to dark, they may represent the transition with a mathematical function that also passes through out-of-range values). In general these out-of-range values could be cropped later without affecting the image quality at all (as they were not really there before). Unfortunately, many automated Quality Control systems are poorly designed (or poorly managed) and may refuse this kind of content without a real reason, just "because it is illegal!"... If you suffer a case like this, you should always check the deliveries in a vectorscope before sending them, and fix them by providing some headroom at the extremes or by correcting sharp transitions if necessary.
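The overshoot effect is easy to reproduce. A minimal sketch showing how a frequency-based approximation of a dark-bright-dark step overshoots the original range (the classic ringing that DCT-based codecs can exhibit; this is a generic illustration, not any specific codec):

    import numpy as np

    # A hard dark -> bright -> dark transition...
    signal = np.array([0.0] * 16 + [1.0] * 16 + [0.0] * 16)

    # ...approximated with a truncated frequency representation,
    # loosely mimicking what a DCT-based codec does.
    spectrum = np.fft.rfft(signal)
    spectrum[8:] = 0                      # keep only the low frequencies
    approx = np.fft.irfft(spectrum, n=len(signal))

    # The reconstruction undershoots below 0 and overshoots above 1,
    # creating "illegal" levels that were never in the source.
    print(approx.min(), approx.max())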