Additional considerations about RGB / YUV conversions, level ranges, and level legalisation

This is an important matter that every user should understand. There are no magic buttons to avoid these issues, and almost every user will have to deal with a problematic project on some occasion. Only a good understanding of what is going on will get you out of those situations.

Problem symptoms: "My image levels are clamped in the video out...", or "There is a loss of contrast in the video out...", or "I have delivered illegal levels..."


To start with, RGB <-> YUV conversions happen more often than people think. For example, even if all your footage is RGB, you only render to RGB, and the projector or display is RGB, if your SDI transmission signal is set to YUV then RGB/YUV conversions will be involved. And there are many other cases like this.

Note: The following explanations use 10bit notation. A 10bit signal can represent values from 0 to 1023, but not all of them are used for image data, and there can be different interpretations of what they mean.

- In the case of YUV video signals, only one standard exists. The "legal range" goes from 16 to 940 (in 10bit notation). In some cases the rest of the values in 0..1023 can still be used for image data (except the 0..3 values reserved for video/sync signals), but as far as the video standards are concerned those extra ranges are always considered negative values (superblacks) and values over 100% (superwhites). In general, those extra ranges will be clamped on television broadcasts (as broadcasters typically reserve them for their own use), but they are still often used in post production workflows depending on each particular need.

- But in the case of digital RGB formats, two "standards" have come to exist. The fact is that digital RGB video signals use the complete 0..1023 range, which is known as "data levels". But once the VTRs and broadcasters started to support RGB signals, they also defined an "RGB video levels" pseudo-standard, with a legal range and an extra range similar to YUV.

Please note that the RGB signal itself does not change at all, only the interpretation of what is legal and what is not. To make it more confusing, in file based workflows a particular media file (DPX, .mov, ...) does not necessarily declare which standard was used to produce it. Only the person (typically "the client") who made it knows whether the extra ranges were meant to be displayed or whether they were left there just to keep some extra headroom for post-production purposes.

And to make it even worse, sometimes not even that person knew what they were doing...

For these reasons, a wrong type of conversion can produce either a perceived "loss of contrast" (when incorrectly mapping the white point to 940 and the black point to 16 while data levels are required later) or "clamped levels" (when incorrectly mapping 16 -> 0 and 940 -> 1023 and later sending it to a video signal that clamps to legal levels).
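As a rough illustration of the arithmetic behind those two failure modes (this is not Mistika code, just the underlying maths in 10bit notation):

```python
# Illustrative 10bit arithmetic only; not Mistika code.
LEGAL_BLACK, LEGAL_WHITE = 16, 940      # "video levels" legal range
DATA_BLACK, DATA_WHITE = 0, 1023        # "data levels" full range

def full_to_legal(code):
    """Map 0..1023 data levels into the 16..940 legal range."""
    return LEGAL_BLACK + code * (LEGAL_WHITE - LEGAL_BLACK) / (DATA_WHITE - DATA_BLACK)

def legal_to_full(code):
    """Expand the 16..940 legal range back to 0..1023 data levels."""
    return (code - LEGAL_BLACK) * (DATA_WHITE - DATA_BLACK) / (LEGAL_WHITE - LEGAL_BLACK)

# "Loss of contrast": the content was meant as data levels, but it gets squeezed
# into 16..940 and is then displayed as data levels again.
print(full_to_legal(0), full_to_legal(1023))   # 16.0 940.0 -> blacks look grey, whites look dim

# "Clamped levels": the content was meant as video levels, but it gets expanded
# to 0..1023 and the video chain later clamps it back to the legal range.
print(legal_to_full(0), legal_to_full(1023))   # ~-17.7 ~1114.9 -> the original extra ranges are clipped away
```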

But please note that all this depends on the interpretation. The same type of conversion can be perfectly correct in some cases and totally wrong in others, depending on what was in the source images and what standard has been requested for delivery. These are human interpretations and Mistika has no way to guess automatically which type of conversion is expected. What is important, then, is to understand how Mistika works, as you may need to make these decisions when converting to YUV or RGB standards, and also from/to HDR internal processing.

In all this process, an important tool is the Mistika vectorscope, which can also let you see the exact values of certain pixels (Ctrl+Right click to pick values in the image). But the vectorscope representation only tells you that there are out-of-range values, not whether they contain meaningful content. So you will need to use the pick color tool to check individual pixels and understand what is going on.


CONSIDERATIONS WHEN CONVERTING BETWEEN HDR, RGB, AND YUV


To start with, it is important to understand that Mistika always works in HDR space (RGB HDR 32 bit floating point). HDR processing does not use any video standard; instead it supports all possible values from minus infinity to infinity by using 32bit floating point representations. If the displayed black point and white point values are referred to as 0% and 100%, the HDR space can represent values like -9000% superblack, 160% superwhite, 10000% superwhite, and so on.

NOTE: In this document, the term "HDR" refers to Mistika internal processing (an abbreviation for "RGB HDR 32 bit floating point"). Here we are not talking about fancy "HDR displays", "HDR curves", or "HDR cameras" at all.

HDR processing also means that all values coming from the source images are always preserved when applying effects, even if they are negative or over 100%. Inside Mistika, ranges are only range-converted or clamped when the user does it on purpose, either by using specialised effects with those capabilities (ColorGrade, LUT, Legalise, RGBLevels, ACES...) or when rendering to specific formats or video signals with limited bit depths.
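As a minimal sketch of what this means in practice (the convention 0.0 = 0% and 1.0 = 100% is only assumed here for illustration; none of this is Mistika code):

```python
import numpy as np

# Assumed convention for this sketch: 0.0 = displayed black (0%), 1.0 = displayed white (100%).
pixels = np.array([-90.0, 0.0, 0.5, 1.0, 1.6, 100.0], dtype=np.float32)
#                 -9000%   0%   50%  100% 160%  10000%  -- all representable in 32bit float

# Any effect (gain, offset, ...) keeps the out-of-range values intact:
graded = pixels * 2.0 - 0.1
print(graded)                        # approximately [-180.1  -0.1  0.9  1.9  3.1  199.9]

# Only an explicit operation (Legalise, a LUT, a limited render format, ...) throws the extra range away:
clamped = np.clip(graded, 0.0, 1.0)  # [0.  0.  0.9  1.  1.  1.]
```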

Once the internal HDR space is converted to non-HDR formats (either in the video output or in a render to non-HDR RGB or YUV formats), the following will happen:

- When Mistika needs to go from internal HDR to YUV, it will always map the 0% to 100% HDR values into the YUV legal range (16..940), and if there are out-of-range HDR values a part of them will be mapped into the superwhites and superblacks of the YUV format accordingly. (Please note that a YUV signal provides some "HDR" capabilities thanks to the extra ranges explained above, but not the unlimited values that are possible in HDR space.)

If you do not want to lose extreme levels, some effects provide soft-clip capabilities or special HDR curves that can map all the out-of-range values into the limited out-of-range space of the YUV signal.
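The following sketch shows this kind of mapping in simplified form: 0%..100% goes to 16..940, moderate out-of-range values land in the YUV extra ranges, and everything beyond that is either hard-clipped or compressed with a soft-clip. The function name and the soft-clip curve are illustrative assumptions, not Mistika internals:

```python
import numpy as np

def hdr_to_yuv10(x, softclip=False):
    """Map normalized HDR values (0.0..1.0 = 0%..100%) to 10bit Y' codes.

    0.0 -> 16 and 1.0 -> 940; values outside 0..1 fall into the extra
    ranges (0..15 superblacks, 941..1023 superwhites) and are clipped there.
    """
    x = np.asarray(x, dtype=np.float64)
    if softclip:
        headroom = (1023.0 - 940.0) / 924.0   # extra range above white, in normalized units
        footroom = 16.0 / 924.0               # extra range below black
        over = np.maximum(x - 1.0, 0.0)
        under = np.maximum(-x, 0.0)
        # Illustrative soft knee: compress overshoots smoothly into the extra ranges.
        x = np.where(x > 1.0, 1.0 + headroom * over / (over + 1.0), x)
        x = np.where(x < 0.0, -footroom * under / (under + 1.0), x)
    codes = 16.0 + x * (940.0 - 16.0)
    return np.clip(np.round(codes), 0, 1023).astype(np.int32)

print(hdr_to_yuv10([0.0, 1.0, 1.05, 2.0, 3.0]))                 # [  16  940  986 1023 1023] -> big overshoots collapse
print(hdr_to_yuv10([0.0, 1.0, 1.05, 2.0, 3.0], softclip=True))  # roughly [ 16 940 944 982 995] -> overshoots stay distinct
```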

- In contrast, when Mistika needs to go from internal HDR to standard RGB, it will always map the 0% to 100% HDR range into the RGB "data" levels (0..1023 for 10bit RGB), and if there are out-of-range HDR values they will be cropped by default. If what you want is a conversion to "video levels", then you can use the RGBLevels effect, which will map 0% to 100% into the 16..940 RGB levels, thus also mapping some of the out-of-range HDR values into 0..15 and 941..1023. In this way you can preserve a good part of the potential out-of-range values, as in the YUV case.

But please note that RGBLevels only works from RGB to RGB, so it needs to be applied before converting from RGB to YUV, or after converting from YUV to RGB.

To summarise, the conversion between RGB and YUV will map the 0..1023 RGB range to/from the 16..940 YUV range (which is the most common need, by the way). If you want a different result you will need to use the RGBLevels effect before going to YUV or after coming from YUV.
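Again as a sketch (the function names are just for illustration, they are not Mistika APIs), this is the arithmetic summarised above:

```python
import numpy as np

def hdr_to_rgb10_data(x):
    """Default HDR -> 10bit RGB: 0%..100% fills the full 0..1023 data range, the rest is cropped."""
    return np.clip(np.round(np.asarray(x, dtype=np.float32) * 1023.0), 0, 1023).astype(np.int32)

def full_to_legal_10bit(code):
    """0..1023 -> 16..940. This is both what an RGBLevels-style "video levels" remap does
    (RGB in, RGB out) and the range part of the RGB -> YUV conversion itself."""
    return np.round(16.0 + np.asarray(code, dtype=np.float32) * (940.0 - 16.0) / 1023.0).astype(np.int32)

print(hdr_to_rgb10_data([0.0, 0.5, 1.0, 1.2]))  # [   0  512 1023 1023] -> the 120% value is cropped
print(full_to_legal_10bit([0, 512, 1023]))      # [ 16 478 940]
```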

A particular case is the delivery format for broadcasters, which in general require all deliveries to be in legal range, with no pixels using the out-of-range values. For this purpose, the most typical tool is the Legalise effect (although there are others), either for rendering "legal" deliveries or as a display filter for the video output. In this way, you can continue working safely in HDR inside Mistika. But in these cases it is recommended to render another master in HDR to preserve all the information that you may need for future versioning.
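At its simplest, a legalise pass is just a clamp to the legal range; the real Legalise effect offers more control, but the basic idea looks like this:

```python
import numpy as np

def legalise_10bit(frame):
    """Minimal "legalise" idea: clamp every 10bit code value into the 16..940 legal range."""
    return np.clip(frame, 16, 940)

frame = np.array([0, 8, 16, 500, 940, 1000, 1023])
print(legalise_10bit(frame))   # [ 16  16  16 500 940 940 940]
```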

Finally, a totally independent issue is the colour interpretation used during YUV <-> RGB conversions. In mConfig->MasterFormats you need to select CCIR-601, CCIR-709, or Rec-2020/2100 according to your footage.
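The difference between these standards is, among other things, the set of luma/chroma weighting coefficients. The published luma weights are shown below; this is generic reference code, not the Mistika implementation, but it illustrates why the wrong choice shifts colours even when the ranges are handled correctly:

```python
# Published luma coefficients (Kr, Kg, Kb) per colour standard.
LUMA_WEIGHTS = {
    "CCIR-601": (0.299,  0.587,  0.114),
    "CCIR-709": (0.2126, 0.7152, 0.0722),
    "Rec-2020": (0.2627, 0.6780, 0.0593),
}

def luma(r, g, b, standard="CCIR-709"):
    """Y' for normalized R'G'B' using the weights of the selected standard."""
    kr, kg, kb = LUMA_WEIGHTS[standard]
    return kr * r + kg * g + kb * b

# The same pure green produces a different Y' under each interpretation:
print(round(luma(0.0, 1.0, 0.0, "CCIR-601"), 3))  # 0.587
print(round(luma(0.0, 1.0, 0.0, "CCIR-709"), 3))  # 0.715
```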


CONSIDERATIONS WHEN RENDERING

The only formats supporting HDR are the Mistika .js HDR format and the EXR formats, and all of them will preserve all the out-of-range information. Any other render format may crop out-of-range values, at least to a certain extent.

The problem with uncompressed HDR/EXR formats is that they can be too big (16 bits per channel). So there are two alternatives that can be a better solution:

- EXR DWA or EXR ZIP/PIZ. These are compressed formats that are effectively lossless (ZIP/PIZ in all cases, DWA in most cases), and they will typically reduce the size to that of a 10bit format or smaller, at the only cost of intensive CPU usage. ZIP/PIZ produce values identical to the original when decompressed, while DWA is faster and more efficient but may change some values in the least significant bits. However, for images from real cameras these changes are considered to be below the signal-to-noise ratio of the camera, so EXR DWA is considered safe and it is the fastest of these formats. (DWA may not be enough for synthetic images using every bit of information, or for cases where a QC workflow requires delivering exactly the same numbers that were received.)

- When CPU is an issue and EXR cannot provide realtime, then a good compromise is to render to Mistika .js YUV444, which will keep most of the out-of-range information (all of it in normal situations), as opposed to RGB444 .js, which will always crop all the out-of-range information (RGB444 .js does not provide any extra ranges).

At this point, the user may also be tempted to use RGBLevels to convert RGB data levels to video levels and then render to RGB10, thus keeping out-of-range information similar to YUV. But the fact is that this can produce very complicated workflows when loading the rendered clip later. If you later load the rendered clip and send it to a YUV video output, a different conversion will happen for the rendered file compared with what happened before rendering, producing a "loss of contrast" effect when displaying the rendered clips. It then becomes necessary to convert back to data levels, but only for the clips that have already been rendered (so it cannot be done with a simple display filter).
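Numerically, the double conversion looks like this (illustrative arithmetic only):

```python
def full_to_legal(code):
    """0..1023 -> 16..940 (an RGBLevels-style remap, and also the range part of RGB -> YUV)."""
    return 16.0 + code * (940.0 - 16.0) / 1023.0

# 1) RGBLevels converts data levels to video levels before the RGB10 render:
rendered_black, rendered_white = full_to_legal(0), full_to_legal(1023)   # 16.0, 940.0

# 2) Loading that RGB10 file later and sending it to a YUV output applies the
#    standard 0..1023 -> 16..940 scaling again:
print(full_to_legal(rendered_black))   # ~30.5 instead of 16
print(full_to_legal(rendered_white))   # ~865  instead of 940
# -> the rendered clip looks washed out ("loss of contrast") compared to the original timeline.
```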

For that reason, when using uncompressed formats, what is recommended for intermediate renders instead of RGB10 is to render to .js YUV444 (10bit). This format has the same size as an RGB 10bit file, and as it is a 444 format it does not lose chroma information as was the case with YUV422. Rendering to the YUV444 format lets you forget about all the above considerations, as it will always work well by default.

To summarise, we recommend never doing intermediate renders to RGB10 unless there is a well justified reason. Unfortunately, RGB10 has been commonly used for intermediate renders just because it was the only "high end" format available 15 years ago, when it was the de facto standard, but with modern EXR and YUV444 formats there is absolutely no reason to continue using it for this purpose, or at least not "by default".

A last consideration concerns renders to highly compressed codecs for final deliveries (ProRes, MPEG4, H.264, etc). Those codecs can introduce illegal levels when interpolating values (as they cannot represent abrupt transitions; for example, when interpolating pixels going from dark to bright and back to dark they may need to represent it with a mathematical function that also passes through out-of-range values). In general these out-of-range values could be cropped later without affecting the image quality at all (as they were not really there before the render). Unfortunately, many automated "Quality Control" systems are poorly designed (or poorly managed) and may refuse this kind of content without a real reason, just "because it is illegal!"... If you suffer a case like this, you should always check the deliveries in a vectorscope before sending them, and fix them by providing some headroom at the extremes or by softening or blurring sharp luma transitions as necessary.
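If you want to automate that check, a simple approach (assuming you can get the decoded delivery frames as 10bit arrays through your usual decoding tools) is to count how many pixels fall outside the legal range:

```python
import numpy as np

def out_of_legal_report(frame10bit):
    """Count pixels of a decoded 10bit frame that fall outside the 16..940 legal range."""
    frame = np.asarray(frame10bit)
    below = int(np.count_nonzero(frame < 16))
    above = int(np.count_nonzero(frame > 940))
    return {"below_legal": below,
            "above_legal": above,
            "out_of_range_pct": 100.0 * (below + above) / frame.size}

# Synthetic example; in practice 'frame' would come from the decoded delivery.
frame = np.random.randint(0, 1024, size=(2160, 3840, 3))
print(out_of_legal_report(frame))
```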