Yes, that was my plan. The implementation and also the UI should stay the same; only the mathematical operation differs depending on the selected method.
I guess that’s because the “Invert” buttons are currently not there. I will add them. Then it should behave the same way.
You might have used a source with an integer pixel format. Up to now I have only implemented it for the float pixel format, to try it out; I will add the integer path later. But I guess if you set up an OCIO configuration, any source will be converted to float anyway, in case you want to try it out.
Up to now I haven't compared it in detail to the compositor's implementation, but that's a good point. It's a pity that the compositor and the video editor do not share code.
Yes, if I did it right, it should be an implementation of the ASC-CDL color correction method.
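Just for reference, the per-channel operation the ASC-CDL defines is

$$\text{out} = (\text{in} \times \text{slope} + \text{offset})^{\text{power}}$$

applied to R, G, and B independently; negative results of the slope/offset part are usually clamped to zero before the power is applied. The full CDL also defines an optional saturation step on top of that.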
As I understand it, if you have set up OpenColorIO and select the right input color transform for the video strip, it should convert the source material to the linear working color space (ACEScg or ACEScc, for example). The color balance modifier is then applied in that space. For output on the screen and for rendered output, the result is converted to the viewing color space, e.g. sRGB.
But I am no expert on how color is handled in Blender, so please correct me if I’m wrong.
@JosefR You should reach out to @iss on blender.chat or you could PM him there. He’s the BF dev working on the VSE and can let you know if this enhancement is welcome or not.
That is correct: when you set the color profile for a strip, the image is converted to float if needed and the profile is applied to convert the footage to scene-referred linear space. I would have to double-check whether the float conversion happens automatically or only when requested explicitly.
Having said that, I saw that you imported the footage as a movie strip. Currently we only import movies with 8-bit color depth. Last time I wanted to implement support for higher bit depths, I think the issue was that the FFmpeg libraries are not compiled with higher-bit-depth support.
Yes, there is some code that is technically duplicated between the compositor and the sequencer. If the compositor code is usable for the VSE, it could be moved into a library (see BLI_math_color.h) and shared between both modules.
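If it turns out to be shareable, I could imagine something along these lines next to the other helpers (just a sketch, the function name is made up):

```c
#include <math.h>

/* Sketch of a shared per-pixel ASC-CDL helper in the spirit of
 * BLI_math_color.h; the name is hypothetical. Both the compositor and the
 * sequencer modifier could call this instead of duplicating the math. */
static inline void color_balance_cdl_v3(float rgb[3],
                                        const float slope[3],
                                        const float offset[3],
                                        const float power[3])
{
  for (int c = 0; c < 3; c++) {
    const float x = rgb[c] * slope[c] + offset[c];
    /* Clamp negative intermediates before the power step. */
    rgb[c] = powf(x > 0.0f ? x : 0.0f, power[c]);
  }
}
```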
I think the patch looks reasonable even if the code can't be shared.
I didn't know this. Any chance of seeing this fixed in the future? I guess using an image sequence is not really an alternative due to file sizes (at least for 4K footage).
OK, then I will continue the way I started and send the patch when I think it is done. It should be better anyway to keep the change consistent and not try to rework other code at the same time.
I may be incorrect on this topic, but from what I read, this is made a bit more complicated by the fact that libx264 can be compiled with, say, 10-bit support, but then it will only be able to decode and encode 10-bit footage; at least that's how I understand it.
So it's not (only) FFmpeg that is configured this way. From what I read, libx265 can decode and encode various bit depths, and perhaps other codecs can too. I would have to do more research to see if and how this could be implemented.
I didn't quickly see any specific flags on FFmpeg to enable/disable 10-bit support, but x264 is built with the default settings, which enable both 8- and 10-bit support.
I will have to re-check this; it was a long time ago that I looked into it. I remember that I was getting no image, but I didn't check the decoded buffer directly to see where the issue was.
@JosefR How are things going with your patch? I think the deadline for getting new features into 3.0 is the 22nd, so you might consider submitting your patch as soon as possible.
Great. Maybe the description could use a bit more information on how to use it? For example, how to get all of the color space options, and why it is necessary to uncheck proxies. Does the button have any effect if, e.g., proxies are on and the color spaces are not set up the way you have them?
I disabled proxies because I got heavy color banding when using them. I guess the proxies convert the 10-bit DNx source footage to 8-bit (256 instead of 1024 levels per channel). But in general, a few lines about how to use it would be a good idea; that will be needed for the documentation anyway.
Cool. If you're into color grading, then at some point it could be interesting to discuss a suggested color grading workflow with you, both for the documentation and for mapping out the current shortcomings in the toolset and the UI/UX.
I think I might not be the right person for that. I was just trying to find a tool on Linux for editing some footage from my Lumix S5 camera, and that got me reading about the topic a bit. From my perspective at least, the technology behind it is only half of the story; a lot of it really is the UI and how film-like or human-eye-like the controls behave.
But I guess the most important thing in Blender would be improving the responsiveness of the controls. Without any GPU acceleration (the integration of HIP might be an opportunity, but all of this could also be done with OpenGL ES), working with 4K footage is currently really painful, even on a fast computer.
@JosefR Congratulations on having your patch committed. I noticed two things:
Opening an existing .blend and switching to the color processing from your patch makes all of the colors black (and the preview black). In a new file the default values seem okay.
Comparing with the color node, the offset value is black, but in the modifier in the sidebar it's white. Does this make sense, and is everything okay?
I guess I missed setting the correct default values somewhere. I will try to find out where this has to be done.
I did that to allow applying negative offset values while having the slider in the middle position by default. Otherwise, you would have to select the “invert” button to get a negative offset.
Technically it would be more correct to have black there by default, because that is the RGB value that will actually be used as the offset. Right now I am compensating for this by correcting the value coming from the widget:
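Roughly like this (simplified, not the literal patch code; `widget_color` is just an illustrative name):

```c
/* Simplified sketch: the color picker shows white (1.0, 1.0, 1.0) as the
 * neutral default, and the value actually fed into the ASC-CDL math is
 * shifted so that this default means "no offset". */
static void offset_from_widget(float r_offset[3], const float widget_color[3])
{
  for (int c = 0; c < 3; c++) {
    r_offset[c] = widget_color[c] - 1.0f; /* 1.0 in the UI -> 0.0 offset */
  }
}
```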
For consistency with the compositor it might really be better to change that. But the compositor uses its “basis” field to offset the color value and achieve negative values, since there is no “invert” option there. I am not sure which solution is best.