There were some ideas early on to achieve that by having something like an Is Viewport node, similar to Geometry Nodes, but that was not approached yet, mainly because the compositor needs to support lazy evaluation first, that is, only computing part of the node tree depending on some boolean.
I am working toward supporting lazy evaluation, adding boolean sockets, and so on. Once this is done, it should be possible to have something like an Is Viewport node.
The only nodes that are not fully supported are Texture, and Render Layers if you use passes in Cycles or other render engines. But that is only for the viewport compositor.
Removing these nodes will still show the error in the scene info when rendering.
Can you provide more details? Where is the scene info exactly when rendering?
I’ve got 3 separate render layers mixed together, but they don’t appear while in realtime viewport compositing. Is this the same thing you’re referring to as “not being fully supported”? If so, any idea when support will be added? Thanks!
@OmarEmaraDev
Hi, is there any plan to add motion blur to the compositor Transform/Translate node? It is an important feature that has been missing from the compositor for a very long time. It would be great to have it.
Thanks.
@OmarEmaraDev Great work on the recent new nodes that were added! I have been posting some viewport examples on twitter (@redjam9) over the last few days, and getting questions about the UVs from the new Image Info node.
Why does the UV map node require a 1 in the Z component? I would have thought that a default of 0 would have worked, but I am guessing this is a legacy thing, since it was designed to use a UV pass originally?
Is there a reason that the UVs from the Image Info node are in a -1 to 1 range and not 0 to 1, as would be expected? I find that most of the time you need to remap them. Not a big deal, but I am curious about the decision.
Is the end goal to have a Vector2 data type in the compositor, and presumably there would then be a Vector2 math node for this data type to manipulate UVs?
Cheers!
For anyone wondering what I am talking about: there are new Image Info and Vector Math nodes, so you can now do typical shader-type setups in the compositor for custom FX.
The halftone dot network as an example:
You are correct that this is a consequence of the design of the UV Pass that we get from Cycles. In particular, the UV Pass that Cycles gives us is a 2D vector with an alpha channel, that is, XYA. The alpha channel is premultiplied into the sampled colors, so if it is zero, your output will be zero. The reason Cycles writes an alpha is to achieve some sort of antialiased edges, because edges will have semi-transparent alpha.
So it is not a legacy thing. But we are aware of how unintuitive it is, and we plan to introduce a dedicated node that samples images without such caveats.
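To make the XYA behavior concrete, here is a minimal NumPy sketch of how sampling with such a pass behaves (illustrative nearest-neighbor sampling with names of my own choosing, not the actual Cycles or compositor implementation):

```python
import numpy as np

def sample_with_uv_pass(image, uv_pass):
    """Illustrative sampling with an XYA-encoded UV pass.

    `uv_pass` has shape (height, width, 3): the X and Y channels are the
    [0, 1] coordinates used to sample `image`, and the third channel is an
    alpha that gets premultiplied into the sampled colors. Where the alpha
    is zero, the output is zero, which is why a hand-built UV map needs a
    1 in its third component.
    """
    img_h, img_w = image.shape[:2]
    x = np.clip((uv_pass[..., 0] * (img_w - 1)).astype(int), 0, img_w - 1)
    y = np.clip((uv_pass[..., 1] * (img_h - 1)).astype(int), 0, img_h - 1)
    sampled = image[y, x]      # fancy indexing -> (height, width, channels)
    alpha = uv_pass[..., 2:3]  # keep a trailing axis for broadcasting
    return sampled * alpha     # premultiplied: zero alpha -> zero output
```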
Zero-centered coordinates help with textures and procedural content that are also zero centered. For instance, you can create a vignette by simply using a Gradient texture. Furthermore, scaling is also around the center, which I think is desirable. Other textures don't really have a center, so having a [-1, 1] range seems okay for them.
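As a rough illustration of why zero-centered coordinates are convenient, a radial vignette falls out of them directly, roughly what a spherical Gradient texture gives you (a NumPy sketch of the math, not how the node is implemented):

```python
import numpy as np

def vignette(width, height):
    """A radial vignette straight from zero-centered coordinates."""
    # Zero-centered [-1, 1] coordinates (a square image is assumed here
    # for simplicity; see the aspect ratio discussion further down).
    xx, yy = np.meshgrid(np.linspace(-1.0, 1.0, width),
                         np.linspace(-1.0, 1.0, height))
    # The distance from the center is the falloff, with no remapping.
    return np.clip(1.0 - np.sqrt(xx**2 + yy**2), 0.0, 1.0)
```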
There is already a Vector2 data type internally in the compositor, and yes, we plan to extend the vector socket to support different dimensions, not just in the compositor but also in other node systems.
@OmarEmaraDev thanks for your detailed reply!
Ok, that makes sense now that I know the XYA encoding of the UV Pass. Good to know.
With regards to the 0-1 UV range, I think most artists learn to create shaders using a 0-1 range on a standard mapped mesh plane, so it becomes second nature and the math becomes easier. I do a lot of gamedev shaders, and game engine screenspace is generally 0-1. I think Unity may use -1 to 1 for its clip space UV, but it uses 0-1 for screenspace. Godot uses 0-1; I am not sure about Unreal.
All of the examples I posted work better in the 0-1 range, but I agree that some shaders like vignettes or circles are better with the -1 to 1 range. Personally, I would use 0-1 most of the time, so I would almost always need to put the 2 extra nodes in. You should definitely ask some other artists to get a consensus on this!
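For reference, assuming the 2 extra nodes are a Multiply and an Add, the remap they perform is just:

```python
# Remap zero-centered [-1, 1] coordinates to the [0, 1] range
# (a Multiply by 0.5 followed by an Add of 0.5 in node terms).
uv01 = uv * 0.5 + 0.5
```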
I am curious whether the procedural textures like the noise and gradient that you are demoing are likely to be added in the 4.5 dev cycle? Would also love to know if there are future plans for some sort of repeat/looping zone like Geometry Nodes and the NPR shaders use?
Yes, we have plans for that. Though not sure about the timeline yet.
I guess once we land texture nodes, we can ask for feedback on the matter.
Looking at your example above, I am not sure remapping is needed in the grid of circles, for instance; you would just need to divide your count by two. Furthermore, for the UVs, one should note that the texture coordinates output is also not [-1, 1] in both directions: it is [-1, 1] in the greater dimension and a smaller range in the smaller dimension. So you will probably have to do remapping to support non-square aspect ratios. Just mentioning this so that we consider all cases.
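A small sketch of how the per-axis range follows from the resolution, as I understand the convention described above (not the compositor's actual code):

```python
def coordinate_half_extent(width, height):
    """Half-extent of the texture coordinates per axis.

    The greater dimension spans [-1, 1] and the smaller one spans a
    proportionally smaller range, so textures are not stretched.
    """
    greater = max(width, height)
    return width / greater, height / greater
```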
No wonder I have been having issues getting non-square dimensions to work, as I thought it was -1 to 1 on both axes. Does this mean that if the screen resolution is 1920x1080, it is -1 to 1 on the X and -0.56 to 0.56 (approx) on the Y? With a 1080x1920 resolution it would be the other way around, with -1 to 1 on the Y and -0.56 to 0.56 on the X?
Yes, your assessment is correct. The reason for this is to avoid stretching textures when using the texture coordinates. But getting [0, 1] texture coordinates is as easy as dividing the pixel coordinates by the resolution:
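Something along these lines, sketched in NumPy (the function name is mine, and pixel-center sampling is an assumption):

```python
import numpy as np

def pixel_to_unit_uv(width, height):
    """[0, 1] coordinates on both axes, independent of aspect ratio,
    by dividing the pixel coordinates by the resolution."""
    px = np.arange(width) + 0.5   # pixel centers, assumed convention
    py = np.arange(height) + 0.5
    xx, yy = np.meshgrid(px, py)
    return np.stack([xx / width, yy / height], axis=-1)
```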
Thanks for the response.
Yes, it would be computed when the transform is applied.
I have done a hacky version with a Vector Blur node: some Math nodes combined into an XYZ vector plugged into the Speed input, with a Value node driving the translation and the math. It is hacky, and it works only for a camera shake effect.
A proper 2D motion blur for the Transform node would be life changing!
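For anyone wanting to reproduce the hack described above, the idea as I understand it is to feed a uniform speed vector into Vector Blur. A NumPy sketch of the equivalent math (not the actual node setup):

```python
import numpy as np

def constant_speed_pass(width, height, dx, dy):
    """A uniform speed vector pass to plug into Vector Blur's Speed input.

    dx, dy are the per-frame translation in pixels. Every pixel gets the
    same vector, which is why this hack only works for whole-image motion
    such as camera shake, not for per-object transforms.
    """
    speed = np.zeros((height, width, 3), dtype=np.float32)
    speed[..., 0] = dx
    speed[..., 1] = dy
    return speed
```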