Images for Blender 4.0 Release Notes and Manual

Great, thanks so much.

I’m afraid not; what we need them for could be considered commercial use. The terminology here is confusing, but I think a restriction on commercial use would make it not public domain.

1 Like

One more question: can .exr files be in other color spaces (like BT.2020, AP1, or P3), for example to demonstrate that AgX handles wide gamuts as well? I have a scene that uses the BT.2020 color space and I would like to share it if there are no problems with that.

1 Like

Can you please share the blend file?
I suspect the fire shader itself didn’t take advantage of the smoke sim temperature data and instead just assigned an arbitrary color to the emission color.

The fire shader doesn’t directly use the temperature data to control black body emission. Instead it passes the temperature data through a colour ramp node to give the creator more control over the colours, at the cost of making it less realistic and possibly making the effects of Filmic worse.
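As a rough illustration of that kind of setup, here is a minimal bpy sketch (not Intel’s actual material; the material name, ramp colours, and emission strength are made up): the simulation’s temperature attribute is run through a Color Ramp and fed into the Principled Volume emission colour.

```python
import bpy

# Hypothetical material; in practice this lives on the smoke domain object.
mat = bpy.data.materials.new("FireSmokeSketch")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

# Read the smoke simulation's temperature grid.
attr = nodes.new("ShaderNodeAttribute")
attr.attribute_name = "temperature"

# Remap temperature to arbitrary colours; this is where the creator gains
# control at the cost of physically based blackbody emission.
ramp = nodes.new("ShaderNodeValToRGB")
ramp.color_ramp.elements[0].color = (0.0, 0.0, 0.0, 1.0)    # cold -> black
ramp.color_ramp.elements[1].color = (1.0, 0.35, 0.05, 1.0)  # hot -> orange

vol = nodes.new("ShaderNodeVolumePrincipled")
vol.inputs["Emission Strength"].default_value = 10.0
out = nodes.new("ShaderNodeOutputMaterial")

links.new(attr.outputs["Fac"], ramp.inputs["Fac"])
links.new(ramp.outputs["Color"], vol.inputs["Emission Color"])
links.new(vol.outputs["Volume"], out.inputs["Volume"])
```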

This is the node tree for the smoke + fire material, directly from Intel.

2 Likes

Modified this scene by LRosario, with additions of Blend Swap | LB Desk, Blend Swap | LB Chair, and Blend Swap | Mac Pro 2020 & Apple Pro Display XDR.

All are 30-minute equal-time renders:
- path tracing
- diffuse product MIS path guiding
- re-sampled importance sampling path guiding
- no denoise
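For context, here is a minimal sketch of the Cycles settings behind this kind of equal-time, no-denoise comparison (the commented line for choosing the guiding sampling strategy is an assumption about the API and may require Developer Extras):

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

cycles = scene.cycles
cycles.time_limit = 1800        # 30 minutes, for equal-time renders
cycles.use_denoising = False    # compare raw noise levels
cycles.use_guiding = True       # enable path guiding (CPU rendering only)

# Assumption: the directional sampling strategy (diffuse product MIS vs.
# re-sampled importance sampling) is exposed roughly like this:
# cycles.guiding_directional_sampling_type = 'RIS'
```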



10 Likes

Yes, that’s possible.

This is nice. I think for the release notes we want a version without the denoising.

1 Like

(I edited my previous post to include the noisy versions for that scene.)

Default Chiang vs. default Huang in the hair example .blend. The consistency of Huang is really nice; Chiang seems to have a different color or reflectivity at every angle.






20 Likes

Some images have been added to the release notes now. The most important thing we are still missing is images for light and shadow linking.

The Principled BSDF manual page has also been updated:

8 Likes

Any use case you’d want shown in particular? Here’s one showing an environment being lit by one light while the character is being lit by three.

2 Likes

Which plugin did you use for the hair?

That’s just vanilla Blender. The file is from the demo files page here: Demo Files — blender.org

1 Like

I think there could be one more example for the new “Huang” hair shading model that compares the results when the hair is seen against the light, as described in the release notes.

Something like the example from the paper:

2 Likes

I’m not sure the image makes it obvious that the room is lit by 3 lights. Perhaps it would be clearer if there were two characters, each with their own light.

1 Like

Yes, or a scene with some other objects in it, where only the character has a strong rim light so that it stands out. I think that would help clarify it, perhaps with two renders, with and without light linking, side by side.

1 Like

Note: Huang is compared to the relatively ancient d’Eon model for that image. Chiang is already a much better model than d’Eon, yet Huang still clearly beats Chiang in noise, speed, and consistency.

I find that Chiang really struggles in backlit scenes.



This 50-minute Chiang render seems comparable to the 5-minute Huang render.


Another angle (this one got compressed quite heavily; uncompressed version here: 2 Chiang vs 2 Huang - Album on Imgur)


5 Likes

I suppose you didn’t change the roughness when switching between the two models? If you want a fair comparison of the two variants, you might want to tune the roughness of the Huang model lower; then you should see that it has a different color for every angle too. This is expected for hair because the scattering comes from different lobes and travels different path lengths. The variation across angles is actually a characteristic/selling point of the newly supported elliptical cross-section. This poster might make it clearer: https://dl.acm.org/action/downloadSupplement?doi=10.1145%2F3532719.3543236&file=poster.pdf

At its core, the d’Eon model is not that ancient compared with Chiang. Chiang’s improvements are introducing direct coloration, recovering the missing energy, and using a near-field model instead of a far-field one. The first two points are irrelevant here, and since both d’Eon and Huang are far-field models, it makes more sense to compare those two.
A near-field model works well and is faster when the roughness is high; otherwise it looks quite noisy, as you show in the images. The d’Eon model does not have this problem.
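If it helps to reproduce such a comparison, here is a small bpy sketch that switches a material’s Principled Hair BSDF between the two models with separately tuned roughness. “HairMat” is a hypothetical material name, and the `model` enum values are an assumption based on the 4.0 release notes.

```python
import bpy

def set_hair_model(material_name, model, roughness):
    """Switch a material's Principled Hair BSDF scattering model and roughness."""
    mat = bpy.data.materials[material_name]
    for node in mat.node_tree.nodes:
        if node.type == 'BSDF_HAIR_PRINCIPLED':
            # Assumption: the 4.0 scattering model enum is exposed as
            # `model` with items such as 'CHIANG' and 'HUANG'.
            node.model = model
            node.inputs["Roughness"].default_value = roughness

# Chiang at its original roughness vs. Huang tuned lower for a fairer match.
set_hair_model("HairMat", 'CHIANG', 0.4)
# set_hair_model("HairMat", 'HUANG', 0.2)
```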

3 Likes

Equal sample count; Chiang’s roughness is 0.4 while Huang’s is 0.2.




4 Likes

Modified this vehicle by LRosario.

Using light linking to artistically light a vehicle without having to meticulously arrange and shape lights so they don’t create needless reflections in other areas. (Uncompressed: Light linking vs none - Album on Imgur)
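For anyone wanting to reproduce this kind of setup from Python, here is a minimal sketch of linking one light to a single receiver object. The object names are hypothetical, and it assumes the Object.light_linking properties added in 4.0.

```python
import bpy

# Hypothetical names; replace with the light and target object in your scene.
light = bpy.data.objects["RimLight"]
vehicle = bpy.data.objects["Vehicle"]

# Collection of objects this light is allowed to illuminate.
receivers = bpy.data.collections.new("RimLight Receivers")
bpy.context.scene.collection.children.link(receivers)
receivers.objects.link(vehicle)

# Assumption: light linking is exposed as Object.light_linking in 4.0.
light.light_linking.receiver_collection = receivers
```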


9 Likes

Completely agree.

The best example scenes for “what does this feature do” are not necessarily wonderful artistic works to be admired. Sometimes two spheres on a floor are better than a scene with so many dramatic things going on that the reader can’t cleanly process what the point is.

1 Like