I can confirm the latest Cycles X experimental branch ("June 1st") is working fine on my side without me doing anything other than downloading and running it.
(Using the barbershop test scene both on the CPU and the GPU).
Hi, this is today's new build (and even this shows nothing); with debug enabled I get this info:
I0602 12:19:15.859195 15751 blender_python.cpp:193] Debug flags initialized to:
CPU flags:
AVX2 : True
AVX : True
SSE4.1 : True
SSE3 : True
SSE2 : True
BVH layout : EMBREE
CUDA flags:
Adaptive Compile : False
OptiX flags:
Curves API : False
Debug : False
OpenCL flags:
Device type : ALL
Debug : False
Memory limit : 0
mtree updater verbose is enabled
mtree Updater: Read in json settings from file
register_class(...):
Warning: 'mtree_settings_panel' does not contain '_PT_' with prefix and suffix
loading in background
<bpy_struct, WindowManager("WinMan") at 0x7f4ff2491008>
And when the render starts, I also get this repeated warning in the console:
WARN (bpy.rna): source/blender/python/intern/bpy_rna.c:1499 pyrna_enum_to_py: current value '-1' matches no enum in 'SpaceNodeEditor', '(null)', 'tree_type'
Would the new engine open up the path to particle/instance-specific shader displacement?
Aside from object transformations, instances are as static as it gets. Shape keys aren't viable at large scale (at least I guess), so I hope this might find a place during rendering.
Hi, I am out of ideas now; the only thing left to check is this:
In Preferences > System you have to click at least once on the device you want to render on. I have to click on CUDA or OptiX to get it enabled in the render settings.
Maybe it is the same for CPU?
Is CPU grayed out?
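For what it's worth, the same device selection can be scripted from Blender's Python console. This is a minimal sketch using the `bpy` API; it only runs inside Blender, so treat it as a configuration fragment rather than a standalone script:

```python
import bpy

# Cycles stores its compute-device settings in the add-on preferences.
prefs = bpy.context.preferences.addons["cycles"].preferences

# Pick the backend ('NONE', 'CUDA', 'OPTIX', ...) -- equivalent to clicking
# the device tab in Preferences > System.
prefs.compute_device_type = "CUDA"

# Refresh the device list and check that each device is actually ticked;
# an unticked device can leave GPU rendering silently unavailable.
prefs.get_devices()
for device in prefs.devices:
    print(device.name, device.type, device.use)
```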
Cycles X started out well on the GPU for me. On a space scene (a spaceship with an emission shader between its plating, a planet in the background, stars generated from alpha-transparent emission shaders on planes, all blended into an altered 16k space HDRI), it took about 50 seconds at 1000 samples in 1080p at its fastest on two ASUS ROG Strix 1080 Advanced cards, and about 2 minutes 45 seconds on the CPU, an 18-core Intel X-series processor (I believe its exact name is the i9-7980XE). But that was at first. Two days later I came back to it and it randomly began performing much differently: the GPU would take 23 minutes and the CPU 32 minutes, and it would not move through samples quickly at all. With the current Cycles, by contrast, I consistently get 55 seconds to 1 minute 15 seconds on the CPU, and about 1 minute 10 seconds to 1 minute 30 seconds on the GPU. Until Cycles X is stabilized and as dependable as the current Cycles, I will definitely be sticking with the current version.
Faster, but… the latest build (a59c17e3f2f1) looks broken. I've had all sorts of glitches and bugs: overexposed images, frames rendered on top of other frames, and glitches like this one (denoising in the viewport).
If I had to guess, I'd say something is wrong with the buffer. Disabling 'Persistent Data' fixes the overexposed images and the frames rendered on top of other frames. Besides that, when I enable denoising, I sometimes see two additional frames at the bottom of the screen, as if I were debugging and looking at the buffer contents. Maybe that's unrelated, but that's my guess.
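For anyone who wants to toggle that setting from a script rather than the UI, this is a one-line `bpy` fragment (Blender-only, so treat it as a configuration sketch). The diagnosis in the comment is the guess from the post above, not a confirmed cause:

```python
import bpy

# 'Persistent Data' keeps scene data in memory between frames to speed up
# re-renders; the glitches described above look like a buffer-reuse issue,
# so turning it off forces a clean rebuild for every frame.
bpy.context.scene.render.use_persistent_data = False
```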
These scenes may have been modified to remove features the Cycles-X branch doesn't support, or maybe they've been left as-is so features being re-implemented can be tested. I'm not entirely sure.
In theory, checking performance in the scene created by @JohnDow will give you a general idea of the performance characteristics of each Cycles-X version, but for the best understanding of how each change impacts performance you should use multiple scenes with a variety of characteristics, like the scenes listed above.
As a side note: testing performance between Cycles-X versions is mostly for your own benefit. As stated above, Brecht and Sergey do regression testing on their own computers, and if a huge performance loss occurs they probably already know about it. However, performance testing on your own computer does have one benefit: understanding how changes affect performance on various pieces of hardware. At the moment, Brecht and Sergey SEEM (I'm just guessing here) to do their regression testing on an RTX 6000 and an RTX A6000. Regression testing on that hardware will not pick up performance regressions that may occur on older hardware like the GTX 10 series (Pascal architecture), the GTX 900 series (Maxwell architecture), or older.
As such, I would probably avoid making frequent posts here with benchmarks comparing different changes in Cycles-X performance unless you notice something out of the ordinary in your testing. However, this is my own opinion; the decision on what really happens comes down to the forum moderators and to Brecht and Sergey.
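If you do run your own comparisons, a small timing harness keeps the numbers honest across builds. This is a rough sketch, not an official tool: the build paths and scene file below are placeholders, and it simply times Blender's documented headless render invocation (`blender -b file.blend -f 1`) over a few runs:

```python
import subprocess
import time

def time_render(cmd, runs=3):
    """Run `cmd` several times and return per-run wall-clock seconds.

    Taking several runs (and comparing the minimum) smooths out OS caching
    and scheduling noise, which matters when two Cycles-X builds differ by
    only a few percent.
    """
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        timings.append(time.perf_counter() - start)
    return timings

# Hypothetical usage -- paths and scene are placeholders, not real builds:
# for build in ("/opt/blender-cycles-x/blender", "/opt/blender-2.93/blender"):
#     times = time_render([build, "-b", "scene.blend", "-f", "1"])
#     print(build, min(times))
```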