Cycles Requests

Could we have an additional toggle/percentage slider in Cycles for “Final Quality in Viewport”?

When working with a lot of instances or particles, it is sometimes necessary to lower the number of particles shown in the viewport to get more FPS for navigating.

If I want to see all the particles in the Cycles realtime render, I also have to show 100% of the particles in the viewport, which makes it very laggy in some cases. (OpenGL and Cycles have to calculate and load the geometry and shaders twice, I think.) Even when the instances are set to display as boxes, performance drops.

If I could decrease the number of particles in the viewport to maybe 10%, I could navigate much faster, but still see the final result in Cycles and tweak the shaders and lights without pressing F12 and having to wait.

I know this could be set manually, but in big scenes there are a lot of settings to keep tweaking lower and higher all the time (20 individual particle systems, for example).
A toggle or even a slider would make some things much more comfortable.

- Fine-tune displacement
- See the render resolution of subdivision surfaces
- Shade with the maximum texture resolution even when the viewport texture size is limited
- See the behavior of many instances in light and shape
- See character hair at final quality
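A stopgap that exists today is scripting the per-system viewport percentage instead of tweaking each slider by hand. A minimal sketch, assuming the Blender 2.8 Python property name `display_percentage` (it was `draw_percentage` in 2.7x); note it does not solve the original complaint, since the rendered viewport still follows the viewport count:

```python
# Batch-set the viewport display percentage for every particle system
# in the file, rather than adjusting 20 sliders individually.
# Assumption: Blender 2.8 API name `display_percentage`.
try:
    import bpy
except ImportError:
    bpy = None  # lets the helper be read/tested outside Blender


def set_viewport_particles(percentage):
    """Show only `percentage`% of particles in the viewport.

    Returns how many particle settings blocks were touched
    (0 when running outside Blender).
    """
    if bpy is None:
        return 0
    touched = 0
    for psettings in bpy.data.particles:
        psettings.display_percentage = percentage
        touched += 1
    return touched


set_viewport_particles(10)  # navigate at 10%; F12 renders still use 100%
```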

There is an interesting implementation of this in C4D.
Every generator in the scene is driven by it: the resolution of curve and mesh subdivisions, and even the number of drawn instances.

I think some sort of viewport LOD slider could be beneficial for big scenes with a lot of modifiers and/or instances in general.

Thanks for reading

Cheers Daniel


This sounds similar to the existing Simplify settings:
https://docs.blender.org/manual/en/latest/data_system/scenes/properties.html#simplify

Indeed, Blender already has that kind of simplification.

And I wonder if this could be reorganized, as other parts of the Properties editor have been. Many viewport speed hacks, as well as many Cycles bias settings, are spread here and there, of course for reasons (e.g. the light threshold sits in the Sampling panel because it makes sense there), but I wonder if they could all be grouped into a "bias" panel, since they all share the essence of being a shortcut for solving the light transport equation. I'd love to have a single place to check all my speed-ups. Last but not least, having them in a single panel would allow "bias presets".


Yes, but those settings also apply to Cycles viewport rendering.
Maybe it would work if there were two individual Simplify settings, one for OpenGL and one for Cycles.
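For what it's worth, Simplify already stores separate viewport and render values for some settings; the catch is that Cycles viewport rendering follows the viewport set. A sketch, assuming the `RenderSettings` property names from the 2.7/2.8 Python API (the concrete numbers are only illustrative):

```python
# Configure Simplify with cheap viewport values and full-quality
# render values. Assumption: 2.7/2.8 RenderSettings property names.
try:
    import bpy
except ImportError:
    bpy = None  # running outside Blender


def configure_simplify(scene=None):
    """Apply viewport/render Simplify values; returns what was applied."""
    values = {
        "use_simplify": True,
        "simplify_subdivision": 1,             # viewport subsurf cap
        "simplify_subdivision_render": 6,      # final-render subsurf cap
        "simplify_child_particles": 0.1,       # 10% child particles in viewport
        "simplify_child_particles_render": 1.0,
    }
    if bpy is not None:
        rd = (scene or bpy.context.scene).render
        for name, value in values.items():
            setattr(rd, name, value)
    return values


configure_simplify()
```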

A noiseless Cycles render, without super-high sample counts that take days to render and without post editing, would be a dream.

There are already denoising methods that do a pretty good job of removing noise without removing too much detail or making the image look blurry or distorted.

There is even an AI-accelerated denoising tool by NVIDIA which performs very well. It's almost unbelievable how it can take very poorly sampled, noisy images and still make them look fine, as if they were rendered with high sample counts.

Unfortunately, this AI-accelerated denoiser works only on NVIDIA GPUs, since it uses the OptiX engine, which requires Kepler, Maxwell, Pascal, or Volta graphics cards. So AMD GPUs, or NVIDIA GPUs older than Kepler, can't use it.

But I think there are other good denoising mechanisms out there that would make Cycles more pleasant to use, without the annoying noise.


…for example the Cycles denoise feature

One problem I often have is simplifying particles. I want to limit the viewport so it doesn't lag, but I do want to see their effect on the scene when preview rendering (Shift+Z), without enabling 9,000,000 particles in the viewport.


Cycles denoising is quite splotchy compared to the AI Denoiser


hi,

DEBUG SHADING MODES in Cycles… it would be cool to have an option to quickly override shaders during test renders, something like in Arnold (basic shading, UVs, wireframe, normals, etc.). Sometimes it's nice to see just a simple shader with displacement, without any color textures.

I know you can do something similar using Ctrl+Shift with the Node Wrangler add-on, but it's pretty limited across multiple objects…
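Until a built-in debug mode exists, a scene-wide version of this can be scripted with the view layer's material override. A rough sketch, assuming the 2.8 API (`ViewLayer.material_override`); the material name `Debug_Clay` and the gray value are made up:

```python
# Override every material in the active view layer with a neutral gray
# "clay" material, similar to a basic-shading debug mode.
# Assumptions: Blender 2.8 API; the name "Debug_Clay" is invented.
try:
    import bpy
except ImportError:
    bpy = None  # running outside Blender


def enable_clay_override(name="Debug_Clay"):
    """Assign a gray diffuse material as the view layer override.

    Returns the override material's name; set
    `view_layer.material_override = None` to restore normal shading.
    """
    if bpy is None:
        return name
    mat = bpy.data.materials.get(name)
    if mat is None:
        mat = bpy.data.materials.new(name)
        mat.diffuse_color = (0.5, 0.5, 0.5, 1.0)  # RGBA in 2.8
    bpy.context.view_layer.material_override = mat
    return mat.name
```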

Thx
L.


I have a long-time workaround: all my materials are built from one rather simple "ubershader" node group, so I can instantly switch all objects to neutral gray, add a ramp node to check albedo values, etc.
This also lets me leave out mirrors and glass (just don't use the node group on them).
The node group can be a simple pass-through, as in the example below.

[image: pass-through node group]


I agree. Also, to render efficiently, the tile sizes should be different for CPU and GPU, so that both render a tile in roughly the same number of seconds. For example, a GPU might render a 512 px tile in around 5 seconds, so the CPU needs a tile size that renders in an equivalent time (the same 5 seconds), maybe 32 or 64 px. This would avoid tying the CPU up on tiles that the GPU could have rendered faster.
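The matching-tile idea is simple arithmetic: if tiles should take equal time, the CPU tile edge is the GPU tile edge divided by the square root of the speed ratio. A tiny sketch (the 64x ratio is an assumed number you would measure on your own hardware):

```python
import math


def matching_cpu_tile(gpu_tile_px, speed_ratio):
    """Tile edge (in px) for a CPU so its tile takes about as long as
    the GPU's. `speed_ratio` = GPU pixels/s divided by CPU pixels/s.
    Equal time means gpu_tile^2 / gpu_speed == cpu_tile^2 / cpu_speed.
    """
    return round(gpu_tile_px / math.sqrt(speed_ratio))


# A GPU ~64x faster per pixel pairs a 512 px tile with 512/sqrt(64) = 64 px:
print(matching_cpu_tile(512, 64))  # -> 64
```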

(translated via Google)
Hello, comrades.
I don't know where, or whom, to ask about a question that concerns me.
Will 2.8 have the ability to bake data into vertex colors?


You can already do this; for examples, see either bricktricks or oslpy. The only thing that is really lacking is GLSL viewport support, as this bug report points out (but it does include a multitexture mix script, so that may be interesting to you).

Wow that’s awesome! I didn’t know that. Thank you for pointing it out to me. I’ll delete my original comment.

Were you thinking of something like this?
https://developer.blender.org/D4255

For adaptive subdivision, would it be possible to have a "max subdivisions" option on the modifier as well? If your displacement is just a baked-down multi-resolution sculpt, there's a finite number of subdivisions until you've matched the original high-res sculpt. You may not want to subdivide further, even if those triangles are larger than the target size.
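Cycles does expose a scene-wide cap today (the request above is for a per-modifier one). A sketch, assuming the experimental feature set and the `cycles.max_subdivisions` / `cycles.dicing_rate` property names from the Cycles Python API:

```python
# Clamp Cycles adaptive subdivision globally so a baked multi-res
# displacement is never refined past its source level.
# Assumption: scene-level Cycles properties, experimental feature set.
try:
    import bpy
except ImportError:
    bpy = None  # running outside Blender


def cap_adaptive_subdiv(max_level=4, dicing_rate=1.0):
    """Apply a global subdivision cap; returns the values applied."""
    if bpy is not None:
        cy = bpy.context.scene.cycles
        cy.max_subdivisions = max_level  # never refine past this level
        cy.dicing_rate = dicing_rate     # target edge size in pixels
    return max_level, dicing_rate


cap_adaptive_subdiv(4, 1.0)
```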

https://openimagedenoise.github.io/

Intel® Open Image Denoise, a high-performance denoising library for ray tracing. It's Apache 2.0 licensed.

When we have a big picture with a large portion of it being sky, so in the end plain empty space, a lot of render time is lost on samples in those parts.

I know adaptive sampling is complex and not ready at all, but a basic optimization that takes this into account could help accelerate renders where sky covers half the picture, and might be easy to implement.

It may also be super hard… I don't know, it's just an idea so we can have the fastest Cycles possible as soon as possible without killing too much dev time, because if adaptive sampling is a year or so away, maybe this workaround could help in many situations and be removed afterwards, once adaptivity is a reality.

Cheers!

EDIT: Thanks for moving it… this is the right place, sorry for creating another thread.


The thing is, when you have a BIG scene, just holding the scene in memory can take away A LOT of RAM. For my current scene, the scene alone takes around 30 GB right now, and rendering it pushes my computer over its RAM amount, which is 64 GB.
So for this type of situation, a simple checkbox that allows Blender to fully unload the scene when it goes to render could be very useful. It might be slower to get back to the scene after the render, but it would allow much bigger scenes to render without suffering so much over memory.

And I think this should be a mode, because we may not want it all the time; it may have drawbacks, like more time getting back to working in the UI, and other possible caveats I'm not thinking of right now, but it would be a tremendous help when working with big scenes.

Cheers!


Are there any plans to support deep EXR renders, storing multiple depths per pixel? It could solve any "edge" issues in compositing.
