Adaptive Subdivision is still only available in experimental mode 3 years after it was first introduced. Are there plans to make it a supported feature soon?
Also, it would be nice to have a viewport preview of Adaptive Subdivision when looking through the camera, showing how strongly different areas of an object will be subdivided, so the user has better information for adjusting the settings. Maybe a grid overlay that shows the mesh density in screen space, like BlenderGuru visualized it in this video from 2016: https://youtu.be/dRzzaRvVDng?t=322
The work required before it can be non-experimental is tracked here: https://developer.blender.org/T53901
This is a very interesting feature …
maybe even for performance?
Definitely, because this subdivision algorithm is based on how close the object is to the camera. It also supports a gradient of subdivision levels across a single object, depending on which parts are close to the camera and which are far away.
It cuts down render time and memory usage while still delivering very high displacement fidelity on close objects, or on the close parts of an object.
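The idea can be sketched roughly like this. This is a toy model, not Cycles' actual dicing code: `focal_px` and `dicing_rate_px` are made-up parameter names standing in for the camera focal length expressed in pixels and a target sub-edge size in pixels, and the edge is assumed to face the camera head-on.

```python
import math

def screen_space_edge_length(world_length, distance, focal_px):
    """Approximate projected length in pixels of an edge at the given
    distance from a pinhole camera (simplified: the edge is assumed
    perpendicular to the view direction)."""
    return focal_px * world_length / distance

def subdivision_count(world_length, distance, focal_px, dicing_rate_px):
    """Pick a segment count so each sub-edge covers ~dicing_rate_px pixels."""
    px = screen_space_edge_length(world_length, distance, focal_px)
    return max(1, math.ceil(px / dicing_rate_px))

# A 1 m edge with a 1000 px focal length and an 8 px dicing rate:
# nearby geometry is diced finely, distant geometry barely at all.
for d in (1.0, 10.0, 100.0):
    print(d, subdivision_count(1.0, d, 1000.0, 8.0))
# → 1.0 125
#   10.0 13
#   100.0 2
```

Because the segment count falls off with distance, memory and render time are spent only where the displacement detail is actually visible, which is the gradient behavior described above.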
This is already standard in most 3D programs and in some video games, where it is known as tessellation, and I have been waiting three years for it to get out of Experimental in Blender.
Maybe I’m wrong, but I suspect GPUs are doing this automatically …
which would explain why objects are light in Object Mode but heavy and slow in Edit Mode.
That is likely related to the general performance problem Blender has when using Subdivision or Multiresolution in the viewport: low framerates during animation, Undo/Redo taking several seconds to complete, mesh editing becoming a pain, etc. My guess is that the subdivision is recalculated at a low rate to save CPU, which results in sluggish updates whenever anything moves.