Hi! I stumbled across this topic on Right-Click Select and wanted to open a thread for further discussion of the subdivision algorithm in Blender.
Blender now has its own subdivision algorithm, which shrinks models more strongly. If you export your model to other software, it will look different; the opposite is also true. This is especially noticeable on characters and vehicles with clean, well-defined shapes.
I believe that in such matters Blender should comply with industry standards and be “like everyone else”. The problem appears to lie in the new “limit surface” feature of the subdivision algorithm. If so, it would be nice to have this feature as an option, disabled by default for the sake of standardization.
Many users have provided examples comparing subdivided models exported from Blender with the same models in other software packages (ZBrush, Modo, Maya, 3ds Max).
The topic was initially brought up by @borschberry, for whom the subdivision behavior caused problems with a model for a customer. The same user then opened a report in the bug tracker, but it was closed as “working as intended” and not a bug.
IMO, this is a concerning problem that can cause issues when exporting a mesh to other DCCs, and I wanted to hear some feedback on the matter from the Blender developers. Thank you!
I think I need to summarize the list of problems here:
1 - The algorithm doesn’t work like the one in other programs, which causes the same model to take a different shape in Blender than elsewhere. With a dense cage the model looks only slightly different (smoother), but low-poly models seriously lose volume when subdivided. The difference is most noticeable on large low-poly models (for example, parts of terrain) and on hard-surface models with exact shapes, and less noticeable on dense organic models.
2 - Modeling issues. A typical modeling pipeline is divided into stages: creating the overall shape, then the medium-scale parts, then the small details. The transition to each next stage involves subdividing the model. With ordinary Catmull-Clark this creates no problems, but with the new algorithm the model loses volume every time it is subdivided.
3 - Unpredictable behavior inside Blender. For example, a model with two level-1 modifiers differs from a model with a single level-2 modifier, and the differences are quite significant (more iterations, bigger difference). Standard Catmull-Clark has no such problem: you can continue to subdivide from any level and you will always get the same result.
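Both the volume loss (problem 1) and the modifier-stacking mismatch (problem 3) can be reproduced with a tiny numeric sketch. It uses a closed cubic B-spline curve as a 1D stand-in for Catmull-Clark (the standard Lane-Riesenfeld subdivision and limit rules), so the numbers only illustrate the mechanism; nothing here is taken from Blender's actual code.

```python
# 1D analogue of Catmull-Clark: a closed cubic B-spline control polygon.
# subdivide() refines the cage (classic behaviour); to_limit() snaps every
# point onto the limit curve (what the limit-surface option does).

def subdivide(pts):
    """One round of cubic B-spline subdivision of a closed polygon."""
    n = len(pts)
    out = []
    for i in range(n):
        prev, cur, nxt = pts[i - 1], pts[i], pts[(i + 1) % n]
        # refined vertex point: (prev + 6*cur + next) / 8
        out.append(tuple((p + 6 * c + q) / 8 for p, c, q in zip(prev, cur, nxt)))
        # new edge point: midpoint of cur and next
        out.append(tuple((c + q) / 2 for c, q in zip(cur, nxt)))
    return out

def to_limit(pts):
    """Snap every control point onto the limit curve: (prev + 4*cur + next) / 6."""
    n = len(pts)
    return [tuple((p + 4 * c + q) / 6
                  for p, c, q in zip(pts[i - 1], pts[i], pts[(i + 1) % n]))
            for i in range(n)]

square = [(1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)]

# Problem 1: the limit snap pulls each corner a third of the way toward
# the midpoint of its neighbours, so the shape visibly shrinks.
corner_sub = subdivide(square)[0]   # (0.75, 0.75): plain refinement
corner_lim = to_limit(square)[0]    # (0.666..., 0.666...): snapped to the limit

# Problem 3: two level-1 passes (snapping after each) vs one level-2 pass.
two_level1 = to_limit(subdivide(to_limit(subdivide(square))))
one_level2 = to_limit(subdivide(subdivide(square)))
gap = max(abs(a - b) for pa, pb in zip(two_level1, one_level2)
          for a, b in zip(pa, pb))
print(gap > 0.05)   # True: the two stacks produce clearly different results
```

On a real mesh the same mechanism acts on every vertex through the surface limit stencil, which is why sparse cages (terrain chunks, hard-surface parts) lose the most volume.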
It looks like the problem may be in the new “limit surface” feature in the subdivision algorithm.
So the idea of the new subdiv is to get the smoothest possible surface, with vertices placed on the limit surface (B-spline surface) as in the first picture. But this shrinks the volume over convex regions and bulges it over concave ones.
The closest surface is shown in the second picture: each face tries to keep balanced gaps on both sides. Is there a solution for this type of approximation?
And yes, there should be an option to switch off the new type and keep subdivision similar to other applications.
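The convex-shrink / concave-bulge behaviour falls directly out of the limit rule. In the 1D cubic B-spline analogue the limit position is (prev + 4·cur + next)/6, so snapping moves each point a third of the way toward the midpoint of its neighbours: inward at a convex corner, outward at a concave one. A small sketch of the mechanism (an analogue, not Blender's code):

```python
def limit_offset(prev, cur, nxt):
    """Displacement applied by the limit snap: (prev + 4*cur + next)/6 - cur,
    i.e. one third of the way toward the midpoint of the two neighbours."""
    return tuple((p + q - 2 * c) / 6 for p, c, q in zip(prev, cur, nxt))

# Convex corner: the neighbours' midpoint lies inside the shape,
# so the point is pulled inward (volume shrinks).
convex = limit_offset((0.0, 1.0), (1.0, 1.0), (1.0, 0.0))
print(convex)    # (-0.1666..., -0.1666...)

# Reflex (concave) corner of a notch: the neighbours' midpoint lies outside,
# so the point is pushed outward (the cavity bulges).
concave = limit_offset((2.0, 1.0), (1.0, 1.0), (1.0, 2.0))
print(concave)   # (0.1666..., 0.1666...)
```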
That’s not the only reason for moving vertices to the limit surface. As explained by @brecht here:
Blender always positions vertices on the limit surface, what you get when you would subdivide the surface an infinite number of times. I think this is closer to what you would get with adaptive subdivision in the Maya viewport and USD viewers, or adaptive subdivision in a renderer like Arnold or PRMan.
It means that your vertex position (and the displacement on it) stay the same regardless of the subdivision level, which by itself is generally a good thing if you intend to use adaptive subdivision and not bake a displacement map that is only valid for one particular subdivision level.
ZBrush may not be able to bake for that case, or maybe it’s a matter of tweaking settings, I’m not sure. We could consider supporting the subdivision where vertex positions change at every subdivision level, but this will not work for adaptive subdivision in Cycles or OpenSubdiv GPU acceleration.
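The level-independence Brecht describes can be checked numerically in a 1D analogue (cubic B-spline rules standing in for Catmull-Clark; this is a sketch of the property, not of Blender's implementation): the limit position of a cage vertex is identical to the limit position of its descendant at every refinement level.

```python
def subdivide(pts):
    """Cubic B-spline subdivision of a closed control polygon."""
    n = len(pts)
    out = []
    for i in range(n):
        prev, cur, nxt = pts[i - 1], pts[i], pts[(i + 1) % n]
        out.append(tuple((p + 6 * c + q) / 8 for p, c, q in zip(prev, cur, nxt)))
        out.append(tuple((c + q) / 2 for c, q in zip(cur, nxt)))
    return out

def to_limit(pts):
    """Snap every control point onto the limit curve."""
    n = len(pts)
    return [tuple((p + 4 * c + q) / 6
                  for p, c, q in zip(pts[i - 1], pts[i], pts[(i + 1) % n]))
            for i in range(n)]

poly = [(1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)]
# vertex 0's descendant stays at index 0 after each refinement
l0 = to_limit(poly)[0]                        # limit position on the cage
l1 = to_limit(subdivide(poly))[0]             # after one refinement level
l2 = to_limit(subdivide(subdivide(poly)))[0]  # after two
print(l0 == l1 == l2)   # True: the limit position never moves, so
                        # displacement baked against it stays valid
```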
So the limit option has a positive influence on baking, plus it provides compatibility with adaptive subdivision during Cycles rendering.
Yes, this method has advantages. But the rest of the industry has reasons not to use it (in modeling and layout, at least), perhaps beyond those described above. In any case, I would like to have a choice.
However, this algorithm performs really well in Multires (as far as I can see) and at render time.
The above was meant merely as some more background. The compatibility issue is real, of course. It would indeed be nice to have a quality=0 setting that does not move vertices to the limit surface and gives you standard Catmull-Clark behaviour. I looked briefly into the modifier code but don’t see a simple way of adding it (though I don’t know the code well enough).
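In terms of behaviour, such a quality=0 mode would simply skip the final snap to the limit surface. In the 1D cubic B-spline analogue the toggle is a one-liner; the flag name use_limit is purely hypothetical and this is not how the modifier code is organised:

```python
def subdivide(pts):
    """Cubic B-spline subdivision (1D stand-in for Catmull-Clark)."""
    n = len(pts)
    out = []
    for i in range(n):
        prev, cur, nxt = pts[i - 1], pts[i], pts[(i + 1) % n]
        out.append(tuple((p + 6 * c + q) / 8 for p, c, q in zip(prev, cur, nxt)))
        out.append(tuple((c + q) / 2 for c, q in zip(cur, nxt)))
    return out

def to_limit(pts):
    """Snap every control point onto the limit curve."""
    n = len(pts)
    return [tuple((p + 4 * c + q) / 6
                  for p, c, q in zip(pts[i - 1], pts[i], pts[(i + 1) % n]))
            for i in range(n)]

def subdiv_modifier(pts, levels, use_limit=True):
    """Refine `levels` times, then optionally snap to the limit curve.
    use_limit=False corresponds to classic Catmull-Clark cage output."""
    for _ in range(levels):
        pts = subdivide(pts)
    return to_limit(pts) if use_limit else pts

square = [(1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)]
classic = subdiv_modifier(square, 2, use_limit=False)  # matches other DCCs
current = subdiv_modifier(square, 2, use_limit=True)   # limit-surface style
print(classic[0], current[0])  # the classic corner sits further out
```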
I wasn’t aware of this and didn’t notice it either, but at work we put some 3D scans through both Zbrush and Blender, and exported them to Rhino for nurbs conversion so we could mill a few art pieces.
It turned out all right, but next time, I’ll probably check for surface deviations between Blender & Rhino.
(It feels as if this needs a checkbox, similar to “preserve volume” in the armature modifier. Preserve volume is obviously better, but the animation looked quite different when I imported it in Unity a few years back.)
At the moment, I am most concerned about using the new subdivision in modeling. Every time I subdivide the mesh to add details, the model loses volume, which is quite annoying. I would like to have the old algorithm at least as a modeling tool (especially since there is nothing like it in Blender).
I have also already had a problem exporting a location for animation to Maya: because of the different algorithms the ground level shifted, and the animation had to be adjusted.
I understand why this has its advantages for things like multires sculpting or baking, but the shrinking behaviour is truly awful for subdivision modeling, since it makes stacking subdivision modifiers impossible. Two level-1 subdivisions suddenly look different from one level-2 subdivision.
So when I want to insert some modifiers (or change something by hand) at level 1 before subdividing the mesh further, I have the choice of either making no changes at all between levels and working on the denser level-2 mesh (always a bad choice if you can avoid it), or making the changes at the appropriate level and then dealing with an incorrect mesh volume and surface down the road.
Note that I’m not a Blender dev, so I wasn’t defending the current subdiv implementation choices. I just tried to figure out what was going on, hence the quote.
But having the option to get subdiv behaviour compatible with other software packages would seem like a good addition.
The volume loss is indeed the difference between the subdivided control cage (aka the old subdiv) and the tessellated limit surface.
I did some poking around in the code and found that Blender currently uses the OSD topology refiner in adaptive mode, which, I believe, makes it impossible to recover the subdivided control cage, since adaptive mode only subdivides faces around extraordinary vertices. The “Quality” setting is actually the subdivision level that is passed to the topology refiner.

What I found is that it is possible to turn adaptive subdivision off; the setting is already there, its value is just hardcoded to true. With that change I can get the old behavior back (no volume loss). In this mode, quality and level should be the same; otherwise, if you set the level higher, you can see faceting of the underlying surface, but that’s more of a UI issue.

What I don’t know is how much of a hack such a solution would be. (Is it impossible for OSD to evaluate the limit surface in regular mode, and is the observed behavior a fallback that will change in the future?)
Sorry, that was my local build that I played with; it has adaptive subdiv turned off, and the option is not exposed in the UI. I just wanted to know whether it’s possible to get the old behavior with OpenSubdiv, and I think it is.
Oh, OK :) Looks promising. I hope the developers will finally pay attention to this topic and that your solution will help resolve the issue. Thank you for your efforts.
The question is, why was this functionality (the feature-adaptive toggle and the boundary interpolation modes) not exposed to the user? It would have been fairly simple to do, so I imagine there are deeper reasons it hasn’t been done already. @sergey, @brecht?