Why is it so hard to make editing high-poly meshes more responsive?

I’m aware of the discussions on viewport performance in edit mode and the developers’ intentions to fix it (https://developer.blender.org/T57936).

Many look at ZBrush, which has stellar performance with super-dense meshes/subdivs. However, even in a program like MeshLab (http://www.meshlab.net/), there is no delay when selecting and deleting faces/verts of the high-res 3D scans I’m working with. MeshLab is also open source with only a few developers, yet they’re able to pull this off. I realise that Blender is a far more complex program and has to take into account much more than just meshes, but still: how hard is it to get MeshLab-level performance in Blender? What does MeshLab do differently from Blender? I’m no developer and have no good understanding of the source code or of how 3D models are visualised in viewports, hence the question.

The same goes, by the way, for loading models: I load a 2-million-triangle mesh in MeshLab in 13 seconds, an action that takes 40 seconds in Blender!


How are you loading the mesh into Blender?

By importing an OBJ file. I know it’s quicker if it’s in a .blend file, but I often get OBJ files to work with.

I believe it is due to a type of architecture that Blender has carried with it since its very birth.
Once, for hardware reasons, it was simply impossible to manipulate meshes beyond a certain polygon count, so the issue was never considered important, or simply never given weight. From what I understand, in edit mode Blender has the defect of recomputing all the vertices on every CPU/GPU cycle. So it is evident that as soon as a certain vertex count is exceeded, manipulating a mesh becomes exponentially heavy: even if only a small group of vertices is moved, Blender recalculates all the vertices of the geometry.

In practice, when moving a small group of vertices, Blender should only calculate the positions of that group while they are being moved, and not the entire geometry that remains unedited.
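
In toy Python terms (not actual Blender internals), the difference being described is roughly this:

```python
# Toy sketch (not Blender internals): touching all N vertices per event
# versus touching only the vertices actually being dragged.
import numpy as np

verts = np.random.rand(2_000_000, 3)   # a dense mesh's vertex positions
selected = np.arange(100)              # the handful of verts being moved
offset = np.array([0.01, 0.0, 0.0])

def full_update(verts, offset, selected):
    # The behaviour described above: O(N) work on every mouse-move event.
    moved = verts.copy()
    moved[selected] += offset
    return moved

def partial_update(verts, offset, selected):
    # The proposal: O(len(selected)) work per event, everything else untouched.
    verts[selected] += offset
    return verts
```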

So, having looked into it, I suggested conceptually (not reasoning as a coder, because I’m not able to) how the problem could be solved.

The OBJ importer is just a Python addon script, one file, sort of quick-and-dirty. You can edit it fairly easily; I did once to stop it loading data I didn’t need on a ridiculously sized mesh, so as not to exceed my PC’s RAM.
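
For illustration, a stripped-down reader in that spirit might look like the following; this is a hypothetical sketch, not the actual io_scene_obj addon code:

```python
# Minimal OBJ reader sketch: keeps only vertex positions and face indices,
# skipping normals, UVs, materials etc. to cut parse time and memory.
# Hypothetical illustration, not the stock addon; assumes positive
# (1-based) face indices.
def load_obj_positions_only(path):
    verts, faces = [], []
    with open(path) as f:
        for line in f:
            if line.startswith("v "):
                _, x, y, z = line.split()[:4]
                verts.append((float(x), float(y), float(z)))
            elif line.startswith("f "):
                # "f 1/4/7 2/5/8 3/6/9" -> keep only the index before each "/"
                faces.append([int(tok.split("/")[0]) - 1
                              for tok in line.split()[1:]])
    return verts, faces
```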

As for the editing speed: it’d be partly because Blender is a complex program, as you say, but you have to realise that all of those complex parts have to be able to influence, edit, or update something when the mesh changes, often mid-edit, in an effort to provide the best feedback. So the mesh system is connected to so many other systems that it’s not necessarily easy to just switch it all off or disconnect it even when it’s not used; there’ll be overhead.

But the biggest reason is likely simple: the programs you mention would’ve been built from the ground up explicitly to be fast on large, dense meshes, while Blender has many other priorities competing for dev time.


During editing, and specifically the phase of moving vertices, the mesh should be temporarily transferred to a separate container, with all the vertices decoupled from each other and from all the other interconnected functions of the software. At most, only the vertices being directly manipulated at that moment would remain interconnected and calculated. The CPU/GPU should recalculate all the points, and their connections with the other elements and functions of the software, only once the operation is completed, on the release of the mouse.
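
In rough code terms, the proposal amounts to something like this toy sketch, where full_recompute is a hypothetical callback standing in for all the dependent systems:

```python
# Toy sketch of the proposal: cheap per-event updates while dragging,
# one expensive full update when the mouse is released.
class DeferredEditSession:
    def __init__(self, verts, selected, full_recompute):
        self.verts = verts                    # list of [x, y, z] positions
        self.selected = selected              # indices being dragged
        self.full_recompute = full_recompute  # hypothetical expensive callback

    def on_mouse_drag(self, dx, dy, dz):
        # Only the grabbed vertices are touched while the mouse moves.
        for i in self.selected:
            v = self.verts[i]
            v[0] += dx; v[1] += dy; v[2] += dz

    def on_mouse_release(self):
        # Modifiers, particles, constraints etc. are recalculated once per
        # stroke here, instead of on every mouse-move event.
        self.full_recompute(self.verts)
```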

That doesn’t really work.

  • Subdivision Surface (subsurf) and similar modifiers
  • Shrinkwrap and similar modifiers AND constraints
  • Particle systems, including hair.
  • Vertex parenting

These things and more require calculations on the entire mesh to update. Even when editing a small portion of the mesh, and even if the edit lives in a separate data structure so you’re not cycling through the whole mesh while moving vertices, those outside systems will still iterate over all the vertices you aren’t working on, because they aren’t tied that closely to edit mode, and it’s safer for them to do so.

Further, a bunch of these systems simply wouldn’t function if you tried to split the mesh into two separate parts to save calculation time and only calculate the changed portion. For instance, particles are spawned at a frequency based on the entire area or volume of the mesh; you can’t update only a part of it.
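
To make the particle example concrete: emission distributes a total count over faces in proportion to area, and the total area is a whole-mesh quantity. A toy Python sketch of that dependency:

```python
# Toy sketch of why particle emission is a whole-mesh quantity: each
# face's share of the particle count depends on the *total* surface area,
# so moving any vertex can change every face's share.
import math

def tri_area(a, b, c):
    ab = [b[i] - a[i] for i in range(3)]
    ac = [c[i] - a[i] for i in range(3)]
    cross = [ab[1] * ac[2] - ab[2] * ac[1],
             ab[2] * ac[0] - ab[0] * ac[2],
             ab[0] * ac[1] - ab[1] * ac[0]]
    return 0.5 * math.sqrt(sum(x * x for x in cross))

def particles_per_face(verts, tris, total_count):
    areas = [tri_area(*(verts[i] for i in tri)) for tri in tris]
    total = sum(areas)   # global: one edited vertex changes this...
    # ...which rescales the share of *every* face, edited or not.
    return [total_count * area / total for area in areas]
```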


Yes, but it is a question of priority “semaphores”. The user’s hands-on editing of the geometry should, I think, have absolute priority, so that the movement of vertices is always fluid, and only afterwards should all the other calculations happen.

It makes little sense to calculate everything at the same time during this phase of geometry editing if it causes lag, bottlenecks and slowdowns; in the end, the result is that the user’s work slows down and becomes difficult and cumbersome.

To be clear, mine was a sort of demonstration: I imagined the possibilities as a non-developer, a user who gets by in these cases. It’s an example of a possible solution. Obviously a good developer, knowing the problem, knowing some example solutions, and having good skills, will find the most suitable solution from a coding point of view.

There is a Google Summer of Code project that may fix the import speed.
Accepted projects will be announced soon.

You can’t make blanket statements about priority like that, though. There have been plenty of times when the priority for something I’m working on has been to have mid-edit feedback on some linked driver or facet dependent on the geometry, so I know what I’m doing. The only #1 priority is to make it work.

Trying to thread and calculate connected data in the background in some way, to keep editing realtime, is a whole other can of worms; then you have to start dealing with reading the data mid-write. In effect, you’d likely end up having to copy the data out to the scene every time the scene wanted to update, while blocking writes in some fashion. The blocking would slow down the edit, possibly up to 2x, and you’d use anywhere up to double the RAM, depending on how you did it and how much dev time was allowed.
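
For what that copy-out-and-block pattern looks like in miniature (a toy Python sketch, assuming a single lock around the vertex data; Blender’s actual dependency graph is far more involved):

```python
# Toy sketch of the copy-out-under-lock pattern: the editor writes to a
# working copy; when the scene wants an update it takes a snapshot while
# briefly blocking writes. Note the costs: stalls on the lock, and a
# second full copy of the data in RAM.
import copy
import threading

class SharedMeshBuffer:
    def __init__(self, verts):
        self._verts = verts
        self._lock = threading.Lock()

    def write(self, index, position):
        with self._lock:        # editing stalls whenever a reader is copying
            self._verts[index] = position

    def snapshot(self):
        with self._lock:        # readers block writers while copying out
            return copy.deepcopy(self._verts)
```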

But you also have to weigh up the development time and considerations when you think about how all of these systems are linked to the geometry: some can’t possibly be calculated on only a part of it, but some could be. The tradeoff here is dev time and complexity. It’s much easier and faster to develop a system to work on the mesh as a whole, because for the most part that’s how it’s going to be dealt with when it’s not a tool in the edit-mode system (e.g. particles, modifiers, constraints etc.). Linking those things to edit mode in such a way that they intelligently (where possible) update only partially is quite likely an immense task, and would be prone to introducing innumerable bugs and cases where the edit-mode result differs from the result outside edit mode.

Not to mention that for most of those, the simple answer is to turn the effect off if you don’t need to see it whilst editing, so the onus and control are on the user.

Nevertheless, the effect this has is that edit mode has to presume the mesh data might at any point be being read by an outside system where realtime feedback is desired.

You might possibly be able to have edit mode detect whether the mesh is being used by an outside influence and, if not, skip updating the mesh data during the edit, allowing a faster method like the one you describe. But that would introduce a lot of dependencies and links between systems that you generally want to avoid. The only way to speed up edit mode and avoid that would be to tweak the mesh data structure so that modifying (writing) a chunk of it is as fast a process as possible, with computation time growing as little as it can as the list of vertices/faces/edges grows. I don’t have enough information, though, to know how hard that would be, whether it’s actually already the case, or whether it’s prohibitive given Blender’s internal data structures.
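
One data-structure direction along those lines is chunked vertex storage with dirty flags, so a write invalidates only the chunk it touches. A minimal sketch, where the upload callback is a hypothetical stand-in for whatever consumes the changed data (GPU buffers, dependent systems):

```python
# Toy sketch of chunked vertex storage with dirty flags: a write marks
# only its own chunk dirty, so downstream consumers re-process O(chunk)
# data instead of O(mesh). `upload` is a hypothetical per-chunk consumer.
CHUNK = 4096

class ChunkedVerts:
    def __init__(self, verts):
        self.chunks = [verts[i:i + CHUNK] for i in range(0, len(verts), CHUNK)]
        self.dirty = set()

    def set(self, index, position):
        chunk, offset = divmod(index, CHUNK)
        self.chunks[chunk][offset] = position
        self.dirty.add(chunk)           # only this chunk needs re-processing

    def flush(self, upload):
        for chunk in sorted(self.dirty):
            upload(chunk, self.chunks[chunk])
        self.dirty.clear()
```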


There is a simple question:
Comparable applications to Blender are able to manage heavy geometry in a fluid way. Setting aside theories about what is possible and what cannot be done…
How do these software alternatives manage everything smoothly without too many problems?

That is the question indeed. Reading Antaioz’s comments, the prospect of getting vast improvements in mesh editing speed seems rather slim. But at some point we have to move beyond ‘it’s too complex, can’t be done’, right? Perhaps I’m representative of a relatively small user base that works with high-poly 3D scans, but the demand for fast editing of high-poly meshes has increased a lot in my field (archaeology), and I think this is the case across the board now that photogrammetry has made 3D scanning so easy. Even though there’s specialised software for that, I work at a university and prefer to use (and teach) open source software where possible.

So is the only solution really to completely redesign large parts of Blender? Why is sculpting so much faster? It’s also basically just moving vertices around… I know it’s a different system, but can’t the edit system in some way be based on whatever is in sculpting? Or should we get another ‘mode’, like a ‘high-poly edit mode’, that is separate from the main edit mode, so it doesn’t have all these dependencies to take into account?

I just tested it: Sculpt Mode doesn’t update the scene mesh data, and this is likely the primary reason.
(You can test this by having something shrinkwrap to the object you’re sculpting on; you’ll notice the shrinkwrapped mesh won’t move whilst you’re sculpting the target mesh.)
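
If anyone wants to reproduce the test, a rough bpy setup could look like this (written from memory against the 2.8-series Python API, so treat it as a sketch):

```python
# Rough reproduction of the shrinkwrap test described above.
import bpy

# The mesh you'll sculpt on.
bpy.ops.mesh.primitive_uv_sphere_add(location=(0.0, 0.0, 0.0))
target = bpy.context.active_object

# A second mesh that shrinkwraps onto the first.
bpy.ops.mesh.primitive_plane_add(location=(0.0, 0.0, 1.0))
wrapped = bpy.context.active_object
mod = wrapped.modifiers.new(name="Shrinkwrap", type='SHRINKWRAP')
mod.target = target

# Now select `target`, enter Sculpt Mode and deform it: the plane only
# follows the new shape once the scene mesh data is actually updated,
# not live during the sculpt stroke.
```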

As for the question, I return to my previous statement

I never stated it couldn’t be done, merely that it’s not likely to be easy, and that there are many more considerations to take into account than seem to be given credit. It’s not simply a case of seeing other software and saying “they can do it, why can’t we?”.

A “high-poly edit mode” would probably be better as just an edit-mode setting of some sort, like the snapping toggle. The thing is, even once you’ve disconnected the mesh data from the scene (which is itself a task, since I think edit mode currently interacts with the mesh data directly, instead of using a series of summed delta transforms, a copy of the data, or some other system), you still have to develop and implement the data structures and systems to edit and save it quickly. And whether that is a priority versus the work involved is the deciding factor.
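
For what a “summed delta transforms” layer could look like, here’s a minimal sketch (purely illustrative, not Blender’s actual BMesh/edit-mode design): edits accumulate in a sparse map and are folded into the real mesh data only on commit.

```python
# Toy sketch of a delta layer: edits accumulate in a sparse map and are
# folded into the real mesh data only when the edit is committed.
class DeltaEditLayer:
    def __init__(self, base_verts):
        self.base = base_verts          # untouched scene mesh data
        self.deltas = {}                # sparse: index -> (dx, dy, dz)

    def move(self, index, dx, dy, dz):
        ox, oy, oz = self.deltas.get(index, (0.0, 0.0, 0.0))
        self.deltas[index] = (ox + dx, oy + dy, oz + dz)

    def position(self, index):
        # What the viewport would draw: base plus any pending delta.
        x, y, z = self.base[index]
        dx, dy, dz = self.deltas.get(index, (0.0, 0.0, 0.0))
        return (x + dx, y + dy, z + dz)

    def commit(self):
        # Fold the deltas into the scene data once, at the end of the edit.
        for i, (dx, dy, dz) in self.deltas.items():
            x, y, z = self.base[i]
            self.base[i] = (x + dx, y + dy, z + dz)
        self.deltas.clear()
```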

I know, I was exaggerating rhetorically :wink:

I’d like to know what the developers think about this issue, and what the best solution to it would be. If that’s defined, maybe there’s a way to crowdfund a developer to work on this?