Given that the other excuses to complain have been dealt with, the stakes have obviously grown, and performance has now become the hottest topic of conversation.
So it is likely that this will become a priority to resolve for release 2.81… but I’m not so sure it’s a simple thing to solve…
Perhaps a team will need a couple of releases of parallel work to study the issue and run a proper project to solve the root problem?
I don’t know how difficult it would be to get something like what I imagine:
I wish that whenever a geometry object is being edited, the set of points being selected and edited behaved as if it were the only thing in existence, with the GPU and the CPU devoting all their resources exclusively to it during those moments of editing.
Why do I say this?
Because I noticed that even when I have a mesh that is very slow to edit, if I select a decent number of points and split them off into a new object, that new object becomes super fast to edit…
So, logically, the road to a solution seems to be hiding somewhere in this direction…
Logic says that if a problem is too big to deal with, it should be split into many smaller and lighter pieces (see the script sketch below).
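For anyone who wants to try the workaround I mean from a script, here is a minimal sketch. It assumes a mesh object is active in Edit Mode with some vertices already selected; the variable names are just illustrative:

```python
import bpy

# Assumes a mesh object is active in Edit Mode with some vertices selected.
obj = bpy.context.active_object

# Split the current selection off into its own object (same as P > Selection).
# Editing that smaller object afterwards is much faster than editing the
# original heavy mesh.
bpy.ops.mesh.separate(type='SELECTED')

bpy.ops.object.mode_set(mode='OBJECT')

# The operator typically leaves the new piece selected alongside the original.
new_pieces = [o for o in bpy.context.selected_objects if o is not obj]
print("Separated into:", [o.name for o in new_pieces])
```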
In my case, the problem is drawing the mesh while editing. It consumes too many resources. And there is no longer the old mode of displaying only vertices without edges (even if you turn off edge display in the menu, the edges are still drawn).
There is one more strange effect.
If, for example, you create a high-poly model, add a Subdivision Surface modifier to it and then a Decimate modifier, there will be a long calculation of the new geometry. After that, the scene displays this new geometry. But if you then try to apply it all, that is, apply all the modifiers (for example, by converting the object (Alt + C) into a plain mesh), it seems that all the modifier calculations run again.
Perhaps this is because the modified mesh is not stored in RAM, but only in video memory.
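If someone wants to test this, the evaluated result can apparently be grabbed from Python via the dependency graph without visibly re-running the stack. A rough sketch against the 2.8x Python API follows; the object name "HighPoly" is just an example and the modifiers are assumed to be already evaluated for the viewport, so treat it as an illustration rather than a confirmed workaround:

```python
import bpy

# "HighPoly" is just an example name; the object is assumed to carry
# a Subdivision Surface + Decimate stack already evaluated for the viewport.
obj = bpy.data.objects["HighPoly"]

# Ask the dependency graph for the already-evaluated (post-modifier) object
# instead of re-running the modifier stack ourselves.
depsgraph = bpy.context.evaluated_depsgraph_get()
obj_eval = obj.evaluated_get(depsgraph)

# Copy that evaluated geometry into a plain mesh datablock.
mesh_eval = bpy.data.meshes.new_from_object(obj_eval)
print("Evaluated mesh:", len(mesh_eval.polygons), "polygons")

# Link it into the scene next to the original for inspection.
new_obj = bpy.data.objects.new(obj.name + "_applied", mesh_eval)
bpy.context.collection.objects.link(new_obj)
```

Whether the internal convert/apply operators could reuse that same evaluated mesh instead of recomputing it is exactly the question.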
Well, I’ve never been able to edit 800,000-tri models, not even in 2.79. But what you say about that display mode will interest hyperinsomniac and brecht; I had never used it before.
Modifiers have always worked like that, and I’ve never found it strange either, although on complex models it is annoying to have to wait again to apply a Decimate that already took 20 seconds to calculate.
I ask this mainly because I already asked at the beginning, around Code Quest or Eevee, that is, 1-2 years ago, whether optimization was going to be treated as something important, and by that I meant improving the performance of things that were already very slow in 2.79. I don’t think much importance was given to it (probably because 2.80 got out of hand), but personally I have not seen any improvement; at best we are at the same level as 2.79. And I thought undo performance was intimately linked to the new depsgraph.
But I think both undo and high-poly mesh editing should be addressed as soon as possible, because they are important for production. The truth is that, as a user, it would bother me to spend 2-3 years waiting to be able to edit a 50k mesh normally when that is already the norm for video game characters, or to have to resort to workarounds just to work comfortably with scenes of 10k objects.
The problem is not that modifiers worked like this before; the problem is that this is Blender 2.8, and it’s 2019 already! If the new mesh (the product of the modifiers) is stored in memory as usual, then why does applying the modifiers restart the whole computation? This is strange. You already see the result, it works, but to make it permanent everything has to be calculated from the beginning.
This, of course, is not the most important issue, but I think it is worth pointing out. Or perhaps someone can explain why it is this way and whether there really is no easy way to get rid of it.
But I think that, over time, Blender will go down the Houdini path, and then what I described will become a bigger problem.
My reasoning: if the devs find a smart and diabolical way to make this process automatic (at the code level, without the user being aware of it), I am fairly certain it would be checkmate for the edit-mode performance problem…
As the geometry gets heavier, Blender should create lower-density quadrants that are easy to edit in edit mode.
A very good thing to point out, and unfortunately one that is not talked about often enough.
I made a thread a few months ago mentioning the absurd amount of RAM wasted by Blender’s edit mode (“Huge memory usage issue”), but the thread about the color of the icons was getting way more attention and it got forgotten… Now that the UI is revamped and the icons have their nice colors, I hope we can talk about the things that really matter.
PS: I love the UI users’ attention to detail and the feedback they gave, but just remember that the core is still the primary thing.
15 bytes per polygon in edit mode would be small. You need a lot more, because a bunch of information (number of vertices, coordinates, edges, indices of the polygons adjacent to each edge, etc.) has to be kept for each polygon.
2048 bytes per polygon may be about right. Surely the vertex positions are stored separately, and the per-polygon information consists mostly of indices. But if you count just the raw data, you can estimate:
x - 8 bytes
y - 8 bytes
z - 8 bytes
? w - 8 bytes
? u - 8 bytes
? v - 8 bytes
? r - 8 bytes
? g - 8 bytes
? b - 8 bytes
v - n*4 bytes (vertex indices)
e - n*4 bytes (edge indices)
f - n*4 bytes (adjacent polygon indices)
and so on,
plus alignment to the cache line so it works faster.
It is not difficult to accumulate 2048 bytes this way.
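Just to make the arithmetic concrete, here is a rough back-of-the-envelope tally for a quad (n = 4). The field list and sizes are my own assumptions mirroring the list above, not Blender’s actual layout:

```python
# Rough per-polygon estimate, mirroring the list above.
# All sizes are assumptions for illustration, not Blender's real layout.

DOUBLE = 8           # bytes per double-precision float
INDEX = 4            # bytes per 32-bit index
n = 4                # corners per polygon (a quad)

per_vertex = 9 * DOUBLE               # x, y, z, ?w, ?u, ?v, ?r, ?g, ?b
per_polygon_indices = 3 * n * INDEX   # vertex, edge and neighbour indices

# Count the vertex data against the polygon it uses
# (ignoring sharing between polygons, so this is an upper bound).
raw = n * per_vertex + per_polygon_indices
print("raw bytes per quad:", raw)     # 4*72 + 48 = 336

# Round up to whole 64-byte cache lines; selection flags, custom data
# layers and BMesh bookkeeping would come on top of this.
cache_line = 64
aligned = -(-raw // cache_line) * cache_line
print("cache-line aligned:", aligned)  # 384
```

Even this naive count reaches a few hundred bytes per quad before any adjacency structures, selection flags or custom data layers are added, so a couple of kilobytes per polygon in edit mode is easy to believe.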
But, of course, if the time is spent on icons and gizmos, that is unlikely to help improve the quality of the really important things.