Will performance be a main target in 2.81?

Talking with users, I see that the main complaints are:

  • SubD performance
  • edit poly performance on high-poly models
  • performance in scenes with thousands of objects
  • Undo performance

Will it be a real target for 2.81? Or at least some of these points? And for real, solving the problems in the vast majority of cases.


I hope so; undo performance seems to have taken a substantial hit in 2.80.


Given that the other reasons to complain have been resolved, the stakes have obviously grown, and performance has become the hottest topic right now :slight_smile:

So it is likely that this becomes a priority for the 2.81 release… but I’m not so sure it’s a simple thing to solve…
Perhaps a team will need a couple of releases’ worth of parallel work to study it and run a proper project to solve the root problem?

I don’t know how difficult it would be to get something like what I imagine:

I wish that while any geometry object is being edited, the selected matrix of points being edited behaved as if it were the only thing in the scene, with the GPU and the CPU devoting all their resources exclusively to it during those moments of editing.

Why do I say this?

Because I noticed that even if I have a geometry that is very slow to edit, if I select a decent number of points and split them off into a new object, that object becomes super fast to edit…
So, logically, the road to a solution is hiding around the corner somewhere in this direction…

Logic says that if a problem is too big to deal with, it must be split into many smaller and lighter pieces.


I also began to encounter problems that did not exist before. For comfortable editing of an 800,000-triangle model, you have to cut it into 16 parts. Only that helps.

In my case, the problem is drawing the mesh while editing. It consumes too many resources. And the old mode that displayed only vertices without edges is gone (even if you turn off the display of edges in the menu, they are still drawn).


There is one more strange effect.
If (for example) you create a high-poly model, add a Subdivision modifier to it, and then a Decimate, a long calculation of the new geometry takes place. After that, the scene displays this new geometry. But if you then try to apply all of it, that is, apply all the modifiers (for example, by converting the object (Alt+C) into a plain mesh), it seems that the calculations of all the modifiers run all over again.
Perhaps this is because the modified mesh is not stored in RAM, but only in video memory.
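The re-evaluation described above can be shown with a toy modifier stack. This is only a sketch of the caching idea, not Blender code (the names `ModifierStack`, `evaluate`, `apply_all` are made up for illustration): if the last evaluated result were kept in RAM, “apply” could reuse it instead of recomputing.

```python
# Toy illustration (not Blender code): a modifier stack that caches
# its last evaluated result so "apply" can reuse it.

class ModifierStack:
    def __init__(self, base_mesh, modifiers):
        self.base_mesh = base_mesh   # stand-in for mesh data
        self.modifiers = modifiers   # list of functions mesh -> mesh
        self._cache = None           # last evaluated result
        self.evaluations = 0         # counts full recomputations

    def evaluate(self):
        """Run every modifier; reuse the cache when nothing changed."""
        if self._cache is None:
            mesh = self.base_mesh
            for mod in self.modifiers:
                mesh = mod(mesh)
            self._cache = mesh
            self.evaluations += 1
        return self._cache

    def apply_all(self):
        """'Convert to mesh': take the cached result instead of recomputing."""
        self.base_mesh = self.evaluate()  # cache hit if already displayed
        self.modifiers = []
        self._cache = None
        return self.base_mesh

subdivide = lambda mesh: mesh * 2    # stand-in for Subdivision
decimate  = lambda mesh: mesh[::2]   # stand-in for Decimate

stack = ModifierStack(list(range(4)), [subdivide, decimate])
stack.evaluate()           # viewport display: one full evaluation
stack.apply_all()          # apply: reuses the cache, no second run
print(stack.evaluations)   # 1
```

If applying restarted the stack, `evaluations` would be 2 here; with the cache it stays at 1, which is what the behavior described above suggests is not happening.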


Well, I’ve never been able to edit 800,000-tri models, not even in 2.79. But what you say about the display mode will interest hyperinsomniac and brecht; I had never used it before.

Modifiers have always been like that, and I’ve never found it strange either, although in complex models it is annoying to have to wait again for a Decimate that already took 20 seconds to calculate.

I ask this mainly because I asked at the beginning, I don’t remember whether during CodeQuest or Eevee, that is, 1-2 years ago, whether optimization was going to be something important, and by that I meant improving the performance of things that were already very slow in 2.79. I think this was not given importance (probably because 2.80 got out of hand); personally I have not seen any improvement, and at best we are at the same level as 2.79. And I thought undo performance was intimately linked to the new depsgraph.

But I think that both undo and high-poly mesh editing should be addressed as soon as possible, because they are important for production. The truth is that as a user it would bother me to spend 2-3 years waiting to be able to edit a 50k mesh normally, when that is already the norm for video game characters. Or to need workarounds to work comfortably with scenes of 10k objects.


The problem is not that the modifiers worked like this before; the problem is that this is Blender 2.8! It’s 2019 outside. If the new mesh (the product of the modifiers) is stored in memory as usual, then why does applying the modifiers restart the whole process? This is strange. You already see the result, it works, but to fix it you have to calculate everything from the beginning.
This is, of course, not the most important issue, but I think it is worth pointing out. Or can someone explain why this is so, and whether there is really no easy way to get rid of it?
But I think that, over time, Blender will go down the path of Houdini, and then what I described will become a bigger problem.

Reasoning about it: if the devs find a smart and diabolical way to make this process automatic (at the code level, without the user being aware of it), I am fairly certain it would be checkmate for the edit-mode performance problem…
As the geometry gets heavier, Blender should create lower-density quadrants that are easy to edit in edit mode.
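The quadrant idea above can be sketched as simple spatial bucketing: put each vertex into a grid cell so an edit only needs to touch the vertices of one cell. This is purely illustrative (the function `bucket_vertices` is made up, and real meshes are 3D; here a 2D unit square is used for brevity):

```python
# Illustrative sketch: bucket 2D vertices in the unit square into a
# 4x4 grid of "quadrants" so an edit only touches one bucket.
from collections import defaultdict

def bucket_vertices(verts, cells=4, size=1.0):
    """Map each (x, y) vertex index into its grid cell."""
    buckets = defaultdict(list)
    for i, (x, y) in enumerate(verts):
        cell = (min(int(x / size * cells), cells - 1),
                min(int(y / size * cells), cells - 1))
        buckets[cell].append(i)
    return buckets

verts = [(0.1, 0.1), (0.9, 0.9), (0.12, 0.11)]
b = bucket_vertices(verts)
print(sorted(b.keys()))   # [(0, 0), (3, 3)]
```

An edit near (0.1, 0.1) would then only walk the two vertices in bucket (0, 0), not the whole mesh, which is the same effect the poster gets manually by splitting off a piece into a new object.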

Very good point, and unfortunately one that is not talked about often enough.

I made a thread a few months ago mentioning the absurd amount of RAM wasted by Blender’s edit mode
(Huge memory usage issue), but the thread about the color of the icons was getting way more attention and it got forgotten… Now that the UI is revamped and the icons have their nice colors, I hope we can talk about the real issues.

PS: I appreciate UI users’ attention to detail and the feedback they gave, but just remember that the core is still the primary thing.


It is a pity there are not many like you. )))
Everyone wants beautiful gizmos and colored icons, and forgets about the real things.

I’m one of the guys who talks about the UI a lot. But before that I talked about optimization. From the first minute, I think we saw that optimization wasn’t a target for the devs in this version.

Anyway, it’s weird that Blender needs 2 KB of memory for each face… when FBX only needs 15 bytes :scream:

I know that a file format is not a good comparison, but between 15 bytes and 2048 bytes I’m sure there could be a middle point…

15 bytes per polygon is too little for edit mode. You need a lot more, because a bunch of information (number of vertices, coordinates, edges, indices of the polygons adjacent to each edge, etc.) has to be kept for each polygon.
2048 bytes per polygon may be appropriate. Surely the vertex positions are kept separately, and the polygon information contains mostly indices. But if you count just the raw data, you can estimate:
x - 8 bytes
y - 8 bytes
z - 8 bytes
? w - 8 bytes
? u - 8 bytes
? v - 8 bytes
? r - 8 bytes
? g - 8 bytes
? b - 8 bytes
v - n*4 bytes (indices of vertices)
e - n*4 bytes (indices of edges)
f - n*4 bytes (adjacent polygon indices)
and so on,

  • alignment to the cache line to work faster

It is not difficult to accumulate 2048 bytes.
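The tally above can be added up mechanically. A rough back-of-the-envelope sketch (the field list mirrors the estimate above and is illustrative, not Blender’s actual BMesh layout):

```python
# Rough per-polygon byte count for the raw data listed above.
# Illustrative only; not Blender's actual edit-mode layout.

DOUBLE = 8   # bytes, if scalars are stored as doubles
INDEX = 4    # bytes per index
n = 4        # vertices per polygon (a quad)

per_vertex_scalars = ["x", "y", "z", "w", "u", "v", "r", "g", "b"]
scalar_bytes = len(per_vertex_scalars) * DOUBLE   # 9 * 8 = 72

# vertex indices + edge indices + adjacent-face indices
index_bytes = 3 * n * INDEX                       # 3 * 4 * 4 = 48

raw = scalar_bytes + index_bytes                  # 120

def align(size, line=128):
    """Round up to the next cache-line multiple."""
    return -(-size // line) * line

print(raw, align(raw))   # 120 128
```

So the raw fields alone land around 120 bytes, rounded up to 128 by cache-line alignment; the rest of the way to 2048 would have to come from adjacency structures, selection flags, undo bookkeeping, and the like.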

But of course, if time is spent on icons and gizmos, that is unlikely to help improve the quality of the really important things.


Renderman needs around 20 floats per primitive, which is 80 bytes; that’s nowhere near the 2 thousand Blender needs.


80 bytes may run slower than 128

Yes, data structure alignment is probably making things faster in this situation, but in any case, I really hope this is the next big priority :slight_smile:
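Alignment padding is one reason a nominally small record grows: the compiler inserts unused bytes so each field sits at its natural boundary. A minimal illustration with Python’s `ctypes` (the struct `Padded` is made up; sizes assume a typical x86-64 ABI where `double` is 8-byte aligned):

```python
import ctypes

# A 1-byte flag followed by an 8-byte double: the compiler inserts
# 7 padding bytes so the double is 8-byte aligned.
class Padded(ctypes.Structure):
    _fields_ = [("flag", ctypes.c_char),     # 1 byte
                ("value", ctypes.c_double)]  # 8 bytes, 8-byte aligned

print(ctypes.sizeof(Padded))  # 16, not 9
```

The same effect at a larger scale is why an 80-byte payload may get padded out to a 128-byte cache-line slot, as suggested above.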


Geometric mesh data for comfortable editing of polygonal faces is complex and requires a lot of memory. FBX and the like are a purely final, primitive representation of the geometry.

That’s why, when I write that the developers are doing something wrong, they argue with me and say that I’m the fool and they are all smart.
Where do you go in those moments? )))

The vertices are shared between triangles, so a 100-triangle mesh doesn’t have 300 vertices; in reality you only need about 51. And a base model doesn’t have UVs or vertex colors.
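That sharing ratio follows from Euler’s formula if we assume a closed (watertight) triangle mesh: V − E + F = 2, and each edge is shared by exactly two triangles, so E = 3F/2 and therefore V = F/2 + 2. A quick check of the figure quoted above:

```python
def closed_trimesh_vertex_count(faces):
    """Vertex count of a closed triangle mesh via Euler's formula:
    V - E + F = 2, with E = 3F/2 (each edge shared by 2 triangles)."""
    edges = 3 * faces // 2
    return 2 + edges - faces   # V = 2 - F + E

print(closed_trimesh_vertex_count(100))  # 52, not 300
```

For open meshes with boundaries the count is a bit higher, but the point stands: shared vertices cut the storage to roughly a sixth of the naive 3-vertices-per-triangle figure.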

A float needs 4 bytes.

Not 4, but 8. Because, most likely, the numbers are not 4-byte floats but 8-byte doubles.
And I pointed out that the vertices are stored separately, with only indices in the polygons.

But I’m not sure; it’s possible that you are right and the coordinates are stored as 4-byte floats.
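For what it’s worth, the two layouts being debated differ by exactly a factor of two per coordinate; Python’s `struct` module shows the raw sizes. (Blender’s C code does store vertex coordinates as 4-byte floats, but this snippet only illustrates the size difference being argued about.)

```python
import struct

# Size of one vertex position (x, y, z) under each precision.
single = struct.calcsize("3f")   # three 4-byte floats
double = struct.calcsize("3d")   # three 8-byte doubles

print(single, double)  # 12 24
```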