Will performance be a main target in 2.81?

No, Pablo is also working on some interesting mesh tools, for example the Blueprint tool:
https://developer.blender.org/D5344

So…

  • First of all, making the mesh core system multithreaded, which would surely be the one thing that gives us a lot of headaches

  • Then, adopting tricks such as subdividing objects into more portions
    when they start to become data-heavy …

  • And finally, seeing whether there are "compression and decompression algorithms" that allow better management of mesh data size and memory I/O … without sacrificing performance.

  • If these operations can be carried out by the GPU rather than the CPU … so much the better
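To make the compression idea from the list above concrete, here is a minimal sketch using Python's `zlib` on a fake vertex buffer. The buffer layout and sizes are made up for illustration; this is not Blender's actual internal mesh representation, and whether such a scheme wins in practice depends on whether the I/O saved outweighs the CPU time spent compressing.

```python
import zlib
from array import array

# Hypothetical vertex buffer: 100k vertices, 3 floats each.
# (Illustrative only -- not Blender's real mesh layout.)
verts = array("f", (float(i % 977) for i in range(100_000 * 3)))

raw = verts.tobytes()
packed = zlib.compress(raw, level=1)  # low level = fast, weaker compression

# Round-trip: decompress and rebuild the vertex array.
restored = array("f")
restored.frombytes(zlib.decompress(packed))

assert restored == verts  # lossless round-trip
print(f"raw: {len(raw)} bytes, compressed: {len(packed)} bytes")
```

Real mesh data (with repeated topology patterns and clustered coordinates) tends to compress far better than random data, but the trade-off always has to be profiled per workload.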

I would also be interested to hear about performance in other areas such as undo, hair editing with children, weight painting on high-poly meshes, import/export (now that the GSoC project for that has died), etc.

As a side note, having just recently converted (at least partially) to Blender, I have to say that Blender users are extremely spoiled. :slight_smile: The level of transparency is unparalleled; having access to developer email threads and conversations about development is unlike anything I've seen in production.

Normally developers would work in sprints, with a dedicated coordinator gathering and filtering feedback, passing it on, and leaving the devs alone for a given amount of time to work. I'm looking at the bug tracker, and it's the main developers spending time addressing and closing tickets, which to me just seems crazy. It must be demotivating and a huge waste of their time.

I'm really glad that the Epic grant seems to be partially used for organization; right now things seem very messy and wasteful.

Don't get me wrong, it's easy to get excited about having access to all this information; I'm just concerned that it makes development much harder than it needs to be.


Is there any particular mode which is slower?

The main problem is undo in object mode with a lot of objects. That takes a lot of time, making anything hard to do in complex scenes.

But I don't know if it is worse than before.
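The undo cost described above is typical of snapshot-style undo, where each step stores (and on undo, restores) state for the whole scene, so cost grows with scene size rather than with the size of the edit. A toy sketch of the difference, using a made-up dictionary "scene" rather than Blender's actual undo system:

```python
import copy

# Toy "scene": many objects, each with some data. Purely illustrative;
# Blender's real undo operates on its own internal data structures.
scene = {f"object_{i}": {"location": [i, 0, 0]} for i in range(1000)}

# Snapshot undo: copy the entire scene per step -- cost is O(scene size).
snapshot = copy.deepcopy(scene)

# Delta undo: record only what the step is about to change -- O(edit size).
delta = {"object_42": copy.deepcopy(scene["object_42"])}
scene["object_42"]["location"] = [99, 99, 99]  # the edit

# Undoing via the delta restores just the touched object.
scene["object_42"] = delta["object_42"]
assert scene == snapshot  # back to the pre-edit state
```

With one edited object out of a thousand, the delta approach touches roughly a thousandth of the data the snapshot does, which is why per-step cost that scales with scene size hurts so much in heavy scenes.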


According to recent findings, with an SSD the situation improves a lot … so for heavy scenes it is a question of bottlenecks between RAM and the hard disk …
(You have probably already heard of this; I'm just making it known here too.)


If an SSD improves performance, then working with a classic hard disk must be hell.

I hadn't heard of this, although HDD speed should only be related to initial loading, not undo (unless the system swaps memory to disk).


Probably when people start to increase the number of objects in their scenes, this is exactly what happens … Blender (or the OS) starts swapping RAM to disk and everything slows down …

Another question of mine:
does the same happen for GPU RAM?

To give a more direct answer to this question.

Developers have replied to the mailing list:
Blender 2.8x - support and core development goals.

While performance is mentioned for 2.81, it's not a main target.
I can only speak for myself; here are some reasons:

  • There is a huge number of bug reports coming in.
  • There are projects that were only finished to a basic level for 2.80, which need attention.
  • Some projects missed 2.80 release but are completed and ready to be committed.

Having said that, I suspect there are a handful of noticeable performance regressions which users run into. These should get prioritized above more general optimizations.


Fair enough, and the truth is that this brief answer is the most extensive commentary I've seen from developers on this subject in Blender. At least half of the performance issues discussed here are new in 2.8 and didn't exist before. The new SubD performs much worse than the one in 2.79, and the current undo is simply unusable in large scenes. These problems should be treated as priority bugs to fix in 2.81.

And we have been pointing out the other performance problems for years, including some that were supposed to be solved thanks to the new depsgraph, or at least it was expected that performance problems in some areas, such as working in scenes with many objects, would be reduced. While there are many interesting features that could be added to Blender, if you can't edit complex scenes, it's of little use.

I think users have been missing information for many years about the performance problems, their origins, their possible solutions, and the plans to solve them. It would be good to have a post on code.blender.org, or on this forum, just talking about Blender's performance problems and specifically answering some questions (beyond the obvious answers):

  • Why is it slow to edit objects above 50k tris?

  • Why is performance so bad in complex scenes?

  • Why is SubD slow?

  • Won't the depsgraph solve some of these problems?

  • Can these problems be fixed without deeply touching the Blender core again?

  • When fixing performance problems in dense meshes, will the need for face and edge groups be taken into account? They are already needed today, and mandatory for Everything Nodes.

  • How will those problems be solved? How complex are the solutions?


Thank you, I appreciate you pitching in and addressing concerns. Reading information here is much more accessible than going through mailing lists.

I guess improving performance in certain areas isn't something that could be fixed in a 3-month cycle either. I'm glad it's on the radar, though.

A relevant factor is that with today's hardware power, the large amount of RAM available, and the power of modern CPUs and GPUs, it is normal that people now expect high performance in geometry-heavy scenes …
Blender's competing applications have accustomed people to high performance for some years now …
And now that new people are migrating over, this factor becomes substantially more evident. Above all, now that there are no more excuses about the unfamiliar interface, the complaints about performance emerge … and for this reason alone you must prioritize resolving these problems …


Yep, good point. For big companies hardware expenses are not an issue; losing an hour or more a day per artist because of performance issues is.

In some areas, like loading times, Blender is really good compared to Maya for example (I've seen anim scenes take 45+ minutes to load/save); in other areas it'd be very frustrating to use in bigger productions, in some cases even for smaller projects.

At least bringing 2.81 performance back to the 2.79 level should be the main concern before further optimizations, so we have something to work with.


Now I think it is becoming clear that performance should be a key focus, since more and more people are pointing it out.

The previous goals:

Based on the feedback in the previous roadmap article, thereā€™s enough to do for 100s of developers on projects the coming years. That means we will have to prioritise. Whatā€™s possible to start working on in Q2 is:

  • Overrides and asset management
  • Better support for large scenes or complex environments
  • Modifier nodes
  • Physics & real-time mode (for designing simulation and baking)
  • Cycles: reviewing/adding patches

If the development fund grows beyond 30k:

  • Particles and hair nodes
  • Texturing tools and tools for procedural textures
  • Painting and Sculpting improvements
  • Better snapping and precision modeling
  • Cycles: denoising
  • Compositor

Let me add a new line since the dev fund is way beyond initial plans:

If the development fund grows beyond 60k:

  • The core will be greatly improved so Blender can meet professional performance expectations.

With great money comes great expectation.


Giving off-the-cuff answers to these questions risks speculation and spreading misinformation.

And full answers require more investigation, which is better done when we're ready to resolve the performance regressions.

In general there are areas we know can be optimized and just didnā€™t get attention yet, so I donā€™t think it helps to communicate details here, when further profiling & investigation is needed.

For example, in edit-mode buffers are being copied to the GPU which donā€™t need to be, which is an obvious candidate for optimization.
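The redundant-copy problem described above is typically addressed with a dirty flag: a buffer is re-uploaded to the GPU only when the source data has actually changed since the last draw. A minimal sketch of the pattern, with the GPU transfer simulated by a counter; the class and method names here are hypothetical, not Blender's draw-manager API:

```python
class CachedBuffer:
    """Re-upload vertex data to the GPU only when it has been marked dirty."""

    def __init__(self):
        self.gpu_copy = None
        self.dirty = True   # nothing uploaded yet
        self.uploads = 0    # counts simulated GPU transfers

    def mark_dirty(self):
        """Called by edit operations that modify the CPU-side data."""
        self.dirty = True

    def bind(self, cpu_data):
        """Called on every draw; uploads only if the data changed."""
        if self.dirty:
            self.gpu_copy = bytes(cpu_data)  # stand-in for a GPU transfer
            self.uploads += 1
            self.dirty = False
        return self.gpu_copy


buf = CachedBuffer()
verts = bytearray(64)

buf.bind(verts)    # first draw: uploads
buf.bind(verts)    # redraw with no edits: no upload
verts[0] = 1       # an edit...
buf.mark_dirty()   # ...invalidates the cached GPU copy
buf.bind(verts)    # next draw uploads again

assert buf.uploads == 2
```

The win is that redraws without edits (orbiting the view, UI refreshes) become free of transfer cost, which matters most exactly on the dense edit-mode meshes discussed in this thread.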


Getting a pretty nice increase in edit speed in the latest build (cb7ead2e3b62): cube subdivided 8 times, applied, now 5.2 fps instead of the 3 fps I had before.

Bump. No way this thread should go unacknowledged :bulb:


Don't bother