Proposal: Bump minimum CPU requirements for Blender

A quick note that even the latest CPUs still have security mitigations.

I had forgotten that Intel said it would not provide mitigations for CPUs from before 2008.

I think no AMD CPU that can run 64-bit Windows 10 lacks SSE3. AMD introduced SSE3 in 2005, but CMPXCHG16B only in 2006.

@retro you have made your point; please slow down a bit.

3 Likes

I’m not sure what Blender’s obsession with supporting completely outdated hardware is about to begin with, especially if cutting it off will benefit the performance or stability of Blender. Other 3D packages did this a long time ago.

Users on such outdated hardware most likely aren’t there for cutting-edge new Blender features; they are hobbyists who can easily use an older version of Blender for their needs, especially now that we have LTS builds, with 3.3 supporting them until 2024.

1 Like

My purely personal opinion on this is that any tradeoff has to be weighed very carefully. For me this proposal can be restated as “Can our most-affluent users get a performance increase if we abandon our poorest users?” If so, is it enough to be “worth it”?

It is easy to judge this in terms of our own local values and economies. But in many areas of the world we have users for whom the cost of a computer is more than the average yearly income. I personally find greater satisfaction in helping our poorest users use Blender than in speeding up the work of those with the most.

That said, “x86-64-v2” (CMPXCHG16B, LAHF-SAHF, POPCNT, SSE3, SSE4.1, SSE4.2, SSSE3) sounds reasonable to me. Seems like a nice change for Blender 4.0 if we announce the change fairly early. This project tends to give little notice about large changes.
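For the curious, that exact feature list can be probed with raw CPUID. Here is a minimal sketch using GCC/Clang’s `<cpuid.h>` — illustrative only, not Blender’s actual startup check; the bit positions follow the Intel/AMD manuals:

```c
/* Sketch: checking the x86-64-v2 feature set via CPUID leaf 1 (ECX bits)
 * and leaf 0x80000001 (LAHF/SAHF in long mode). Not Blender code. */
#include <stdio.h>
#include <cpuid.h>

int main(void) {
    unsigned eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;

    /* CPUID.01H:ECX feature bits */
    int sse3   = !!(ecx & (1u << 0));
    int ssse3  = !!(ecx & (1u << 9));
    int cx16   = !!(ecx & (1u << 13)); /* CMPXCHG16B */
    int sse41  = !!(ecx & (1u << 19));
    int sse42  = !!(ecx & (1u << 20));
    int popcnt = !!(ecx & (1u << 23));

    /* CPUID.80000001H:ECX bit 0 = LAHF/SAHF usable in 64-bit mode */
    int lahf = 0;
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
        lahf = !!(ecx & 1u);

    int v2 = sse3 && ssse3 && cx16 && sse41 && sse42 && popcnt && lahf;
    printf("x86-64-v2 capable: %s\n", v2 ? "yes" : "no");
    return 0;
}
```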

9 Likes

The x86-64 feature levels are controversial. They were defined between Intel, AMD, SUSE, and Red Hat. x86-64-v2, for example, is aimed at enterprise hardware and does not include AVX because of the Atom server chips. Ubuntu, Debian, and FreeBSD, for example, do not require this level.
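On the tooling side, GCC 11+ and Clang 12+ accept the levels directly as compile targets (e.g. `-march=x86-64-v2`), and, if I remember right, GCC 12 added the level names to `__builtin_cpu_supports`, so a runtime guard can be a one-liner. A sketch under that assumption, not anything Blender ships:

```c
/* Minimal sketch: querying a whole micro-architecture level at runtime.
 * Assumes GCC 12+, which (as far as I know) accepts the level names in
 * __builtin_cpu_supports; older compilers need per-feature checks. */
#include <stdio.h>

int main(void) {
    if (__builtin_cpu_supports("x86-64-v2"))
        puts("CPU meets the x86-64-v2 level");
    else
        puts("CPU is below x86-64-v2");
    return 0;
}
```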

1 Like

Bah, humbug. I want my performance no matter the cost :laughing:

My guess at a timeline is something like four to five years from now (~2 years until it happens and another ~2 years of official LTS support after that). By then it would be roughly 20-year-old tech. Whether it happens or not, it’s amazing and commendable work.

I just don’t see it as abandonment, because using a previous version of Blender isn’t that bad. You’re still able to work on anything and still able to learn. It would be a really long time before anybody faces a 2.79-versus-2.80-scale change again, I bet. Even then, the knowledge is relevant and transcends the application.

Also: open source. I hereby volunteer to help whoever, in the year 2525 (if man is still alive), with Blender 3.5.

The problem is that Linux, while attractive for giving old hardware a new lease on life, also makes it more difficult to get old versions of software running without compiling them yourself… possibly having to compile other dependencies too, if the versions currently packaged by the distro are no longer API-compatible or have been dropped because nothing uses them anymore… which in turn could require patching or other effort to get them to compile.

This is made worse by the fact that no distro I’ve daily-driven includes the LTS branches of Blender, only the mainline versions.

(And solutions like AppImage aren’t an easy fix, because it’s hard to figure out which bundled libraries will be dropped and which must be provided by the system, to avoid the “every time I update my distro, I need to delete at least one bundled library from one of my GOG.com games to fix a newly introduced segfault-on-start issue” problem.)

1 Like

I hereby volunteer to compile Blender for you in ~5 years, should it still be necessary. If you need HIP, I hope AMD has released an SDK or something for it by then :wink:

I think my stance can be summed up as: desperate times call for mildly inconvenient measures. Especially in context: if you can’t get hold of hardware that meets the minimum requirements, you are going to have to make up the lost time and energy in other ways. Whether that’s compiling, modifying the source, or just making do with an older OS that plays nice with it — keep it offline, use it just for work, whatever. If it matters enough, it matters enough. I, and no doubt many others more capable, will help.

2 Likes

That falls victim to the “if people need to ask, it’s already too late” problem.

Only a tiny fraction of people will go to the effort needed to find you, and a smaller fraction still will follow through.

If support for older x86_64 CPUs is to be dropped before the less enterprise-oriented distros (i.e. distros other than Red Hat, SUSE, and their derivatives) drop it, then there should be a well-considered, official plan for maximizing the ability of people on those distros to get working LTS builds, even if that is just outreach encouraging them to also package the LTS releases as an alternative.

I have full confidence in a well-considered and official plan; the thread existing is evidence enough of that. Nobody will be reliant on me or any unofficial support unless it is purely by their own choice.

You all made your point. :slight_smile: Please let’s not turn this thread into an off-topic discussion. Among the Blender developers replying in this thread, x86-64-v2 sounds like a sane target. Now it’s time for benchmarks.

Once these are there, we can continue discussing the “when”.

Further off-topic comments will be removed!

3 Likes

Hi, I built Blender on Linux with -march=native and tested the usual demo files.
I got marginal differences in render times, < 1%, and sometimes the default build is even faster.
I can try other settings, but -march=native should cover it, shouldn’t it?

AMD Ryzen 7 4700G
SSE 4.2, AVX2

Cheers, mib

Cycles already uses specialized CPU kernels, tailored to several CPU instruction sets, for its processing. You won’t see much gain there by just building with new options.
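To illustrate the pattern (with made-up kernel names, not Blender’s real API): each kernel is compiled separately with its own instruction-set flags, and the best supported variant is picked once at runtime, which is why a global -march bump gains little in Cycles. A toy sketch:

```c
/* Toy sketch of runtime kernel dispatch. In a real build each kernel's
 * translation unit would get its own flags (e.g. -msse4.2, -mavx2)
 * rather than one global -march. Kernel names here are hypothetical. */
#include <stdio.h>

void render_kernel_sse2(void)  { puts("SSE2 kernel"); }
void render_kernel_sse42(void) { puts("SSE4.2 kernel"); }
void render_kernel_avx2(void)  { puts("AVX2 kernel"); }

typedef void (*kernel_fn)(void);

static kernel_fn pick_kernel(void) {
    __builtin_cpu_init();                  /* GCC/Clang CPU detection */
    if (__builtin_cpu_supports("avx2"))
        return render_kernel_avx2;
    if (__builtin_cpu_supports("sse4.2"))
        return render_kernel_sse42;
    return render_kernel_sse2;             /* x86-64 baseline fallback */
}

int main(void) {
    kernel_fn kernel = pick_kernel();      /* chosen once at startup */
    kernel();
    return 0;
}
```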

You might see it with heavy geometry processing or particle systems, though. Unfortunately, things like the particle system, cloth simulation, and Mantaflow are end-of-life and not being maintained, improved, or fixed. Those are the kinds of areas that could see decent improvement, but gains there probably don’t mean much any longer.

Maybe some boolean scenarios would improve? Maybe operations which force the recalculation of mesh normals in either edit or sculpt mode would see some benefit? Maybe texture painting becomes better? Maybe complex armatures animate faster? etc. etc.

The remaining areas that could show an improvement are harder to quantify and would require much more work to check — for instance, all of the dependencies that Blender uses. Recently SSE was enabled for FFmpeg, which yielded good results in VSE scenarios. Maybe OpenVDB volume processing (not rendering) through geometry nodes would show some gain.

There needs to be a plan for what user-facing scenarios we’d check. The above is not the complete list.

5 Likes