Executable size

I’m worried about the size of the program. When I look at other software, for instance Final Cut Pro, the actual executable is only a couple of MB; in the case of Final Cut Pro, just 3.7MB. When you check the rest of the package, 433MB goes to a compressor plugin bundle? If you put every compression algorithm in the world into a single program, it would be less than 433kB. There’s something wrong there as well.

The original Blender was only 2MB. Now it’s well over 390MB. I’m worried that a lot of dummy functions get loaded into the executable that no one uses. It’s also weird that I have to install a new compiler. Why wouldn’t regular “make” work? What’s different?

Can you verify whether the compiler is set to include only the functions that the program actually uses? In the worst case, people could hide illegal data in a dummy function as a variable without having to worry about it slowing the program down, because it won’t be loaded at runtime since the program never uses it. Someone wanting to extract it would only need to scan the executable for the right file headers, like a JPG’s.
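For illustration, a rough C sketch of that kind of scan: a naive search for the JPEG start-of-image marker (FF D8 FF) in an arbitrary file. The file name handling and loop structure are just for the example, and finding those bytes obviously does not prove an actual image is embedded.

```c
#include <stdio.h>

/* Scan a file for the JPEG start-of-image marker (FF D8 FF) and print the
 * byte offsets where it occurs. Purely illustrative of the "scan the binary
 * for file headers" idea above. */
int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }
    unsigned char window[3] = {0, 0, 0};
    long offset = 0;
    int c;
    while ((c = fgetc(f)) != EOF) {
        /* Slide a 3-byte window over the file. */
        window[0] = window[1];
        window[1] = window[2];
        window[2] = (unsigned char)c;
        if (offset >= 2 && window[0] == 0xFF && window[1] == 0xD8 && window[2] == 0xFF)
            printf("possible JPEG header at offset %ld\n", offset - 2);
        offset++;
    }
    fclose(f);
    return 0;
}
```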

Blender statically links many libraries, which indeed makes the executable big. I don’t have any particular reason to believe that the executable includes a lot of unused code that we could remove. If you add everything up, there’s just a lot of stuff, including for example the OpenImageDenoise neural nets, which are not actual code but are still embedded in the executable.

Making the executable small and loading most functionality through shared libraries, as some other software does, is not likely to reduce the effective size. In fact it’s probably the opposite: with dynamic linking, the linker is unable to remove unused code from shared libraries.
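As a side note on how that removal works in the static case, here is a minimal C sketch (hypothetical file name; standard GCC/Clang flags). When linking an executable with section garbage collection, the linker can discard a function that nothing references; in a shared library the symbol is exported, so it typically has to stay.

```c
/* dead_code_demo.c -- minimal sketch of linker dead-code elimination.
 *
 * Build with, for example:
 *   gcc -O2 -ffunction-sections -fdata-sections dead_code_demo.c \
 *       -Wl,--gc-sections -o demo
 *
 * With -ffunction-sections each function gets its own section, and
 * --gc-sections lets the linker drop the section containing never_called()
 * from the final executable, since nothing references it. Without those
 * flags it is typically kept. */
#include <stdio.h>

/* Never referenced; a candidate for removal at link time. */
int never_called(void)
{
    return 42;
}

int main(void)
{
    printf("hello\n");
    return 0;
}
```

You can verify the result with `nm demo | grep never_called`, which should come up empty when the section was garbage-collected.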

Not sure what you mean by a new compiler or which operating system you are using, but generally Blender has the same minimum requirements as the VFX Reference Platform, which is pretty conservative.

I don’t think the security concerns about unused code really make sense.


I don’t see why this would cost gigabytes. You also don’t need neural nets. Essentially you’re just blurring it a bit and smoothing it. Those are really simple algorithms: every pixel gets the mean of itself and the pixels around it, with a radius set by the user.

Edit: I mean OpenImageDenoise.

Edit2: You could just use the mean of a pixel and only the pixels directly bordering it. And if you want to preserve more detail, instead of just doing two passes over the original image, why not take the average of one smoothing pass and two smoothing passes?
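For reference, a minimal C sketch of the filter as described: a 3×3 mean filter on a grayscale float buffer, plus a blend of one and two smoothing passes. The buffer layout, border clamping, and the 50/50 blend weight are assumptions of the example, not anything Blender or OIDN actually does.

```c
#include <stdlib.h>

/* 3x3 mean ("box") filter on a grayscale float image, clamping at borders. */
static void box_blur_3x3(const float *src, float *dst, int width, int height)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            float sum = 0.0f;
            int count = 0;
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    int nx = x + dx, ny = y + dy;
                    if (nx >= 0 && nx < width && ny >= 0 && ny < height) {
                        sum += src[ny * width + nx];
                        count++;
                    }
                }
            }
            dst[y * width + x] = sum / (float)count;
        }
    }
}

/* Blend one pass and two passes of the blur, as suggested above:
 * out = 0.5 * blur(img) + 0.5 * blur(blur(img)).
 * Error handling is kept minimal for brevity. */
static void blend_two_passes(const float *img, float *out, int width, int height)
{
    size_t n = (size_t)width * (size_t)height;
    float *once = malloc(n * sizeof(float));
    float *twice = malloc(n * sizeof(float));
    if (!once || !twice) {
        free(once);
        free(twice);
        return;
    }
    box_blur_3x3(img, once, width, height);
    box_blur_3x3(once, twice, width, height);
    for (size_t i = 0; i < n; i++)
        out[i] = 0.5f * (once[i] + twice[i]);
    free(once);
    free(twice);
}
```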

I wish it were that simple, but there’s a large body of research and engineering effort that says otherwise.

What does it look like when you do what I suggested? And what algorithms does my Canon use, for instance, when I set it to a lower ISO value? When I set it to ISO 3200, my images also get this kind of noise. What can we learn from that?

Since I had the numbers: the OIDN kernels contribute ~36MB of the final binary size. The total size of Blender is just death by a thousand cuts, a couple of MB here, a couple of KB there; in the end it all adds up. However, there are no chunks of “we’ll never need this, but let’s keep it anyhow”.

I’m not really interested in having this discussion; there are many research papers if you are interested in the topic. I was just giving one example of something that contributes to the executable size.

If you’re not interested in having this discussion, next time please don’t reply. You’re not even providing actual facts to back up your point.

Other than that, most of the research done in universities is horse shit: rehashing other people’s work, repetitive, unclear, foggy, etc. They always turn simple shit into something difficult and incomprehensible. My solution is simple and fast.

The same goes for all science. Try looking up how to use a JFET or tubes. How to use them is simple; you sure as hell don’t need capacitors, especially not for audio-range signals, but that’s not what the Internet and universities say.

Reading the RCA radio manuals is also very much about reading between the lines and eliminating horse shit.

Computer Science, which I studied and graduated from after first graduating high school at the preparatory scientific level, is largely horse shit with people telling you that it works, so just accept it. What if the systems break down? Who’s going to fix it or make new machines?

And let’s be honest, most of the time it doesn’t work, because they overcomplicated things and it’s highly error prone.

Alright, that’s enough. If you want to chase strange theories that all compression algorithms would fit in under 400KB and that AI denoisers are just “horse shit”, that’s fine, but not on our forums.