Arithmetic types in Blender

Blender uses (single precision) float for coordinates and pretty much everything else that can have fractional values. The BLI_math and BLI_math_… routines are mostly routines that take floating point arguments, do floating point calculations, and return floating point results. There are a few integer versions of routines and a very few double versions of routines.

A question for discussion, in two parts:
A) Should everything change from float to double?
B) If not, are there other places in the code where calculations could or should be done in double precision, or even exact multiprecision rational arithmetic?

Some thoughts I have on these questions.

Question A: everything double?

Some advantages of this:
a) Occasionally users complain that for very big scenes, the difference in scale between the smallest feature size and the size of the coordinates needed to position things in that space leads to problems when we only have single precision. With single precision we can only really deal with about 6 decimal orders of magnitude of scale difference, whereas with double precision we could deal with about 15 orders of magnitude (see the small numeric sketch after this list).
b) Similar problems come into play depending on the relative scales of the view plane and the 3d space. As an example, numerous bugs have been reported with the knife tool that eventually turned out to be related to limited-precision problems, and got fixed with very careful epsilon tweaking (that is, making comparisons against some small “epsilon” instead of zero, and adjusting the size of epsilon until things “work”).
c) Presumably other 3d content creation packages use doubles (anyone know which?) and there will be a precision loss when exporting from them and importing into Blender - possibly causing geometric anomalies in Blender when none existed in the original.
d) We could stop having to use “f” suffixes on all our floating point constants, and stop needing (float) casts when using standard library routines that only deal with doubles.
e) Some external packages we’ve incorporated into Blender use doubles internally. This incurs costs like: conversion back and forth; extra storage to store double versions of our data in order to pass to those packages; losing the full advantage of the precision of those calculations by forcing the answers back into floats.
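
To make the scale limit in (a) concrete, here is a tiny standalone C illustration (not Blender code): at a coordinate about 10 km from the origin, a float can no longer represent a 0.1 mm move, while a double can.

    #include <stdio.h>

    int main(void)
    {
      float f = 10000.0f;   /* an object positioned 10 km from the origin */
      double d = 10000.0;

      f += 0.0001f;         /* try to move it by 0.1 mm */
      d += 0.0001;

      printf("float:  %.7f\n", f - 10000.0f);  /* prints 0.0000000 - the move is lost */
      printf("double: %.7f\n", d - 10000.0);   /* prints 0.0001000 */
      return 0;
    }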

Some disadvantages I see (I’m sure there are more; discuss):
a) Large effort to switch. First, mechanical changes like float → double, sinf → sin, glVertex3f → glVertex3d, 0.0f → 0.0, etc. Probably a script can do that for the most part. Then harder things, like making sure that no code does arithmetic, union/type-punning, or alignment tricks that assume float sizes and alignment. Then the even harder problem of finding all the “epsilon tweaking” parts of the code and trying to reverse engineer why the existing epsilons were chosen and what should change if precision is double. And finally, the enormous burden on external users of Blender’s API: all the extant plugins, external renderers, etc. likely have dependencies on the existing float interface. One could perhaps shim that for a while with converters to the float-based API, but long term that is an awful solution.
b) Non-backward-compatible change to .blend files in a fairly big way. Maybe OK for 2.8, but we would either need massive do_versions support or need to provide a way to convert all existing pre-2.8 .blend files into the new format.
c) Increased memory usage: probably almost doubles the main memory needed for large models. Similarly for disk storage.
d) Performance penalty? I’m less sure here; it would be good to see some benchmark comparisons. Maybe the biggest effect is simply the extra bandwidth needed between CPU, memory, and GPU. I’m less sure about the actual computation cost.

I’m guessing disadvantage (a) is the biggest show-stopper and is what probably makes this whole idea a non-starter. But I’m interested in others’ thoughts.

Question B: places in the code where calculations could or should be done in double precision, or even exact multiprecision rational arithmetic?

I came upon this question in thinking about Boolean operations on meshes. Any time one has to calculate the position of new vertices based on geometric and matrix computations over other vertex positions, we run into the problem that limited precision arithmetic means that (a) we have to be careful and probably special-case things when we get values near but not quite zero - certainly whenever division takes place; and (b) things that are true based on pure mathematical and geometric theorems when all calculations are exact may become untrue when arithmetic is done and results are stored only approximately. For instance, one calculation might say a point is inside a given triangle while another, mathematically equivalent one comes to the opposite conclusion. It is hard to write algorithms that always work (and don’t crash!) when building on the quicksand of not being able to trust your math and geometry theorems.

The Boolean operations case is a notorious example of this. I can find dozens of papers in the literature along the lines of “other implementations fail on these cases because they don’t use robust arithmetic, so here is our way to solve the problem”. Blender’s current BMesh-based boolean code has some careful vertex-sorting and epsilon-tweaked code to try to solve these problems, but it seems likely that we could easily make it fail by feeding in some of the failure examples from the literature (usually cases where intersections lead to very small features - like long triangle slivers).

In general, the code that involves intersection (intersect, intersect (boolean), knife, project knife) will all have these problems lurking in the background, waiting to bite us when someone tries some new example.

What approaches do people take to the problems caused by finite precision arithmetic in geometric algorithms? Here is a brief survey:

  1. Use exact arithmetic.
    This means using a multiprecision package (either multiprecision floats, or multiprecision integers in a rational number package). There are tricks to speed this up: doing the calculations in double first, together with some technique to decide whether that gives enough precision or whether one needs to go to multiprecision; lazy evaluation and caching. This approach is used by the most robust Boolean packages reported in the literature. Maybe the problem of inconsistent decisions is so hard to solve in general that it is necessary to do this. But there are some drawbacks. Speed, obviously, is a big one: it seems like things might go 20x slower than a non-exact floating point solution, even using some of the tricks mentioned above. This is an important issue for Blender, where people want interactive speeds even for operations involving huge models. Another drawback is that it isn’t by itself a complete solution to problem A: when calculated exactly, many cases that are “close enough” will not actually generate coincidences, but instead will create vertices, edges, and faces that are very close to other geometry or very small in length or area. I suppose one would solve this with a post-processing step that dissolves tiny-extent geometry and snaps other very-close geometry to that geometry. Another drawback is just the amount of extra code that would have to be written or pulled into Blender in order to do the exact arithmetic. This is a non-negligible consideration too. (A rough sketch of the “double first, exact fallback” filter idea appears after this list.)

  2. Perturb the input so that everything is in general position, such that we aren’t going to run into cases where inconsistent decisions are made.
    It may not be easy to figure out how much perturbation is enough for this to be true. In any case, this breaks the user requirement that we recognize and incorporate special cases into the output. Maybe post-processing can fix that, but I have my doubts.

  3. Restrict the input precision and then do arithmetic in fixed floating point precision with enough bits to make every decision correctly (e.g., look at how many bits are needed to calculate a determinant given fixed-precision input). I think for this to work, you not only have to fix the number of mantissa bits, but you also have to restrict the actual scale, so that we are essentially working with scaled integers. If it is possible to make this work, it shares the problems of exact arithmetic except that it could be a lot faster.

  4. Snap all vertex calculations to a grid.
    This is kind of a disguised version of the previous approach, I think. It makes it easy to find coincident vertices (or does it? if snapping is done by rounding, you could have very near things snapped to two different grid points), but I’m not sure it solves much else.

  5. Do coincidence finding as part of the intersection routines, using epsilon tests. This is what the current BMesh intersect does, and indeed, what most code in Blender does. It is easy enough to program, and will find most coincidences (others may have to be found by preprocessing and/or postprocessing). But it is kind of a cross-your-fingers-and-hope approach to the problems of finite precision arithmetic, so it could lead to crashes or nonsense results on unforeseen inputs. Maybe we combine this with using double precision arithmetic in code like this, where we are more likely to run into trouble.
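
To make the speed trick in option 1 (and the flavor of option 3) concrete, here is a rough sketch of a filtered 2D orientation predicate: evaluate the determinant in double, trust its sign only when it clearly exceeds a conservative rounding-error bound, and fall back to exact rational arithmetic (GMP’s mpq_t here) otherwise. The error-bound constant is borrowed from Shewchuk’s published predicates; treat the whole thing as a sketch rather than verified robust code.

    #include <stddef.h>
    #include <math.h>
    #include <gmp.h>

    /* Exact sign of the orientation determinant, using rationals built from the
     * (exactly convertible) double inputs. Returns +1, 0, or -1. */
    static int orient2d_exact(const double a[2], const double b[2], const double c[2])
    {
      mpq_t ax, ay, bx, by, cx, cy, acx, acy, bcx, bcy, lhs, rhs;
      mpq_inits(ax, ay, bx, by, cx, cy, acx, acy, bcx, bcy, lhs, rhs, NULL);
      mpq_set_d(ax, a[0]); mpq_set_d(ay, a[1]);
      mpq_set_d(bx, b[0]); mpq_set_d(by, b[1]);
      mpq_set_d(cx, c[0]); mpq_set_d(cy, c[1]);
      mpq_sub(acx, ax, cx); mpq_sub(acy, ay, cy);
      mpq_sub(bcx, bx, cx); mpq_sub(bcy, by, cy);
      mpq_mul(lhs, acx, bcy);
      mpq_mul(rhs, acy, bcx);
      const int s = mpq_cmp(lhs, rhs);
      mpq_clears(ax, ay, bx, by, cx, cy, acx, acy, bcx, bcy, lhs, rhs, NULL);
      return (s > 0) - (s < 0);
    }

    /* Which side of line (a, b) is c on? Fast double path; exact fallback only
     * when the double result is too close to zero for its sign to be trusted. */
    static int orient2d_filtered(const double a[2], const double b[2], const double c[2])
    {
      const double detleft = (a[0] - c[0]) * (b[1] - c[1]);
      const double detright = (a[1] - c[1]) * (b[0] - c[0]);
      const double det = detleft - detright;
      /* Conservative bound on the accumulated rounding error of `det`. */
      const double errbound = 3.3306690738754716e-16 * (fabs(detleft) + fabs(detright));

      if (det > errbound) {
        return 1;
      }
      if (det < -errbound) {
        return -1;
      }
      return orient2d_exact(a, b, c);  /* inconclusive: let the slow exact path decide */
    }

The 3D predicates a Boolean implementation actually needs (point vs. plane, point in triangle) would need the same treatment, which is part of the extra-code burden mentioned in option 1.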

What are people’s opinions on this? Is this a problem others have run into, and if so, do you have a favorite solution?

I tried making double versions of all the most basic BLI_math… routines and stashed them in a patch (see Blender Archive - developer.blender.org), but it seems annoyingly invasive to our code, so I am not recommending that we incorporate that patch, at least not unless and until we reach consensus that there are a number of places in Blender where we’d like to use double math while keeping the core of Blender using floats.

If we wanted to try multiprecision arithmetic, there are some open source packages we could consider pulling into Blender, like GMP (https://gmplib.org/) and MPFR (http://www.mpfr.org/), which is layered on top of GMP. I haven’t investigated how much code that would add.


The biggest performance hit will probably be rendering on the GPU; consumer cards have notoriously slow FP64 performance compared to FP32 (24x slower on Kepler to 32x slower on Maxwell; I didn’t look hard enough for Pascal/Volta numbers).

While I do see value in having the modeling parts of Blender use FP64, I don’t see Cycles moving to it anytime soon.


Whoops, I realized that my discussion of question B in the original post has a dangling reference to a “Problem A”. This is the result of cutting and pasting from a larger document where I was musing to myself about Booleans. Problem A is:

Sometimes the input will have cases where the intersections have “coincidences” (sometimes called “geometric singularities” - matching verts, verts on edges, verts on faces, edges on edges, edges on faces, faces on faces) or near-coincidences. The user expectation is that if these coincidences happen between the parts of the mesh that are being intersected, those special cases will be recognized and make their way into the geometry of the output. Sometimes these coincidences are exact because, e.g., they happen in some orthogonal-to-axis plane and one can construct geometry with exactly matching coordinates on that plane. Other times they are “close enough” that the user would consider them coincidences: either they placed them there by eye, or some other Blender calculation inexactly placed the coordinate there (subdivision of a non-axis-aligned edge, for instance). Users would expect that such “close enough” cases get recognized the same as the exact cases.

As a practical example of this, take a default cube, copy it, and rotate the copy by 45 degrees about one of the edges. In the user’s mind, the edge rotated about will match up exactly with the edge on the original cube, and the faces perpendicular to that edge will be coplanar with the corresponding faces of the original cube. Neither of these is true in the current way Blender works (in fact, surprisingly, even the 8 vertices of the default cube do not all have exact +1 or -1 coordinates!). Yet a user would hope that a “union” boolean would make one face each out of the overlapping coplanar faces on each side. If you use exact arithmetic, this will not happen.

(One could take the approach of the current BMesh boolean code and just say “we don’t handle such cases”; but wouldn’t it be nicer to be able to handle them?)

About Question A, the performance and memory impact would be quite problematic. If we did switch we wouldn’t want to do it for images for example, and indeed for Cycles or Eevee/OpenGL it would be slow on the GPU.

If we had C++ and operator overloading, vector classes, templates, … then it would be easier to switch over some parts but that’s not so likely to happen in the Blender C code.

Houdini is the only major 3D app I know of that uses doubles in many places, but there could be more.

Note that doubles mean the epsilons can become smaller, but they still need to be there. Correctly implementing knife or boolean operations wouldn’t be fundamentally easier, but precision bugs in the code would probably not cause problems as often.

About Question B and booleans. The input meshes can already have some imprecision, so I think you need epsilons/snapping heuristics at that point already.

I’m not so familiar with boolean algorithms, but there may be some relatively simple things that can be done to improve existing code like this:

len_squared_v3v3(a, b) < epsilon

One is to avoid using an absolute epsilon value, which doesn’t work well for meshes at different scales. A utility function with a relative epsilon could help.

The other is that the test needs to be consistent everywhere, and passing a and b in a different order may give different results (not sure if that’s the case here, but it could be, for example, when iterating over loops of adjacent faces in opposite order). So it can help to, e.g., swap a and b if a > b before doing any float math.
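
As a rough illustration of both suggestions, something along these lines could work (these helpers are hypothetical sketches, not existing BLI_math functions):

    #include <math.h>
    #include <stdbool.h>

    /* Compare squared distance against an epsilon scaled by the coordinate
     * magnitudes involved, so the same test behaves the same for a 1mm model
     * and a 1km model. */
    static bool nearly_equal_v3_relative(const float a[3], const float b[3], const float rel_eps)
    {
      float scale = 1.0f; /* floor, so the threshold never collapses to zero near the origin */
      for (int i = 0; i < 3; i++) {
        scale = fmaxf(scale, fmaxf(fabsf(a[i]), fabsf(b[i])));
      }
      const float eps = rel_eps * scale;
      const float d[3] = {a[0] - b[0], a[1] - b[1], a[2] - b[2]};
      return d[0] * d[0] + d[1] * d[1] + d[2] * d[2] < eps * eps;
    }

    /* A deterministic ordering of two points, so that a test which is not
     * intrinsically symmetric can canonicalize its operands (swap if needed)
     * before doing any float math. */
    static int v3_canonical_cmp(const float p[3], const float q[3])
    {
      for (int i = 0; i < 3; i++) {
        if (p[i] < q[i]) return -1;
        if (p[i] > q[i]) return 1;
      }
      return 0;
    }

With a deterministic comparison like the second function, an intersection routine can always order its operands the same way, no matter which direction the caller iterated in.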

In general it may be good to ensure there is e.g. a single “point in face” function that is used everywhere. Maybe that’s just scratching the surface of precision problems though…

OK, certainly seems like switching to all doubles is a non-starter (as I suspected). Thanks for the input on performance.

About using a relative epsilon instead of an absolute epsilon - I’m not convinced. I do believe that the epsilon should be scaled to fit the overall scale of the model (or perhaps should take into account the scale of the scene that the model will be used in). But I think for any particular model+scene there is a length below which the user just doesn’t want anything in their geometry: tiny-area triangles, tiny displacements of vertices along edges, long thin slivers of triangles - these don’t add detail that users want and likely add unwanted artifacts when used in rendering. So I think that at some point, and that point is absolute for a given model/scene/camera, the algorithms should actively merge close verts and collapse very small triangles. Using relative epsilon tests everywhere may be OK, but I think it would fail to do some merges that we actually want to happen.

About using sorting to ensure consistent tests: yes, that can help. And as I said earlier, the current BMesh Boolean code does a bunch of that. But I think it still leaves one feeling a bit uneasy: will this solve all the problems, or are there some hidden consistency problems that we are missing? I think it is that kind of fear that pushes academics who work on this subject to just say to themselves: the only way I can be sure is to use exact arithmetic. They may be right, or they may be being a bit lazy. Hard to know for sure, but the fact that they can find bugs and crashes in every other boolean system they try indicates there is something going on that needs to be fixed in all commercial implementations of Boolean operations.

As a bit of an experiment, I’m trying to implement another version of Boolean operations in Blender based mostly on http://www.cs.columbia.edu/cg/mesh-arrangements/mesh-arrangements-for-solid-geometry-siggraph-2016-zhou-et-al.pdf . After thinking about the arithmetic problems for a while, the approach I’m going to take first is to use double arithmetic, merging of verts that are within a (user-specified, but maybe automatically calculated and suggested) epsilon, and careful tests along the way with that epsilon in mind - rather than the exact arithmetic that paper uses. I’m writing the code so it is easy to switch back and forth between double and float, so I can compare later. If I’m really ambitious, I might try an exact arithmetic version too.


An issue with boolean (and maybe bevel too) is that it’s important to make calculations at a higher precision than the input data.

So for a correct result a certain set of cuts might need to be made; however, if those locations are already so close together that the limits of float precision mean the vertex calculations (and possibly even the placement) cannot represent the geometry, the result becomes unreliable (the inside/outside calculation may fail, or the output order of locations may be wrong).

The Cork boolean library (used in production by Cícero Moraes) quantizes locations.
Since this library is known for its robust operations, I think it’s worth taking this approach seriously.
It may be technically possible to work around the limits of float precision, but I’d like to see evidence - Carve has issues with overlapping surfaces too.

Here’s a test patch that adds an option to Blender’s booleans that quantizes floats by any number of bits. I tested this and it does fix T47108, although it didn’t resolve all issues in the tracker relating to overlap.
(Edit: the patch offsets each side of the boolean; that part isn’t so important - it’s easy to quantize both sides evenly.)

The problem with this is that the mesh is degraded by quantizing its locations. On the other hand, if we use double precision for booleans, then the 32-bit float input is effectively already quantized, so that alone may be worth considering.
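
For what it’s worth, quantizing a float to a given number of mantissa bits can be done along these lines (this is just a guess at the kind of operation such a patch performs, not the actual patch):

    #include <math.h>

    /* Keep only `bits` significant bits of the mantissa, rounding to nearest. */
    static float quantize_float(float f, int bits)
    {
      int exponent;
      float mantissa = frexpf(f, &exponent);        /* f = mantissa * 2^exponent, 0.5 <= |mantissa| < 1 */
      const float scale = ldexpf(1.0f, bits);
      mantissa = roundf(mantissa * scale) / scale;  /* drop the low-order mantissa bits */
      return ldexpf(mantissa, exponent);
    }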

This raises the question of how to use doubles for some specific operations. While it’s possible, I can only think of changes that are disruptive.

Have the BLI_math source files use a floating point type which can be declared (call it real, for example?), then define single and double versions of the functions using includes and defines.

This is done already for single/double linked list sorting, see:

Doing this for a large portion of BLI math is going to be much more disruptive, although I think it could be done without being too noisy.

  • every double version just gets _db suffix.
  • math functions use a signature:
    void M_REAL_FN(madd_v3_v3)(real a[3], const real b[3]) { ... }

I am uneasy about recommending this, but I think it would be workable if having double versions of our math libs (also kdopbvh) is important.
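
As a minimal sketch of how that include/define trick might look (the file names and function body here are made up; only the M_REAL_FN idea and the _db suffix come from the list above):

    /* math_vector_impl.h: the generic implementation, included once per precision.
     * The including file defines `real` and M_REAL_FN before each include. */
    void M_REAL_FN(madd_v3_v3)(real a[3], const real b[3])
    {
      /* body is just illustrative */
      a[0] += b[0];
      a[1] += b[1];
      a[2] += b[2];
    }

    /* math_vector.c: instantiate both precisions. */
    #define real float
    #define M_REAL_FN(name) name          /* float versions keep their existing names */
    #include "math_vector_impl.h"
    #undef real
    #undef M_REAL_FN

    #define real double
    #define M_REAL_FN(name) name##_db     /* double versions get the _db suffix */
    #include "math_vector_impl.h"
    #undef real
    #undef M_REAL_FN

In spirit this is the same multiple-inclusion trick as the single/double list sorting mentioned above, just applied to a much larger surface.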

Booleans can’t work much better by simply using double precision; such an approach only changes the scale at which the usual issues appear.

The point is that intersection computation using exact math still fails, whatever the precision, when geometry is very close but ends up on the wrong side. Fixed precision and grid approaches suffer from the same issues.

Using “fuzzy”, less strict math for the intersection computation, able to take “close enough” elements into account, is the key to getting the most robust result, even with single precision.

Another point is whether booleans should work on tris or on quads/polygons; the tri option is by far the more reliable and might simplify the implementation.

The current implementation in bmesh already works on tris (with a pass afterwards to dissolve edges to attempt to get, as much as possible, to the original faces). As well as precision issues, a problem I am trying to address with the code I am working on is making it do what users expect when things (vertices, edges, faces) overlap or almost overlap each other.

I’ve been struggling with single precision for some time now. It seems single precision only holds up for scenes where the characters/vehicles/etc only exist within a 1000m area. Once you go outside of that, rigs and props and all sorts of things begin to break down and jitter.

My short on YouTube, Star Wars: X-Wing (which Ton actually tweeted about, thank you Ton!), needed a massive space to animate in. I unfortunately discovered the scene scale limitation because of the scope of the project. In fact, for most of my projects this 1000m scene scale plagues me. I’m always wanting to have characters in vehicles traveling at fast speeds, and this limit is taxing. For my X-Wing short film, I ended up having to make the environment go through the ships in order to get my idea to work. It was a nightmare to organize.

As far as I know Maya, Houdini, and I think even UE5 have double precision now, so I’m wondering when Blender is going to join the club. I’m one of the animation supervisors at Barnstorm VFX, and not having double precision means we have to use Maya instead of Blender on some of our bigger movies. It would be really cool to keep Blender more relevant. Has there been any update on this topic since 4 years ago?

Earlier, Brecht wrote:

    If we had C++ and operator overloading, vector classes, templates, … then it would be easier to switch over some parts but that’s not so likely to happen in the Blender C code.

One thing that has happened in the last year is that we now do have a decent start at templatized C++ math, so it would probably be a little easier to convert to double now. But still hard: all those epsilon tweaks to find and adjust!

I guess one could convert to floats to get into the drawing and rendering code, but then we may end up tripling current memory usage (twice for doubles and another for a copy in floats, maybe).

As a side point, unrelated to this latest question: my hope expressed previously in this thread that I could make a robust boolean with doubles turned out to be a false hope, alas.


What I believe Unreal is doing is having all object matrices and associated data in double precision, and vertex coordinates in single precision. And then for OpenGL for example you end up rendering in camera space in single precision, or collision between two objects may be in single precision in the local space of one of the objects or somewhere near it, etc.

That kind of approach seems doable without too much of an impact on memory and performance. Also would not have to template so much, though still requires a ton of work, including carefully choosing the spaces to do operations in.
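
A tiny sketch of what that could look like (all names are hypothetical; this illustrates the idea, not Unreal’s or Blender’s actual code): object and camera matrices stay in double, vertex coordinates stay in float, and only the camera-relative result is rounded back to float.

    typedef struct { double m[4][4]; } Mat4d;  /* row-major, translation in the last column; layout is illustrative */

    static void mul_m4d_v3d(double r[3], const Mat4d *mat, const double v[3])
    {
      for (int i = 0; i < 3; i++) {
        r[i] = mat->m[i][0] * v[0] + mat->m[i][1] * v[1] + mat->m[i][2] * v[2] + mat->m[i][3];
      }
    }

    /* Transform a float vertex into camera space, doing the large-magnitude part in double. */
    static void vertex_to_camera_space(float r_out[3],
                                       const float co_local[3],   /* vertex as stored: float, object space */
                                       const Mat4d *obmat,        /* object -> world, double */
                                       const Mat4d *worldtocam)   /* world -> camera, double */
    {
      const double co[3] = {co_local[0], co_local[1], co_local[2]};
      double world[3], cam[3];

      mul_m4d_v3d(world, obmat, co);        /* huge world-space translations live here, in double */
      mul_m4d_v3d(cam, worldtocam, world);  /* ...and mostly cancel out here */

      /* Only the camera-relative coordinates, which are small again, get rounded to float. */
      r_out[0] = (float)cam[0];
      r_out[1] = (float)cam[1];
      r_out[2] = (float)cam[2];
    }

The key property is that the subtraction of two large, nearly equal world-space positions happens in double, before anything is rounded to float.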


Throwing in my 2 cents: maybe implementing chunks could help with this issue in the meantime?
For example, a 10x10 grid where every cell has its own chunk origin.

Is there a resource you can point me to for chunking in Blender? I’ve heard of this being done in Unreal, but I wasn’t sure if it was possible in Blender.
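
To make the chunk-origin idea above concrete, here is a rough sketch (nothing like this exists in Blender as far as I know; the names and the 1000m cell size are made up): positions are stored as an integer cell index plus a small float offset relative to that cell’s origin, so precision no longer depends on the distance from the world origin.

    #include <math.h>

    #define CHUNK_SIZE 1000.0  /* meters per chunk cell; the number is arbitrary for illustration */

    typedef struct {
      int chunk[3];    /* which cell of the grid the point lives in (the "chunk origin") */
      float local[3];  /* small offset inside that cell, safe to keep in single precision */
    } ChunkedPosition;

    static void world_to_chunked(ChunkedPosition *r, const double world[3])
    {
      for (int i = 0; i < 3; i++) {
        r->chunk[i] = (int)floor(world[i] / CHUNK_SIZE);
        r->local[i] = (float)(world[i] - r->chunk[i] * CHUNK_SIZE);
      }
    }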