Moving transforms and matrices to double precision

In VFX, many companies work with layouts at real-world scale. Depending on the movie, you can have scenes where the action takes place (literally) miles from the origin. Add a cm-per-unit scale on top of that and even basic sculpting tools become unusable.

On a recent movie I worked on, I had to pop models back to the origin on a regular basis just to perform trivial operations because of this, and it’s very common on almost every show.


Physics engines have similar problems with precision in e.g. large open-world games. Doing all physics in relation to some object would be impractical, so instead the world is typically divided into grid cells. When the player moves away from the origin into another cell everything gets “teleported” so that the new cell becomes the origin. All calculations can still use plain object coordinates without requiring an explicit reference point. Of course this relies on the specifics of games with a player camera that defines the “region of interest” and may not be applicable to movie environments.


A long time ago I came across this article from Tom Forsyth about precision:
http://tomforsyth1000.github.io/blog.wiki.html#[[A%20matter%20of%20precision]]

Yes, it can become messy very fast.

In one case (full CG) we had a boat traveling at close to real-world speed, interacting with large animals, which required a water sim plus multiple characters doing things on the boat. Not ideal when you can’t, or aren’t supposed to, cheat things too much because of stereo shots and the like.


This is the case when dealing with large scans of streets or environments. The sets are built at real-world units, and sometimes you also get camera data from those locations, so it is not as easy as moving things to the center. As Dan2 mentioned, things can get quite complicated in real production pipelines.

I had this issue before in Eevee.

The near clipping distance of the camera seems to influence the coplanarity of faces in the scene. For larger scenes, such as a city, you have to increase the near clipping distance.

In my case, I had objects located both close and far away (around 500m). To address this, I rendered my scene twice with different near clipping settings (e.g., 0.05m and 5m). Then, I composited the two renders using either the mist or the depth pass.

This technique also improved aliasing in the final result.


I agree with Brad here. For me the only time the issue comes up is after parenting, and I’m usually very picky about my transformation values, which is when I started noticing it. If there were a way to fix this occurrence related to Parent Without Inverse and Apply Parent Inverse, the performance impact wouldn’t be an issue. Maybe it’s possible to let just this part run in double precision? I’m just looking for a simple solution that doesn’t break everything else. But maybe it’s just me being silly.

For the issue with large scale environments, it’s only really the position component that needs to be in double precision. To save memory for instances, we could store only that part as doubles. For object transforms we also don’t need to store the 4th row of the matrix, since it’s not a perspective matrix, so we can store all of that in the same space as a 4x4 float matrix.

For precision issues with parenting, and issues we’ve had with armatures in the past, I think for every case there is probably a solution that does not involve doubles. But solving all those issues in practice is very hard, and trying to chase down every problem may just take too much time.


So does this mean that if I have an object 2 km from the origin that I’m editing, the vertices will not have precision issues? For example, when trying to snap one vertex to another, etc.

Just to make sure - this would not change anything about problems with small scale physics sims like cloth, right?

It doesn’t eliminate all possible precision issues, but this is a case I would expect to work quite well with double precision matrices.

Probably not; I think most issues like that in physics are not necessarily about digits of precision.

If you don’t mind, would this be applicable to pose bones as well? What about other object types, such as lattices? They’re often parented to rigs for low-frequency deformation (head shape, eyes…). In short, would it solve the character animation case?

Assuming all modifiers and rendering involved are implemented with this idea, then it should work as well for animation as for modelling. What works at 1 m from the origin now should work at 536,870,912 m from the origin with doubles.


Ok, sounds great!
Honestly, it’s hard to argue against a 53,687,091,200% improvement


Well, luckily right now we just need to solve the parent apply code.

Yeah, certainly we can immediately mitigate issues like that by switching to doubles just for the internal computations of relevant operations. And that might also give an initial (very) rough idea of some of the performance impacts we might see. So that’s probably a good first step anyway.

But ultimately it would be good to address these issues a bit more holistically. So I’m hoping we can get aligned on a plan for that.

So far it sounds like the general thrust of switching transform matrices to doubles is something people are on board with, pending concrete numbers on the performance/memory impact.

@brecht’s suggestion of switching to a 4x3 affine matrix for transforms, and storing only the translation component as doubles, is also interesting. Although that wouldn’t address the parenting rotation issue.

We could also switch to a 4x3 affine matrix regardless, with the benefit that an all-doubles 4x3 matrix is only 1.5x larger than the current 4x4 float matrix, rather than 2x. And knowing statically that it’s an affine transform can potentially make things like inversion a bit more efficient.


The translation portion only as doubles won’t work, unfortunately. Don’t trust me; go try it out.

Quaternions are a good example: if you do the math in floats you end up with a bad singularity a few degrees wide on the far side. In doubles, the singularity is so small that it feels like you can get right next to 180 degrees backwards without a pop.

I love the 4x3 idea, all doubles. It’d keep error accumulation from rotation, scale, and shear much lower. I also like the idea of keeping vertex positions as floats, seeing as most of them will be relative to their transform origins. I think it’s reasonable to educate people about moving transforms as opposed to moving points off way into the distance.

For Armatures-- I haven’t tried recently, so I don’t know if the issues with moving the object transform have been solved. I’d be up for having bone values calculated as doubles, provided that there’s an API for extracting them as float matrices for the GPU. But it’d be extra helpful for the Armature transform to be easily moved to keep character bone positions relative to that origin as well.


For reference, this is how the Godot developers handled the issue: Emulating Double Precision on the GPU to Render Large Worlds.
Not sure how applicable this is to Blender.
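The core trick from that article, in outline, is splitting each double into a high and a low float before upload, so the GPU can reconstruct precise camera-relative positions from the pair. A hypothetical helper for the CPU side:

```c
/* Split a double into two floats whose sum approximates it closely.
 * The low part captures the residual the high part rounded away.
 * Sketch of the "two-float" technique; not Blender or Godot code. */
typedef struct { float hi, lo; } SplitDouble;

static SplitDouble split_double(double v)
{
    SplitDouble s;
    s.hi = (float)v;                  /* coarse part, rounded to float  */
    s.lo = (float)(v - (double)s.hi); /* residual, small so float is ok */
    return s;
}
```

On the GPU, subtracting the camera’s own hi/lo pair before adding the parts back together keeps the intermediate values small enough for float math.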

I proposed rotation/scale in floats for instancing as a compressed representation for that specific case, where we know we could have millions of instances and are not likely to do precision-sensitive rotation/scaling math, but do need doubles for positions.

For regular objects or intermediate values in computations I would just use doubles to keep it simple.


It’s the same idea, just going into more implementation details about how it’s handled in rendering.
