Sorry to burst your bubble, but you cannot get 16 digits of precision for Blender vertex positions. Inside Blender, all coordinates are represented with the C “float” type (about 6 digits of precision), not the “double” type (about 16 digits of precision). Python, which you are using in the API, uses doubles for all of its non-integer numbers. So when you do sqrt(2) in Python you indeed get that value to 15 digits of precision. But if you assign it to a vertex coordinate and then read it back into Python, it will get truncated to about 6 digits of precision, and when it comes back out into a double, you get a different value. I just tried this:

```
>>> me.vertices[0]
bpy.data.meshes['Cube'].vertices[0]
>>> me.vertices[0].co
Vector((1.0, 1.0, 1.0))
>>> me.vertices[0].co = Vector((1.0, 1.0, sqrt(2)))
>>> me.vertices[0].co
Vector((1.0, 1.0, 1.4142135381698608))
>>> sqrt(2)
1.4142135623730951
```

Don’t be deceived by the fact that the coordinate of the Vector has digits that go beyond 6 – that is spurious-looking precision caused by converting from binary to decimal.
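The same round-trip can be reproduced outside Blender with Python’s `struct` module, which lets a Python double be packed into a 32-bit C float and back (a minimal sketch; `as_float32` is just an illustrative helper, not a Blender API):

```python
import math
import struct

def as_float32(x: float) -> float:
    """Round-trip a Python double through a 32-bit C float."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

print(math.sqrt(2))              # 1.4142135623730951 (double precision)
print(as_float32(math.sqrt(2)))  # 1.4142135381698608 (what Blender actually stores)
```

The second value matches the console output above exactly: the trailing digits are just the decimal expansion of the nearest representable 32-bit float, not extra precision.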

It is very fundamental throughout a large amount of Blender’s C code that coordinates are floats, and while there are occasional threads discussing changing this to double, that is unlikely to happen at this time (there are a bunch of cons to balance out the pros).

7 Likes

Eeeeek! I stand corrected, but 1 micron is still good precision when using 1 Blender unit per metre, I’ll stick with that as you can always use 1000 Blender units per metre…

2 Likes

Man, it has been a long time indeed since I tried to learn programming. I didn’t remember double and float and used int and float instead, lol. Thanks for the clarification.

1 Like

Unfortunately I can’t find the commit anymore, but I remember that some months ago someone added a double-precision floating point library for greater precision… I don’t remember if it was @ChengduLittleA or @jacqueslucke.

Maybe they are using it for a specific area. Using it across all of Blender would be a huge milestone that will take time. I think I read a discussion about it among the devs, and they said it would cause a significant jump in memory use that may not be justified, considering it’s only going to be useful for CAD? I don’t remember for sure though…

1 Like

Some people (including me) have been adding double versions of some of the math and geometry routines into Blender’s internal library. These are only used right now for intermediate calculations, for greater precision as you say. When results get stored back into the permanent geometry in Blender, it is still always as floats.

For some discussion on why going to all doubles would be problematic in terms of memory and performance, see Arithmetic types in Blender

5 Likes

Do we really need to go beyond 6 decimal places, one micron at 1 Blender unit per metre? If you set many Blender units to a metre, for example, you are increasing your accuracy by one order of magnitude for each zero you use. I don’t see this as a problem for CAD applications.

Please feel free to shoot me down in flames, if you disagree!

Cheers, Clock.

Keep in mind that we’re counting significant figures. Single precision floats can accurately represent 1.00001 and 10000.1, but not 10000.00001.

I like to use the mid-point of the precision range — 100.001 — as a point of reference for unit scaling, but I don’t have any strong technical arguments as to why.
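The ~7-significant-figure ceiling is easy to see by round-tripping numbers through a 32-bit float (a minimal sketch using `struct`; `as_float32` is just an illustrative helper):

```python
import struct

def as_float32(x: float) -> float:
    """Round-trip a Python double through a 32-bit C float."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

print(as_float32(1.00001))      # 1.0000100135803223 — error ~1e-8, fine
print(as_float32(10000.1))      # 10000.099609375    — still 6 sig. fig. accurate
print(as_float32(10000.00001))  # 10000.0            — the 1e-5 is gone entirely
```

In the last case the nearest representable floats around 10000 are about 0.001 apart, so the extra `0.00001` is simply rounded away.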

2 Likes

I should have qualified my remark – yes, I agree entirely, 1 micron accuracy on a part up to 9 metres long is more than enough. For tiny parts, you can increase accuracy by setting, say, 1 metre = 0.001 Blender units, within the confines of 6 sig. fig. Then 1 Blender unit = 1 mm, and effective accuracy increases by three orders of magnitude.

2 Likes

On an entirely different matter, we have now introduced a “Reset Views” function to PDT View Control:

This resets the view locations, scales and orientations back to the Blender Factory Default - how it looks when you open a new project - so if you get hopelessly lost in your project you can at least get back to the initial view layout. This works for randomly rotated views and orthographic views.

Cheers, Clock.

5 Likes

The above has now been committed and tagged in the github repo as v1.1.6 and merged to the official Blender addons/ repo.

5 Likes

This is also still a common problem in Rhino. Yes, many places use double precision when possible, but that often breaks down when doing large models with tiny units - say a tanker in mm. Precision then starts to suffer when you’re joining surfaces, resulting in invalid polysurfaces. This is commonly known as the far-from-origin problem.

3 Likes

Hello Sir, this gives me a thought. Back in the 1980’s (yes, I am that old), we had similar problems with only having single-precision floats. If I remember correctly (bearing in mind my age), we would overcome the problem by modelling all the components around the world origin, then making “cells” of them, thus adding them to a “cell library”, or component library if you prefer. These could then be added into an assembly drawing and moved to a known integer location.

So, you would work out on the assembly a location for the origin of the component that was an integer set, say 10000, 3000, 1000, and then draw the component with this location transposed to 0, 0, 0. Am I making sense here? This means that offsets of the component vertices relative to its origin are held as 8 sig. fig. millimetres, for example, but the location of the component in the assembly can be held as, say, 8-figure integers, although they were stored as floats of course. Just to qualify: 12312342 is 8 sig. fig., as is 9.987987, if my memory serves me well.

When you plotted the drawing of the overall assembly, any limits on the system calculating the 8 sig. fig. decimals against world units would be lost in the overall scheme, but were far less than the line width on the plot/screen view.

I believe, correct me if I am wrong, that Blender stores vertex locations relative to the object’s origin, not in overall world units relative to world centre. In this case we should be able to make tiny components for large assemblies. I obviously need to check this outrageous claim and will do so over the next few days.
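The payoff of storing vertices relative to a local origin rather than in absolute world coordinates can be sketched numerically (a hedged illustration using a `struct`-based float32 round-trip; the helper name and the 200 km figure are just for demonstration):

```python
import struct

def as_float32(x: float) -> float:
    """Round-trip a Python double through a 32-bit C float."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

offset = 0.123456    # metres: a vertex 123.456 mm from the component's origin
origin = 200_000.0   # metres: the component placed 200 km from world centre

# Stored as an absolute world coordinate, the fine detail is lost:
world = as_float32(origin + offset)
print(world)         # 200000.125 — snapped to a ~1.6 cm grid

# Stored relative to the component's origin, it survives almost intact:
local = as_float32(offset)
print(local)         # error below 1e-8 m — well under a micron
```

The same float type holds both numbers; only the magnitude of what it has to represent changes, which is exactly the cell-library trick described above.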

What I intend to do is make some components in a “Library” file, then add them into an “Assembly” file using PDT Library tools, and I will report on how it goes when I offset them a large distance from world centre. It will be interesting to see if two components that abut at world centre also abut when you move them a long way! Large components, say a train chassis, would not need to be drawn to a massive precision - 8 sig. fig. would be more than enough - so that could be done within the realms of standard tolerances, and tiny components on the train could also be accommodated.

We should not forget that everything that is to be manufactured must be subject to reasonable tolerances on dimensions relative to its overall size - there are standards for this…

Cheers, Clock.

EDIT:

It is interesting to work out the width of a plotted, or hand drawn, line on 0.5mm thickness at various plot scales, say 1 : 5, 10 : 1, 1 : 500, etc…

2 Likes

I’ve been playing around a bit with the PDT Design UI over at blenderartists.

2 Likes

You are correct.

This is how far-from-origin can be handled in Rhino: https://wiki.mcneel.com/rhino/farfromorigin

Anyway, in Blender you’ll quickly see jagged lines when you try to create huge-scale scenes. It is mostly a visual problem in that case, because mesh data still uses the object centre as its relative origin, but it is quite a big issue nonetheless when trying to model in such a file.

1 Like

Yes, that is sort of what I was saying, so I tried my experiment:

Two half spheres of 1m radius moved 200,000m

Then moved another 200,000m.

The actual geometry face display is still OK-ish, but the graphics display is totally FUBAR. Here are the edges, definition has been lost:

Interestingly the two objects still line up at the joint.

So, I guess we need to model tiny parts to dimensions, but use simplified, toleranced mesh for very large overall structures. Having said that, for the mechanical designer this is not a problem, as we don’t tend to make things 400 km across too often; for architecture at world coordinates it is a problem, so survey data may have to be geo-shifted to world centre - a small price to pay for not having to buy expensive software?
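The experiment above matches what the float32 format predicts: the gap between adjacent representable floats grows with magnitude. This can be checked directly by nudging the bit pattern (a minimal sketch; `float32_spacing` is an illustrative helper, not a Blender function):

```python
import struct

def as_float32(x: float) -> float:
    """Round-trip a Python double through a 32-bit C float."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

def float32_spacing(x: float) -> float:
    """Gap to the next representable 32-bit float above x."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits + 1))[0] - as_float32(x)

print(float32_spacing(1.0))        # ~1.19e-07 m — sub-micron near the origin
print(float32_spacing(400_000.0))  # 0.03125 m   — ~3 cm steps 400 km out
```

At 400 km from the origin, vertex coordinates can only land on a ~3 cm grid, which is why the display falls apart there while the two abutting objects (whose offset error is shared) still line up.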

Cheers, Clock.

8 Likes

hey @clockmender I want to try to replicate this W18 Solenoid electric motor with your tool, do you think I will be able to?

1 Like

Not sure, but I could, if you give me some drawings.

Cheers, Clock.

2 Likes

I have had a request to change the colour of buttons when you hover the mouse over them, and/or to grey out other buttons that cannot be used with the button in question. Is this possible? Searches on the ’net reveal no answers.

Here is a mockup of what the user wanted:

Cheers, Clock.

1 Like

A note about precision: metal manufacturers usually work to a tenth of a millimetre, glass manufacturers to about 0.01 mm, and some 3D printers can do 0.023 mm. Really special precision manufacturing (car engines, optics) is around 0.001 mm; a human is not able to fit parts beyond 0.025 mm - that requires robotic precision.
Formats such as FBX, STEP and IGES can also notate tolerances, as required for crafting precision tools. STLs, blends and OBJs don’t have that, which is the reason the CNC industry is not yet willing to use them, even though several people here use Blender in industry, since it has such a rich editor. Speaking as a maker, I hope one day Blender will truly support STEP as well. I don’t think it’s far beyond the hard-surface modelling we do today.

3 Likes