Now that Unreal has showcased its upcoming Unreal Engine 5, which appears to use a triangle-based take on the Atom View software by Nurulize (which Sony bought a couple of years ago), maybe it’s time for a complete 3D workflow overhaul.
The general concept is that face and triangle counts can now be effectively limitless, because triangles are dynamically generated from a much denser mesh so that there are never more triangles in memory than there are pixels on screen. If ten triangles cover one pixel, then a single triangle, an average of the underlying ten on disk, is streamed into the engine.
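The averaging step above can be sketched in a few lines. This is a toy illustration of the idea (collapse every triangle that lands in one pixel into a single averaged triangle), not the actual clustering algorithm UE5 or Atom View uses; the function name is made up:

```python
# Toy sketch: collapse all triangles that project into one pixel into a
# single "average" triangle, so the on-screen triangle count never
# exceeds the pixel count. Illustrative only, not the real algorithm.

def average_triangles(triangles):
    """triangles: list of 3-tuples of (x, y, z) vertices.
    Returns one triangle whose vertices are the per-corner means."""
    n = len(triangles)
    avg = []
    for corner in range(3):
        x = sum(t[corner][0] for t in triangles) / n
        y = sum(t[corner][1] for t in triangles) / n
        z = sum(t[corner][2] for t in triangles) / n
        avg.append((x, y, z))
    return tuple(avg)
```

Feed it the ten triangles under a pixel and you get back the one triangle that actually needs to reach the GPU.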
This means there is no longer any need to worry about topology, LODs, low-poly models, or high-to-low-res baking. Geometry can be so dense that textures can be stored as vertex colours and benefit from the same dynamic streaming (so no large textures need to be moved to the GPU). Alternatively, if an object’s point cloud has fewer points than a texture has pixels, additional points could be generated on the fly when the user zooms in beyond the density stored on disk. I also think Blender would run MASSIVELY faster, because the viewport would only ever need as many points as there are pixels in any one frame, while the edits themselves could be applied to the full point data on disk.
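The streaming rule described above boils down to a simple budget: stream at most one point per pixel the object covers, and synthesize extra points only when the screen demands more than the disk holds. A minimal sketch (the function and its return shape are my own invention for illustration):

```python
def point_budget(stored_points, pixels_covered):
    """How many points to stream for an object: never more than the
    pixels it covers on screen. If the screen demands more points than
    exist on disk, report how many extras must be synthesized."""
    streamed = min(stored_points, pixels_covered)
    synthesized = max(0, pixels_covered - stored_points)
    return streamed, synthesized
```

A ten-million-point statue covering 5,000 pixels streams only 5,000 points; a sparse 2,000-point object zoomed to cover 5,000 pixels needs 3,000 synthesized points to fill the gap.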
Considering Atom View is similar to Euclideon’s technology, in which an underlying algorithm generates one point per pixel from massively dense point clouds, probably using something similar to Blender’s BVH or NVIDIA’s RTX acceleration structures, maybe it’s time we started modelling with points instead of polys, verts and edges. We could model directly against that hierarchy: the points being edited would always map to one pixel each, so zooming into a model would allow fine detail to be sculpted, while increasing the brush radius at the same zoom would manipulate more points (but never more than the screen resolution).
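The "one point per pixel" selection can be sketched with a hierarchy whose inner nodes cache the average of everything beneath them: descend only while a node still projects larger than a pixel, then emit the cached average. This structure and traversal are hypothetical (Euclideon’s and Atom View’s actual data layouts are proprietary):

```python
# Hypothetical hierarchy for one-point-per-pixel streaming: each inner
# node caches the averaged point of its whole subtree.

class Node:
    def __init__(self, point, extent, children=()):
        self.point = point        # averaged (x, y, z) for this subtree
        self.extent = extent      # world-space size of the node
        self.children = children

def collect(node, pixels_per_unit, out):
    """Descend until a node projects to <= 1 pixel, then emit its
    cached average point instead of all the points beneath it."""
    if node.extent * pixels_per_unit <= 1.0 or not node.children:
        out.append(node.point)
    else:
        for child in node.children:
            collect(child, pixels_per_unit, out)
```

Zoomed out, the traversal stops near the root and emits a handful of points; zoomed in, it descends toward the leaves, but only for the part of the tree on screen.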
Hard-surface modelling could still be achieved by drawing curves or shapes onto the surface and then moving the curve/shape to dynamically generate more points to accommodate the new shape being described, controlling falloffs and bevels with handles along the drawn curve.
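The falloff handles described above amount to a weight curve over distance from the drawn shape: points right on the curve move fully, points at the edge of the radius not at all, and the profile in between controls the bevel shape. A minimal sketch, with made-up profile names:

```python
def falloff_weight(dist, radius, profile="smooth"):
    """Weight for displacing a point `dist` away from a drawn curve.
    Inside `radius` the weight blends from 1 down to 0; `profile`
    picks the bevel shape (names are invented for illustration)."""
    if dist >= radius:
        return 0.0
    t = dist / radius
    if profile == "linear":
        return 1.0 - t          # hard, conical falloff
    # smoothstep: gives a rounded bevel instead of a sharp crease
    return 1.0 - (3 * t * t - 2 * t * t * t)
```

Dragging a handle along the curve would just swap or reshape this profile per segment, and the nearby points would be regenerated with the new weights.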
So one decimated, one-point-per-pixel version would be what you see on screen, while the full-density version sits on disk.
It would also mean that all areas of 3D would be unified, because there would be no further need for edges, faces or vertices, just points. Mesh editing could use all the same tools as smoke, cloth and water simulations, and vice versa.
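Unifying everything on points means one primitive carrying whatever attributes a given tool cares about. Here is a speculative sketch of such a layout, where the same operation serves both a fluid solver step and a sculpt drag (the attribute names are my own choice, not any existing engine’s):

```python
from dataclasses import dataclass

@dataclass
class Point:
    """One unified primitive for meshes, smoke, cloth and fluids:
    everything is a point with attributes (a speculative layout)."""
    pos: tuple
    color: tuple = (1.0, 1.0, 1.0)      # texture lives in vertex colour
    velocity: tuple = (0.0, 0.0, 0.0)   # used by sims and brush drags alike
    mass: float = 1.0

def advect(points, dt):
    """The same operation serves a fluid step or a sculpt drag:
    move each point along its velocity."""
    for p in points:
        p.pos = tuple(x + v * dt for x, v in zip(p.pos, p.velocity))
```

A smoke solver, a cloth solver and a grab brush would all end with this same advection pass; they would differ only in how they compute the velocities.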
Sculpting would be crazy fast with so little data being moved around.