New workflow ready for next gen game engines and film

Now that Unreal has showcased its upcoming Unreal 5 engine, which seems to use a triangle-based version of the Atom View software by Nurulize, which Sony bought out a couple of years ago, maybe it’s time for a complete 3D workflow overhaul.

The general concept is that faces and triangles can now be effectively limitless, because they’re dynamically generated from a much denser mesh, so that there are never more triangles in memory than there are pixels on the screen. If ten triangles on disk cover one pixel, a single triangle that averages those ten is streamed into the engine.
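A minimal sketch of how that screen-bounded streaming could work (hypothetical function names, brute-force rather than the hierarchical BVH traversal a real engine would use): project the dense on-disk data into screen space, bin it by pixel, and hand the renderer one averaged representative per occupied pixel.

```python
import numpy as np

def stream_one_point_per_pixel(points, colors, view_project, width, height):
    """Collapse a dense point set to at most one averaged point per screen pixel.

    points: (N, 3) world-space positions from the full-resolution data on disk
    colors: (N, 3) per-point colours
    view_project: function mapping (N, 3) world positions to (N, 2) pixel coords
    """
    px = np.floor(view_project(points)).astype(int)
    on_screen = (px[:, 0] >= 0) & (px[:, 0] < width) & (px[:, 1] >= 0) & (px[:, 1] < height)
    px, pts, cols = px[on_screen], points[on_screen], colors[on_screen]

    # One bin per pixel; every source point landing in a pixel contributes
    # to the single averaged point that actually gets sent to the GPU.
    bins = px[:, 1] * width + px[:, 0]
    order = np.argsort(bins)
    bins, pts, cols = bins[order], pts[order], cols[order]
    unique_bins, start = np.unique(bins, return_index=True)

    avg_pos = np.array([seg.mean(axis=0) for seg in np.split(pts, start[1:])])
    avg_col = np.array([seg.mean(axis=0) for seg in np.split(cols, start[1:])])
    return avg_pos, avg_col  # never more points than pixels on screen
```

A real implementation would do this hierarchically with a BVH or octree so the full data never has to be touched every frame, but the budget is the same: at most one streamed element per pixel.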

This means there is no longer a need to worry about topology, LODs, low-poly models, or high-to-low-res baking. Geometry can be so dense that textures can be stored as vertex colours and benefit from the same dynamic streaming (meaning no large textures to move to the GPU). Alternatively, if the point cloud for an object has fewer points than a texture has pixels, additional points could be generated to accommodate the user zooming in beyond the density stored on disk. I also think Blender would run MASSIVELY faster, because the viewport would only ever need as many points as there are pixels in any one frame, while the actual point editing could be applied to the full point data on disk.
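As a rough illustration of the “textures live in the points” idea (a hypothetical layout, not Atom View’s actual on-disk format): colour is just another per-point attribute, so the same one-point-per-pixel streaming thins shading detail and geometry together.

```python
import numpy as np

# Hypothetical on-disk point record: colour is a per-point attribute, so
# there is no separate texture to upload to the GPU, and the
# one-point-per-pixel streaming pass decimates "texture" detail and
# geometry at the same time.
point_dtype = np.dtype([
    ("position", np.float32, 3),
    ("color",    np.uint8,   3),   # "texture" stored as point colour
    ("radius",   np.float32),      # splat size, useful when zoomed past disk density
])

cloud = np.zeros(1_000_000, dtype=point_dtype)  # full-density data stays out of core
```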

Considering Atom View is similar to Euclideon, in which an underlying algorithm generates one point per pixel from massively dense point clouds (probably using something similar to Blender’s BVH, or Nvidia’s RTX), maybe it’s time we started modelling with points instead of polys, verts and edges. We could model directly on the BVH: the points being edited would always equal one pixel, so zooming into a model would allow fine detail to be modelled, while increasing the brush radius would manipulate more points (but never more than the screen resolution).
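A small sketch of that brush behaviour, assuming the viewport already holds at most one streamed point per pixel (all names hypothetical):

```python
import numpy as np

def brush_selection(visible_point_ids, screen_xy, brush_center, brush_radius_px):
    """Pick the editable points under a screen-space brush.

    visible_point_ids: indices of the (at most width*height) points currently streamed in
    screen_xy:         (M, 2) pixel coordinates of those points
    brush_center:      (2,) cursor position in pixels
    brush_radius_px:   brush radius in pixels

    Because the viewport only ever holds one point per pixel, the selection
    can never exceed the number of pixels the brush covers, no matter how
    dense the model is on disk.
    """
    dist = np.linalg.norm(screen_xy - brush_center, axis=1)
    return visible_point_ids[dist <= brush_radius_px]
```

Edits made to these on-screen proxies would then be written back to the full-density data on disk.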

Hard surface modelling could still be achieved by drawing curves or shapes onto the surface and then moving the curve/shape to dynamically generate more points to accommodate the new shape being described, controlling falloffs and bevels with handles along the drawn curve.
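Something like the following could stand in for that curve-driven edit; the smoothstep falloff is a crude stand-in for the bevel/falloff handles (all names hypothetical, operating only on the streamed screen-resolution points):

```python
import numpy as np

def displace_along_curve(points, normals, curve_pts, depth, falloff_radius):
    """Hypothetical hard-surface edit: pull points near a drawn curve along
    their normals, with a smooth falloff standing in for the bevel handles.

    points:   (N, 3) streamed, screen-resolution point positions
    normals:  (N, 3) their estimated normals
    curve_pts: (K, 3) samples of the curve drawn on the surface
    depth:    signed displacement at the curve itself
    falloff_radius: distance at which the edit fades to nothing
    """
    # Distance from every point to the nearest curve sample.
    d = np.linalg.norm(points[:, None, :] - curve_pts[None, :, :], axis=2).min(axis=1)
    # Smoothstep falloff: full effect on the curve, zero beyond the radius.
    t = np.clip(1.0 - d / falloff_radius, 0.0, 1.0)
    weight = t * t * (3.0 - 2.0 * t)
    return points + normals * (depth * weight)[:, None]
```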

So it would look like this in the viewport, but it would actually be this on disk.

It would also mean that all areas of 3d would be unified, because there’d be no further need for edges, faces or vertices, just points. So mesh editing could use all the same tools as smoke, cloth, and water simulations, and vice versa.

Sculpting would be crazy fast with so little data being moved around.


I thought they were using a version of the meshlets technology.

I don’t know for sure; I’m just guessing because of Sony’s involvement with Atom View. I think the above would still be applicable regardless, though.

I don’t know the technical details, but I think you can find a blog post by an Epic developer explaining the technology; it’s based on meshlets and AMD’s variant of mesh shaders (I think mesh shaders is the name Nvidia gives them).

Basically they divide the mesh into pieces of 32 triangles and render them through a new, fast pipeline.
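A simplified sketch of that grouping step (naive splitting only; real meshlet builders for mesh shaders also cap the number of unique vertices per meshlet and reorder triangles for locality):

```python
def build_meshlets(triangle_indices, max_triangles=32):
    """Naively split a triangle list into meshlets of at most `max_triangles`.

    triangle_indices: list of (i0, i1, i2) vertex-index tuples.
    Each meshlet stores its own small local vertex list plus triangles
    re-indexed into that list, so the GPU can process meshlets independently.
    """
    meshlets = []
    for start in range(0, len(triangle_indices), max_triangles):
        tris = triangle_indices[start:start + max_triangles]
        local_vertices = sorted({v for tri in tris for v in tri})
        remap = {v: i for i, v in enumerate(local_vertices)}
        local_tris = [(remap[a], remap[b], remap[c]) for a, b, c in tris]
        meshlets.append({"vertices": local_vertices, "triangles": local_tris})
    return meshlets
```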


What is real is that Blender currently has a lot of problems working with high-poly meshes (100k+), and that kind of density will be the standard in a few months.


Yep, just checked the Nvidia website, and meshlets can indeed be generated from point clouds:

Other use-cases not shown above include geometries found in scientific computing (particles, glyphs, proxy objects, point clouds) or procedural shapes (electric engineering layouts, vfx particles, ribbons and trails, path rendering).

So a completely unified point-based 3D modelling and simulation workflow would be compatible, would finally remove the need to consider topology, and would make it possible to work on unlimited detail without any viewport slowdown.

There’d be no need for manual normals either, as they’d be calculated automatically from the relative position of the surrounding 8 points.
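For illustration, normals are commonly estimated from a point’s neighbours by fitting a local plane; this sketch uses the k nearest neighbours (eight here, matching the post) and takes the covariance eigenvector with the smallest eigenvalue as the normal:

```python
import numpy as np

def estimate_normal(points, index, k=8):
    """Estimate a normal for one point from its k nearest neighbours
    by fitting a local plane.

    The normal is the eigenvector of the neighbourhood's covariance
    matrix with the smallest eigenvalue, i.e. the direction in which
    the local surface varies least.
    """
    p = points[index]
    dists = np.linalg.norm(points - p, axis=1)
    neighbours = points[np.argsort(dists)[1:k + 1]]  # skip the point itself
    centered = neighbours - neighbours.mean(axis=0)
    cov = centered.T @ centered
    eigenvalues, eigenvectors = np.linalg.eigh(cov)  # eigenvalues ascending
    return eigenvectors[:, 0]                        # smallest-variance direction
```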

Here’s the Nurulize Atom View I mentioned. It streams only the points required to make the render to the GPU in real time, using a BVH or something similar to act as a Google for points: no textures, all point data.

The render-time face generation would also be a great addition to Cycles for point cloud rendering.