Note that I have some data in the viewer, but it's lost after realizing. Maybe the patch is not in the build yet? I tried the December 16, 02:21:44 build - b265b447b639
It would help if you said what exactly you're trying to achieve, because using data on instances is very limited. For performance reasons, you can turn off Realize Instances in the viewport and turn it back on for the render:
That would be the same; in a complex scene it would still exhaust system memory.
The thing is that we may need to store information on instances, like a color attribute per instance for example: not modifying the geometry per se, but storing some data on every instance.
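For context, the data model being asked for looks roughly like this. This is a plain-Python sketch, not Blender's actual API; the class names and the `attributes` dictionary are hypothetical stand-ins for the idea of per-instance data:

```python
# Hypothetical sketch of instancing with per-instance data: every
# instance references the same shared mesh, while attributes such as
# color live on the instance itself, so the geometry is never modified.

class Mesh:
    def __init__(self, vertices):
        self.vertices = vertices

class Instance:
    def __init__(self, mesh, offset, attributes=None):
        self.mesh = mesh                      # shared, unrealized geometry
        self.offset = offset                  # per-instance transform
        self.attributes = attributes or {}    # e.g. {"color": (r, g, b, a)}

cube = Mesh(vertices=[(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)])
a = Instance(cube, offset=(0, 0, 0), attributes={"color": (1.0, 0.0, 0.0, 1.0)})
b = Instance(cube, offset=(2, 0, 0), attributes={"color": (0.0, 0.0, 1.0, 1.0)})
```

Realizing would duplicate the mesh for every instance; keeping the attribute on the instance is exactly what avoids that memory cost.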
So that's not possible yet? In this tutorial Bradley uses some custom nodes to achieve that in 3.0, but I thought the devs added the option to do it in 3.1.
@MichailSoluyanov I'm trying to get the same result without realizing the instances; we should be able to store attributes on them and not on the mesh.
Hi, I used this part of the node tree to generate UVs on the procedural wires/ropes. If I used the closed circle profile I had a wrapping issue on the UVs, so I opened the circle and snapped the last vertex to the first by transferring the position using indices:
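The index-based position transfer described above can be sketched in plain Python (illustrative only, not node-tree code; the function name is made up):

```python
def snap_last_to_first(positions):
    """Index-based transfer: copy the position of vertex 0 onto the
    last vertex, so the opened profile visually closes while the two
    vertices keep distinct UV values (avoiding the U-wrap artifact)."""
    snapped = list(positions)
    snapped[-1] = snapped[0]  # transfer position: last <- first, by index
    return snapped

# An "opened" circle profile: the last vertex sits just short of the first.
profile = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0), (0.999, -0.001)]
closed = snap_last_to_first(profile)
```

The point of the trick is that the geometry looks closed while the UV attribute stays non-cyclic, so no interpolation happens across the seam.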
I wonder if there is a way to write the normals just like I did with the position, and get rid of the hard edge. (There is no Merge by Distance yet, right?)
I mean, copying the normal of the last edge point to the first.
With the closed curve profile, it looks like the U value of the last vertex gets interpolated with the first, and it becomes visible as a "seam" on the extruded curve (I'll post a screenshot ASAP). That's why I wanted to break the profile and snap the vertices. The problem now is that I have a hard edge that I would like to fix by copying the normal; I'm not sure how to output the normal to the shader and use it. My goal is to copy the last vertex normal to the first one to hide the cut. But besides this particular use case, is a way to manipulate/write normals planned, just like we do with the position attribute? Or is it even possible right now and I'm unable to figure out how?
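The normal fix being asked for is the same index-based transfer as before, just applied to normals. A minimal plain-Python sketch of the idea (illustrative; averaging the two seam normals instead of a one-way copy would keep the shading symmetric, at the cost of slightly rotating both):

```python
def copy_seam_normal(normals):
    """Copy the last vertex normal onto the first, so the shader sees
    the same normal on both sides of the cut and the hard edge
    disappears. Same index-based transfer as the position snap."""
    fixed = list(normals)
    fixed[0] = fixed[-1]  # transfer normal: first <- last, by index
    return fixed

# Normals of an opened profile: first and last differ slightly at the cut.
normals = [(0.0, 1.0, 0.0), (-1.0, 0.0, 0.0), (0.0, -1.0, 0.0), (0.1, 0.99, 0.0)]
fixed = copy_seam_normal(normals)
```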
In theory, yes, because you look up the texture value for each vertex. Thus, you depend on the mesh resolution.
But if you first create points, and then remove points depending on the texture, the quality will depend on the density of the points instead.
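The difference between the two approaches can be sketched like this (plain Python; the radial `texture` function is a hypothetical stand-in for whatever texture drives the effect):

```python
import random

def texture(u, v):
    """Hypothetical procedural texture: a radial gradient,
    1.0 at the center of UV space, falling off to 0.0."""
    distance = ((u - 0.5) ** 2 + (v - 0.5) ** 2) ** 0.5
    return max(0.0, 1.0 - 2.0 * distance)

def scatter_then_cull(n_points, threshold, seed=0):
    """Distribute points first, then delete those where the texture is
    below the threshold. The gap quality now depends on point density,
    not on the resolution of any underlying mesh."""
    rng = random.Random(seed)
    points = [(rng.random(), rng.random()) for _ in range(n_points)]
    return [p for p in points if texture(*p) >= threshold]

kept = scatter_then_cull(2000, 0.5)
```

Doubling `n_points` refines the result directly, which is the advantage over sampling the texture per mesh vertex.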
Yes, I understand what you mean
After all, without islands of polygons, we cannot make a gap.
I forgot about this.
I think it will be easier to use the 3rd component of the UV vector as an inflection factor to create a sharper transition.
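One way that inflection factor could behave is as a gain-style curve: with w = 1 the 0..1 ramp stays linear, and larger w steepens the transition around the midpoint. A sketch under that assumption (plain Python; in practice the shader would read `w` from the third UV component):

```python
def sharpen(t, w):
    """Gain-style inflection: remap t in [0, 1] so that w = 1 is the
    identity and larger w makes the curve steeper around t = 0.5,
    producing a sharper transition."""
    if t < 0.5:
        return 0.5 * (2.0 * t) ** w
    return 1.0 - 0.5 * (2.0 * (1.0 - t)) ** w
```

So a single extra scalar packed into the UV vector is enough to control how abrupt the transition looks, without changing the mesh or point density at all.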