glTF I/O feedback on bpy and technical open points

Hello,

I wrote a document with some open points regarding glTF / bpy usage / pain points.
It involves multiple modules (Anim/Rigging/Mesh/IO/Shader, etc…).
It would be great if devs could have a look at it.

Let’s start the discussion!


API to get a simplified node tree, to avoid having to handle entering/exiting node groups / reroutes

I added an issue for it here: #114806 - Shader node API to get inlined node tree - blender - Blender Projects
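
Until such an API exists, exporters have to walk the tree themselves. A minimal sketch of the reroute-following part (the helper name and material name are made up, and group handling is only hinted at):

```python
import bpy

def socket_source(socket):
    """Hypothetical helper: follow a linked input socket back to its real
    source node/socket, skipping reroute nodes. Group handling is only
    hinted at; a real exporter also has to map group input/output sockets."""
    while socket.is_linked:
        link = socket.links[0]
        node = link.from_node
        if node.type == 'REROUTE':
            # Reroutes have a single input; keep walking backwards.
            socket = node.inputs[0]
            continue
        # node.type == 'GROUP' would require descending into node.node_tree
        # and matching the group output socket -- omitted here.
        return node, link.from_socket
    return None, None

# Example: find what really feeds the Base Color of a Principled BSDF.
mat = bpy.data.materials.get("Material")  # placeholder material name
if mat and mat.use_nodes:
    bsdf = next((n for n in mat.node_tree.nodes
                 if n.type == 'BSDF_PRINCIPLED'), None)
    if bsdf:
        print(socket_source(bsdf.inputs["Base Color"]))
```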

API to get data from sockets

This sounds like an API for making automated texture baking easier. I think this would at least have to wait until we do the bigger planned refactoring for baking.

In some ways doing this as part of export is not ideal, since it’s quite slow and difficult to predict what the result will be like. We have some plans to add bake shader nodes that can either pass through when unbaked, or read the baked result. And then you can specify the desired resolution, UV map, etc. there.

If we have such bake nodes and a corresponding API, then perhaps it’s not a big step to make an API to create, bake and remove them.
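
For context, a rough sketch of the operator-based route an exporter would have to go through today, which is part of why doing this at export time is slow and awkward (assumes Cycles is available; the image name and size are placeholders):

```python
import bpy

# Rough sketch of today's operator-based bake route; the image name/size
# are placeholders, and Cycles is assumed to be available.
obj = bpy.context.active_object
nodes = obj.active_material.node_tree.nodes

# 1. Create a target image and an Image Texture node to receive the bake.
img = bpy.data.images.new("bake_target", width=1024, height=1024)
tex = nodes.new("ShaderNodeTexImage")
tex.image = img
nodes.active = tex  # the bake operator writes into the active image node

# 2. Bake through the operator; this renders with Cycles and is slow.
bpy.context.scene.render.engine = 'CYCLES'
bpy.ops.object.bake(type='DIFFUSE', pass_filter={'COLOR'})

# 3. Read img.pixels (or save the image), then clean up the temporary node.
nodes.remove(tex)
```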


Changes directly done on Blender addon repository

Generally when making API changes we try to make sure that the add-ons keep working as well, at least at a minimum level. I think that’s a good thing, and coordinating with add-on developers each time would add too much overhead.

Can you modify the script you have to somehow check for changes done in the Blender repo, compared to the last time you copied the code from upstream?
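
As a possible starting point, a small sketch of such a check using git; the commit hash and add-on path are placeholders for whatever the sync script records, and this assumes it runs inside a checkout of the Blender repository:

```python
import subprocess

# Hypothetical values recorded by the sync script when the code was copied.
LAST_SYNCED_COMMIT = "0123abcd"
ADDON_PATH = "scripts/addons_core/io_scene_gltf2"

# Ask git for anything that changed under the add-on path since that commit.
diff = subprocess.run(
    ["git", "diff", "--stat", f"{LAST_SYNCED_COMMIT}..HEAD", "--", ADDON_PATH],
    capture_output=True, text=True, check=True,
)
if diff.stdout.strip():
    print("Upstream changes since last sync:")
    print(diff.stdout)
```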

Breaking changes

I don’t really have a good practical suggestion for this.

We could add a label to pull requests when they contain breaking changes, and figure out some way to get notifications from that. But it wouldn’t necessarily give a lot of lead time. I don’t think we would want to add delays to merging pull requests for this.

We could have some Python API version that CI can compare with, but that would still be after the pull request got merged.

Fundamentally I don’t think a daily build is going to have much advance warning; it’s meant to be constantly changing and iterating towards the right functionality and API.


Validate() doesn’t check shape keys

I guess it would be relatively straightforward to implement this.
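
For reference, the current call; as the heading says, it only covers the mesh geometry and custom data layers, not shape keys:

```python
import bpy

mesh = bpy.context.active_object.data

# Current behaviour: validate() checks/repairs the mesh geometry and custom
# data layers, but shape key data is not covered.
if mesh.validate(verbose=True):
    print("validate() found and fixed problems")
```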

Shape keys can’t be retrieved when applying modifiers

Shape keys are effectively the first modifier in the stack and get applied immediately. It would make sense to have an option to preserve them, next to preserve_all_data_layers. For this we would need to do some bigger changes to shape key storage, to turn them into attributes, which is something we’ve wanted to do, but it’s not simple.
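
For reference, the current evaluated-mesh route where such an option would presumably live (the preserve-shape-keys flag itself is hypothetical and only mentioned in a comment):

```python
import bpy

depsgraph = bpy.context.evaluated_depsgraph_get()
obj = bpy.context.active_object
obj_eval = obj.evaluated_get(depsgraph)

# Shape keys are consumed as the implicit first "modifier" and are lost here.
# A hypothetical option to keep them would sit next to preserve_all_data_layers.
mesh = obj_eval.to_mesh(preserve_all_data_layers=True, depsgraph=depsgraph)
print(mesh.shape_keys)  # typically None: the shape keys were applied away

obj_eval.to_mesh_clear()
```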

Curve with modifiers
We currently can’t convert curves with modifiers to meshes.

I’m not sure why this is; as far as I know Cycles can render these and uses the same API?

Attributes
Is there a way to retrieve a list of attributes that are not automatically created (material idx, sharp, etc…), but are really created by the user?

You can skip attr.is_internal() and attr.is_required(), but that doesn’t cover these. There could be some API for these, but not sure what the right name is. attr.is_native() or something like that?
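
A small sketch of the filtering that seems possible today, assuming is_internal / is_required are exposed on attributes as mentioned above; the extra name set is my own guess at the automatically created layers and is not exhaustive:

```python
import bpy

mesh = bpy.context.active_object.data

# My own (non-exhaustive) guess at layers Blender creates by itself even
# though they are not flagged as internal/required.
AUTO_CREATED = {"material_index", "sharp_face", "sharp_edge", "crease"}

user_attributes = [
    attr for attr in mesh.attributes
    if not attr.is_internal
    and not attr.is_required
    and attr.name not in AUTO_CREATED
]
for attr in user_attributes:
    print(attr.name, attr.domain, attr.data_type)
```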

Storing custom data in .blend files

It’s not supported currently, but I think adding custom properties to bpy.data would make sense.
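
For comparison, this already works on ID datablocks today; the property name is just an example:

```python
import bpy

# Custom properties already work on ID datablocks, e.g. on the scene
# ("gltf_importer_settings" is just an example key):
bpy.context.scene["gltf_importer_settings"] = {"version": "2.0"}

# ...but not on bpy.data itself, which is what is being asked for here:
# bpy.data["gltf_importer_settings"] = ...   # not supported currently
```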


Perhaps simply using the “owners” functionality in the new gitea release would be enough?

This way any changes we make to those files in PRs will automatically tag the owners of those files.
We don’t have to wait for a review from them, but at least then they will have gotten an email notification that someone made changes to their code.

Switching action doesn’t reset channels that are no longer animated

It’s not clear to me what a good solution to this would look like, given how Blender currently works. Bones and objects can be positioned by the user without animating them, so always resetting non-animated channels would cause problems. Non-animated doesn’t necessarily mean non-posed.

I can imagine a hypothetical future Blender where always resetting could work, by only having rest positions and animation. So if a user wants to position something without animating it, they have to do so by adjusting its rest position. I don’t know if that would actually be a good design, however.
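
To illustrate the problem, a hypothetical helper that resets pose bones not keyed by the new action; as noted above, it would also clobber bones that were posed manually but intentionally left unkeyed:

```python
import bpy

def reset_unkeyed_pose_bones(obj, action):
    """Hypothetical helper: reset every pose bone the action does not key.
    It also illustrates the problem: a bone that was posed manually but
    never keyed would be clobbered by this reset."""
    keyed_bones = set()
    for fcu in action.fcurves:
        # Bone channels use data paths like 'pose.bones["Bone"].location'.
        if fcu.data_path.startswith('pose.bones["'):
            keyed_bones.add(fcu.data_path.split('"')[1])

    for pbone in obj.pose.bones:
        if pbone.name not in keyed_bones:
            pbone.location = (0.0, 0.0, 0.0)
            pbone.rotation_quaternion = (1.0, 0.0, 0.0, 0.0)
            pbone.rotation_euler = (0.0, 0.0, 0.0)
            pbone.scale = (1.0, 1.0, 1.0)
```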

Objects don’t have default TRS

Objects kind of have rest poses. They’re called “delta transforms”. But they’re a bit of a hidden feature, and it seems like almost no one knows about or uses them. And they also don’t quite work the way one might want, I think.
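
For reference, a minimal sketch of using them as an object-level rest pose (the values are arbitrary examples):

```python
import bpy

obj = bpy.context.active_object

# Delta transforms: a second, mostly hidden TRS combined with the regular
# object transform. Used here as an object-level "rest pose".
obj.delta_location = (0.0, 0.0, 1.0)
obj.delta_rotation_euler = (0.0, 0.0, 0.0)
obj.delta_scale = (1.0, 1.0, 1.0)

# The regular channels stay free for animation on top of that.
obj.location = (0.0, 0.0, 0.0)
```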

In any case, this is something the animation module does want to address eventually: give everything rest poses. Moreover, ideally we want to generalize this to the idea of reference poses, with the ability to have as many as you like per object. Then one reference pose could be used for deformations, another as the rest pose for animation, etc.

However, this is a long-term goal, and not on the immediate road map.

When baking, we are generating lots of keyframes that lead to heavy files at export.
Is there any centralized way to reduce the size by simplifying curves?

We’ve been talking about coming at this from the other side, by implementing some kind of “smart baking” that tries to use the minimum number of keys to produce the same motion, and intelligently places bezier handles, rather than just blindly keying every frame. But that’s also a longer-term target, and not on the immediate roadmap.

Density > Decimate, but accessible from the API with a list of values instead of an F-Curve?

If it’s just operating on an array of values, that seems more like a general algorithm to me rather than something that necessarily belongs in Blender’s Python API. Is the main motivation here to avoid the performance hit of a Python implementation?
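
For what it’s worth, a minimal sketch of the kind of general algorithm this could be, operating on plain lists and dropping samples that linear interpolation already reproduces within a tolerance (purely illustrative, not the Graph Editor’s Decimate):

```python
def decimate_samples(times, values, tolerance=1e-4):
    """Keep only samples that cannot be reproduced (within `tolerance`) by
    linearly interpolating between the previously kept sample and the next
    raw sample. Purely illustrative, not the Graph Editor's Decimate."""
    if len(values) <= 2:
        return list(zip(times, values))

    kept = [(times[0], values[0])]
    for i in range(1, len(values) - 1):
        t0, v0 = kept[-1]
        t1, v1 = times[i + 1], values[i + 1]
        factor = (times[i] - t0) / (t1 - t0)
        predicted = v0 + factor * (v1 - v0)
        if abs(predicted - values[i]) > tolerance:
            kept.append((times[i], values[i]))
    kept.append((times[-1], values[-1]))
    return kept

# A baked, perfectly linear channel collapses to its two endpoints.
frames = list(range(11))
print(decimate_samples(frames, [f * 0.5 for f in frames]))
```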

Is there a better way to get the world matrices of objects and bones than going frame by frame during export?

I think in theory the custom deps graph approach you mentioned is what you want here, so that Blender only computes the specific data you care about. But there are also limits to how granular the deps graph can be. And I’m not super familiar with the details, so maybe someone else can provide some insight there?
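
For reference, a sketch of the frame-by-frame baseline being discussed here; the frame range and the world-space bone matrices are just one example of what an exporter typically gathers:

```python
import bpy

# The slow frame-by-frame baseline: step through the range and read the
# evaluated matrices back. Frame range and object choice are just examples.
scene = bpy.context.scene
depsgraph = bpy.context.evaluated_depsgraph_get()
obj = bpy.context.active_object

object_matrices = {}
bone_matrices = {}
for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)
    obj_eval = obj.evaluated_get(depsgraph)
    object_matrices[frame] = obj_eval.matrix_world.copy()
    if obj_eval.pose:  # armatures: pose-space matrices brought to world space
        bone_matrices[frame] = {
            pbone.name: obj_eval.matrix_world @ pbone.matrix
            for pbone in obj_eval.pose.bones
        }
```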

Is there a better way to get the world matrices of objects and bones than going frame by frame during export?

I will look into this in the near future since it has been requested by animators a lot. It’s likely going to be an operator that uses a partial depsgraph to evaluate the object on the given range. I’ll make sure the result is usable through Python.

Thanks @brecht for your answer.
Here are some remarks, questions, and one more topic:

Should I open a task/todo ticket so it isn’t forgotten?

Thanks. is_internal() and is_required() are a good first step. Should I open a task/todo ticket to start a discussion on is_native()?

Should I open a task/todo ticket so it isn’t forgotten? (and tag Campbell?)

One more topic:
Is there any plan to manage KTX 2.0 textures in the future?
See KTX Overview - The Khronos Group Inc

Thanks!

Thanks @Cessen for your answer.

General note: I know that all these discussions are only hypothetical designs, and whatever the decisions are, they will be long-term implementations. Don’t worry about that, I will not ask whether this will be implemented next week :wink:

The concept of rest positions and animations sounds interesting.

I know that delta transforms can be used for rest poses, but very few people know about them. I don’t really want to use them in glTF I/O, as we have quite a large community of people using Blender for glTF I/O who are not experienced Blender users.
+1 for having rest poses for everything (or having them be part of reference poses)

Thanks @ChristophLendenfeld

Let me know if you need some feedback during your investigations!
