Ok, so here is one of my use cases:
Context
I work at a medium-sized studio (currently around 70 people) that has a pipeline centered around Maya (Arnold) and Nuke, and I am one of only two people I know there who use Blender, at least on a regular basis.
We work on various TV series, so time and budget constraints are basically a law of nature. In that scenario the studio cannot afford to pull technicians and devs away to deal with a software that is used marginally when there are already lots of fires to put out within the main pipeline environment. I imagine this situation is not some weird exception, given the gradual adoption Blender is experiencing across similar small- to medium-sized studios. What I mean is that I am basically working with what Blender offers out of the box, plus some 3rd party addons, but no custom tools developed in-house.
Workflow example:
So, this is how our renders come out of Maya and into Nuke:
We usually separate the renders into 2 to 4 layers, depending on the complexity of the shot. The composition usually follows this logic:
It is heavily oversimplified for the sake of clarity, but the green columns would be cryptomatte selection nodes (reading from the multilayer EXR at the top), some of them with a selection already created in the preconfigured templates for the scene. Things like hair, shadows, AO, BG, characters, etc. all require different render settings regarding sampling, frame numbers, matte objects, and shadow catching and casting, so they are rendered individually. Pretty universal basic stuff, really.
Besides that, there is often more than one char layer when there are many characters at different depths. One reason is so the comp department can have better access to clean cryptomatte selections (to avoid the edge artifacts that usually appear when heavy compositing is done, due to AA differences along the borders between overlapping objects), but also for scene optimisation. Whatever the reason, the cryptos have to be split too.
As you can see, such a workflow wouldn’t be feasible in Blender nowadays, because you would need to separate the renders via the compositor (which would mess with the metadata), and having it all inside a single .exr with A LOT of layers and passes would be a no-go, for many reasons: if one layer fails, everything has to be re-rendered; comp artists would have to load the full contents of massive files every frame even if they only wanted to preview one layer; etc.
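As a concrete illustration of why one file per layer matters: with separate outputs, a failed layer can be checked and re-rendered on its own, instead of re-rendering the whole shot. This is just a sketch of a hypothetical per-layer frame check; the `.NNNN.exr` frame-numbering pattern is an assumption about the pipeline's naming convention, not something Blender enforces.

```python
import re
from pathlib import Path

def missing_frames(layer_dir, first, last):
    """Return the frame numbers in [first, last] that have no rendered
    file in layer_dir. Assumes files are named like 'shot.0001.exr'
    (4-digit frame number before the .exr extension)."""
    done = set()
    for f in Path(layer_dir).glob("*.exr"):
        m = re.search(r"\.(\d{4})\.exr$", f.name)
        if m:
            done.add(int(m.group(1)))
    return sorted(set(range(first, last + 1)) - done)
```

With per-layer directories, only the frames this returns would need to go back to the farm for that one layer, leaving the other layers untouched.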
Another case:
Another problem is the inconsistency between the industry-standard cryptomatte implementation and Blender’s. I have dealt with it several times when doing mattepainting work (for which I normally use Blender), because I have to deliver a structure as close as possible to the one Maya creates, so the comp department can work efficiently within a single comp file.
In the case of cryptos, it becomes an issue because the compositor does not recognise the metadata. So I have to manually re-render (from the output panel) the minimum number of layers that I think we’ll need with cryptomatte, and create some custom masks if I need anything else. In the image above, the matte input could be done in Blender, and maybe even the BG and FG if, for a particularly special shot, they are treated as a mattepainting. But chars would almost always be rendered in Arnold, because they are integrated into the pipeline.
And, in terms of cryptomatte, it is kind of frustrating that, for an industry standard, the implementation differs not only from Maya’s in terms of . vs _, but within Blender itself depending on which panel the file gets output from. Because of all that, I usually end up creating custom masks with emission shaders manually, but it can be a pain, honestly.
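To illustrate the . vs _ mismatch: below is a hedged sketch of the kind of renaming step that ends up being done by hand or in pipeline glue. The channel names are placeholders, not the exact strings Maya/Arnold or Blender emit (those vary by version and by which panel writes the file); the helper just rewrites the final separator of a cryptomatte-style channel name so both sides agree.

```python
def normalize_channel(name, sep="."):
    """Rewrite the final separator of a cryptomatte channel name.

    Hypothetical example: 'CryptoObject00_r' -> 'CryptoObject00.r'.
    Splits on the last '.' or '_' and rejoins with `sep`, but only
    when the tail looks like an RGBA component, so plain pass names
    pass through untouched.
    """
    for old in (".", "_"):
        head, _, tail = name.rpartition(old)
        if head and tail in ("r", "g", "b", "a", "R", "G", "B", "A"):
            return head + sep + tail
    return name  # not a per-component channel; leave it as-is
```

Nothing fancy, but it is exactly the kind of trivial mapping that shouldn’t be necessary at all if both implementations followed the same naming convention.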