Good to see this redesign proposal; I can definitely see the shortcomings of the current system and how over-engineered it is for the gains you get.
The two Maya look-assignment pipelines I have worked on in the past had very similar philosophies despite growing as two independent, mature pipelines. Both functioned by simply storing a mapping of stripped-down geometry shape names (keeping only the hierarchy info of the shape when names weren't unique) to the shading engines assigned to them, so the assignments could be replicated on any geometry (plus any shader overrides in the scene, explained later on). Both stored this shader-assignment mapping as a JSON(ish) sidecar file next to the Maya ASCII file that contained the shader networks. Publishing geometry assets required artists to create a selection set of the transform groups that contained the geo. Similarly, for publishing Shaders assets, we listed the existing selection sets in the scene and artists picked which selection set they wanted to publish shaders for. We would then simply walk all the geo shapes under that selection set and store a dictionary mapping each shading engine to the shapes it was assigned to (i.e., green_shader_SG: plantShape, trunkShape).
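In essence the export side boiled down to something like this. A minimal pure-Python sketch, with made-up data: in production the shading-engine membership would come from maya.cmds queries against the published selection set, which I've left out here.

```python
import json

def strip_namespaces(node_name):
    """Drop Maya namespace prefixes from each path component
    ("chars:tree|chars:trunkShape" -> "tree|trunkShape")."""
    return "|".join(part.rsplit(":", 1)[-1] for part in node_name.split("|"))

def build_assignment_sidecar(sg_to_shapes):
    """Map each shading engine to the stripped shape names it is assigned
    to. The input dict stands in for what querying each SG's membership
    in Maya would return."""
    return {sg: sorted(strip_namespaces(s) for s in shapes)
            for sg, shapes in sg_to_shapes.items()}

# Hypothetical scene data for illustration:
sidecar = build_assignment_sidecar({
    "green_shader_SG": ["look:plantShape", "look:trunkShape"],
})
print(json.dumps(sidecar, indent=2))
```

The sidecar was then just this dictionary dumped next to the published .ma file.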
On import of the Shaders asset (Maya ASCII + sidecar assignments file) the artist chose whether to (1) assign the shaders only to the current Maya selection (artists would usually just select the top transform groups where the geometry lived), (2) assign them globally to ALL matching geometries in the scene, or (3) import only the shader networks and skip assignments entirely. The import code imported the Maya ASCII shaders scene (with a unique namespace if desired, so it could be imported multiple times), parsed the sidecar file, found the equivalent just-imported SGs in the scene (ignoring imported namespaces) and, depending on the chosen mode, assigned the SGs to the matching geometry shape names (again ignoring namespaces, since the same asset could be imported multiple times and share the same Shaders asset). This whole workflow was pretty straightforward UX-wise: right after the geometry (or animated geometry caches) was imported it was left selected, so artists only had to import the Shaders asset and leave the default of “Assign to selection”
to assign the shaders, letting them assemble their scenes very quickly. In our equivalent “Scene Manager” we also exposed the shader-assignment code by simply selecting the “Shaders” asset and clicking an “Assign shader(s) to selection” button (pretty simple but effective, and the same framework expanded to any other common action you wanted to run on your scene assets). We also had a pipeline of “assembly”/“recipe” assets that stored the combination of all the dependency assets needed to render (geo + shaders + groom…), so artists only needed to import that “assembly” and it would bring everything in with all the required connections instead of importing the individual assets. This “assembly” pipeline had some other complexities that allowed writing geometry attributes on the static geometry which mapped onto the geometry caches at render time, so a full re-cache of the animated geo wasn't needed just to add some extra attributes… but that's a conversation for another day.
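The import-side matching (find the just-imported SGs, ignore namespaces on both sides) roughly reduces to something like this. Again a hedged sketch with hypothetical names; the actual `cmds.sets(shape, edit=True, forceElement=sg)` calls are omitted.

```python
def base_name(node):
    """Leaf name of a node path with any namespace stripped."""
    return node.split("|")[-1].rsplit(":", 1)[-1]

def plan_assignments(sidecar, imported_sgs, candidate_shapes):
    """Resolve which imported shading engine goes on which scene shape,
    ignoring namespaces on both sides. Returns {scene_shape: imported_sg}.
    candidate_shapes is either the shapes under the artist's selection
    or every shape in the scene, depending on the chosen mode."""
    sg_by_base = {base_name(sg): sg for sg in imported_sgs}
    plan = {}
    for sg_name, shape_names in sidecar.items():
        sg = sg_by_base.get(base_name(sg_name))
        if sg is None:
            continue  # SG missing from the imported .ma; skip it
        wanted = set(shape_names)
        for shape in candidate_shapes:
            if base_name(shape) in wanted:
                plan[shape] = sg
    return plan
```

With this in hand, the three import modes only differ in what gets passed as `candidate_shapes`.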
As I mentioned, this sidecar file also stored “shader overrides”. These were just selection sets containing Arnold shader-override attributes (i.e., subdivision, visibility, opaque…). The sets themselves + attributes were stored normally in the Maya ASCII scene (as empty sets), and the mapping of which nodes were members of each selection set was stored in the same sidecar JSON file (i.e., “subdivision_set: trunk_GRP, leafs_GRP”). The import logic mirrored the shading-engine assignments: (1) find the equivalent selection sets from the just-imported Maya ASCII scene, (2) find all the Maya nodes under the scene selection (or the whole scene if choosing global assignment) matching the names of the selection set entry in the sidecar file (i.e., trunk_GRP, leafs_GRP) and (3) add them as members of the selection set so Arnold would apply them when rendering.
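Step (2) of that override import is the same namespace-insensitive name matching again. A small illustrative sketch (hypothetical data; the `cmds.sets(..., addElement=...)` membership edits of step (3) are left out):

```python
def base_name(node):
    """Leaf name of a node path with any namespace stripped."""
    return node.split("|")[-1].rsplit(":", 1)[-1]

def resolve_set_members(set_sidecar, scene_nodes):
    """For each Arnold override set recorded in the sidecar, find the
    scene nodes (under the selection, or the whole scene) whose base
    names match the stored members."""
    result = {}
    for set_name, members in set_sidecar.items():
        wanted = set(members)
        result[set_name] = [n for n in scene_nodes if base_name(n) in wanted]
    return result
```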
For all this to work it was essential, when exporting the shaders .ma scene and the geometry caches, to strip namespaces (although we still exposed a “Strip Namespaces” option, defaulting to True, for the very rare edge cases where writing namespaces was necessary) and to remove as much hierarchy-name rigidity as possible, so the assignments could apply cleanly to any hierarchy or added namespace. A common source of issues with the node-name matching was that when riggers applied deformers to the geometry, Maya annoyingly appended a “Deformed” suffix to the shape nodes; those were easy to fix, and the assignment import code could probably add some extra logic to ignore “Deformed” suffixes as well. On the other hand, as part of the validation before publishing shaders we didn't allow per-face shader assignments, only whole shapes, which simplified the pipeline (no one should be doing face assignments anyway).
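That extra “ignore the Deformed suffix” logic could be folded into a single name-normalization helper, something like this sketch (the optional trailing digits are an assumption about how Maya disambiguates clashing deformed-shape names):

```python
import re

# Maya appends "Deformed" (possibly with a trailing number on name
# clashes - an assumption here) to shape nodes when a deformer is applied.
_DEFORMED = re.compile(r"Deformed\d*$")

def normalize_shape_name(node):
    """Normalize a shape name for assignment matching: drop the DAG path,
    any namespace, and the Deformed suffix
    (e.g. '|grp|rig:plantShapeDeformed' -> 'plantShape')."""
    leaf = node.split("|")[-1].rsplit(":", 1)[-1]
    return _DEFORMED.sub("", leaf)
```

Matching on `normalize_shape_name()` on both the sidecar side and the scene side would make the assignments robust to both namespaces and deformer renames.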
One thing to mention as well: the publish of “Shaders” assets was separate from the publish of “Textures” assets. Lookdev artists were expected to author shaders using textures already ingested and published into the AMS via our standalone texture ingest/publisher tool, which precomputed all the color correction and mipmapping (using maketx, like OP does on publish of the look). We also provided hooks to the same texture publish tool inside Maya, so in cases where a lookdev artist had built their lookdev without published textures they could easily publish them before publishing the shaders. The tool simply found all the non-AMS-tracked textures in the scene, ran the texture conversion as a background process, ingested/published them, and replaced all the file/texture nodes with the published textures. This meant the “Texture” assets were just dependencies of the “Shaders” asset and were imported independently as tracked assets when importing the “Shaders”, allowing artists to keep updating textures without versioning up the Shaders asset for every change.
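The background conversion step was essentially just building and running a maketx command line per texture, along these lines (flag names are from OpenImageIO's maketx; double-check against `maketx --help` for your build, and the color-space names are placeholders):

```python
def maketx_cmd(src, dst, colorconvert=None):
    """Build a maketx command line that mipmaps a texture into a .tx.
    colorconvert, if given, is a (source, destination) color-space pair,
    e.g. ("sRGB", "linear"), applied via OCIO."""
    cmd = ["maketx", src, "-o", dst]
    if colorconvert:
        cmd += ["--colorconvert", colorconvert[0], colorconvert[1]]
    return cmd

# The publish hook then ran this per texture in the background, e.g.
# subprocess.run(maketx_cmd("bark.exr", "bark.tx"), check=True)
```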
That was the “Maya shader pipeline”, but I also created a “DCC-agnostic” shader pipeline taking advantage of Arnold's plugins (so it wasn't renderer-agnostic) with an ASS + Operators framework that superseded the Maya pipeline in most cases. I haven't played with OP's implementation of Arnold StandIns, but I skimmed through it and I think it uses very similar concepts, so ignore me if this all sounds very obvious. During the same publish of the Maya ASCII shaders described earlier, I would create a set of Arnold “Set Parameter” operators behind the artists' backs that replicated the vanilla Maya shader assignments + shader overrides, using operator selection expressions that matched the names of the geometry nodes (i.e., “*plantShape”). It then exported all the shader networks + operator network as a single ASS file, so we had two different “representations”/“components” of the “Shaders” asset: the “ASS” and the “MayaASCII”. This way we could use the “Shaders” ASS file to work more performantly and plug it directly into the operator input of the Arnold procedural shape nodes (whether .ass, .abc or .usd), so it evaluated only within the namespace of the procedural in any of the different Arnold DCC plugins and gave very closely matching lookdev across them. (The exceptions were DCC-specific shaders that didn't exist in the other DCCs, which we fixed by injecting them into the ARNOLD_PLUGIN_PATH of the other DCC, and the occasional discrepancy due to unit-scale differences in the DCC scene.)
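Conceptually, each of those operators picks out geometry by matching its selection expression against node names. Arnold's real expression syntax is richer than a plain glob, but glob matching stands in for it well enough to show the idea (illustrative sketch, made-up paths):

```python
from fnmatch import fnmatchcase

def nodes_matched_by_operator(selection_expr, node_paths):
    """Sketch of how a set_parameter operator's selection expression
    (e.g. '*plantShape') picks out geometry nodes; because the match is
    by trailing name, the same expression keeps working when the
    procedural prefixes its own namespace onto the paths."""
    return [p for p in node_paths if fnmatchcase(p, selection_expr)]
```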
This “representation” also supported assigning the shaders to referenced/imported Alembic caches or Maya geometry, but in those cases the ASS file containing the shaders + assignments was either (1) plugged into Arnold's global operator namespace or (2) its contents were imported using Arnold's Python API, which translates “ASS” contents into the DCC's native nodes. This latter feature was also very powerful: artists could keep modifying the ASS shaders in any of the DCCs (even if the shaders were originally authored in a different DCC) and tweak the operators and overrides.
This same pipeline worked in Houdini for export and import as well (it could have worked in any other Arnold plugin, but we only used those two). Artists could do all their lookdev and author Arnold shading networks from Houdini if they preferred, make the vanilla Houdini shader assignments as usual, and the export/publish pipeline would translate those assignments into operators and bundle everything into a single .ass file interoperable with the one created in Maya, bridging lookdev and lighting between Maya and Houdini.
If I had to redesign these pipelines today I'd definitely first look at the current state of MaterialX and USDShade for storing both the shader networks and the assignments, so it isn't Arnold-specific. USDShade fully embraces MaterialX, so we wouldn't need to reinvent the wheel and write custom shader-assignment code for every DCC that supports USD. Luckily for us, Arnold has great MaterialX support, so the same pipeline I described of writing shaders + operator networks would work almost exactly the same with MaterialX, with very few code tweaks.
As for your proposal, I think the abstraction you suggest with (2) makes sense and could let the different processors share a lot of the same logic.
(3) and (4), while great quality-of-life features to have, aren't must-haves for an MVP of the redesign in my opinion. In our Shaders-asset import code in Maya, we would create a selection set called “nodes_with_shaders” (under the selection set of the Shaders asset that linked the asset to the AMS) containing all the nodes in the scene that had gotten that shader assigned (i.e., geometry StandIns, top transform groups of imported/referenced Alembic or Maya geometry). That made it pretty easy to check which Shaders assets were assigned to which geometry assets, and the code used it to re-apply shader assignments in cases where an asset version change required re-assignment. I guess this would cover some of the needs you describe.
As for (5), the pipeline I described was designed to allow cross-project assignments, but maybe I'm missing something.