Redesigning OpenPype's Maya Look assignment system

As part of refactoring Maya’s look assignment system to support name-based versus ID-based look assignments, I’ve been delving into how to improve the look system for both artists and developers.

Here are some things that I have on my list to investigate:

  1. Support name-based assignments versus ID-based assignments, as per Name vs ID

    • This matches more closely with how USD shader assignments work and how MaterialX intends assignments to work too.
  2. Implement the look system using LookProcessor plugins, where each plugin implements its own assignments based on the attribute and shading-engine relationship data. This would mean that the current VRayProxy, aiStandin and basic nodes support would turn into a VRayProxyProcessor, ArnoldStandinProcessor and NodesProcessor, which could later be extended with a USDProcessor, etc. (see the sketch after this list).

  3. Visualize the assignments a Processor does. This could be a simple log of the shader assignments and/or attribute changes it will make, preferably in such a way that it can list the edits before actually applying them.

    • Including showing the edits defined in the look file that will not do anything against your current selection.
    • An extra bonus would be being able to filter out the look edits that aren’t actually an edit, e.g. excluding shader assignments that are already assigned; however, that would make things more complicated to code.
  4. With 2) implemented, add the ability to highlight or identify the nodes in the scene or your selection that will (or will not) be affected by the look processors, so you can debug whether it’s influencing the nodes in your scene the way you’d expect.

  5. Support cross-project shader assignments (e.g. on models loaded through Library Loader), as per #2481. I wonder, however, how well this will hold up in production with paths from other projects, whether site sync / project dirmapping will understand them, etc. This will grow complex quickly with many edge cases. Would it be worth it?
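
To make 2) a bit more concrete, here is a minimal sketch of what such a plugin interface could look like. Everything here is hypothetical (the class names, the shape of the relationships dict); it is not existing OpenPype API:

```python
from abc import ABC, abstractmethod

from maya import cmds


class LookProcessor(ABC):
    """Hypothetical plugin interface for applying look relationship data."""

    @abstractmethod
    def is_compatible(self, node):
        """Return whether this processor knows how to handle `node`."""

    @abstractmethod
    def process(self, nodes, relationships):
        """Apply the shader assignments/attribute edits to `nodes`."""


class NodesProcessor(LookProcessor):
    """Handles plain Maya shapes; a VRayProxyProcessor, ArnoldStandinProcessor
    or future USDProcessor would implement the same interface."""

    def is_compatible(self, node):
        return cmds.objectType(node, isAType="shape")

    def process(self, nodes, relationships):
        # relationships: {"<shading engine>": ["<shape name>", ...], ...}
        for shading_engine, members in relationships.items():
            # Match by short name, ignoring DAG path and namespace.
            matches = [node for node in nodes
                       if node.rsplit("|", 1)[-1].rsplit(":", 1)[-1] in members]
            if matches:
                cmds.sets(matches, edit=True, forceElement=shading_engine)
```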


I would love to know which of these might be high on your priority list of feature requests too, or which of these you think are meh and would be redundant. Definitely point out other ideas, or propose solutions or different approaches that might make even some of these points obsolete via a much better solution.

Looking forward to your ideas!


I can’t speak much about the tech behind it, but from an artist’s POV what matters most, IMHO, are these things (some of them are present already, some not):

  • being able to assign a look / multiple looks in a single user action
  • being able to easily modify the look and to deviate easily from the original published look
  • having the option to publish a variant of the look from a particular shot scene so it can be reused / shared across the project easily (something like “MyAsset_Look_Sht_050”)
  • bulletproof assignment of looks for multiple asset classes (Geo, Xgen, ASS, VrayProxy… you name it)
  • pulling textures into the current project / altering file paths on demand, making external files local
  • a sort of viewport LOD for the look plus a high-fidelity rendering look, without the need to do it manually via look variants, sounds nice too! Or at least tools to quickly switch between them at any time

That’s probably all I can think of now :slight_smile: …there could be more for sure!

Could you elaborate?
How many steps is it now and how many do you imagine?

Don’t know if it’s possible now, but guessing we would want to load a look, then modify it (change some material attributes or even swap out a texture), then publish that look as a variant?

Something to look out for here would be cyclic dependencies, but that is something currently missing in validation for, for example, pointcaches as well. I.e. you load in pointcacheA and then (unintentionally) publish to pointcacheA, so updating pointcacheA becomes circular.

Yeah, nice. I think that would be the same as loading a look, modifying it and publishing it again.

Could you elaborate on this?

Could you elaborate on this?

How do you imagine this working from a user perspective?
We currently have some geometry swapping, for example for aiStandIn: a proxy in the viewport that is swapped at render time. Is it this kind of thing? Render-time look swapping?

Thanks for the write up @BigRoy !

All the items on your list are actually part of the requirements for the next iteration of the look assigner :slightly_smiling_face:

Requirements:

  • ID based connections.
  • Path/Expression based connections.
  • Better overview of how looks are connected.
    • Display what is going to happen for user to confirm.
    • Display why a look is not connecting, so the user can troubleshoot without coding.
  • Load looks from other projects (library).

Here are some additional questions to be discussed:

  1. Should it be scoped beyond just looks, to connections between loaded content in general, so it can be used for connecting caches or xgen/yeti?
  2. Would it be worth looking at a package family for lookdev publishing? Each component of a look gets published individually and the package holds the relationships between them.
  3. Use existing frameworks for material assignments? MaterialX/USD.

Some partial answers to the above:

  1. Yes, but it may not be part of the look assigner scope/ticket, so a design would need to be planned beyond the look assigner; maybe just implement it for looks for now.
  2. Might be interesting, but it might be too far beyond the scope of the look assigner revamp.
  3. We should try to utilize MaterialX as much as possible. In places where MaterialX cannot be used, we could parse the MaterialX data to load in.

Good to see this redesign proposal; I can definitely see the shortcomings of the current system and how over-engineered it is for the gains you get.

The two different Maya look assignment pipelines that I have worked with in the past functioned by simply storing a mapping of stripped-down geometry shape names (keeping only the hierarchy info of a shape if the names weren’t unique) to the shading engines assigned to them, for replicating the assignments on any geometry (and also any shader overrides in the scene, explained later on), and they had very similar philosophies despite growing as two independent mature pipelines. Both stored the shader assignment mapping as a JSON(ish) sidecar file next to the Maya ASCII file that contained the shader networks. The publish of geometry assets required artists to create a selection set of the transform groups that contained the geo. Similarly, for publishing the Shaders assets, we listed the existing selection sets in the scene and artists would pick which selection set they wanted to publish shaders for. We would then simply go over all the geo shapes under that selection set and store a dictionary mapping each shading engine to the shapes it was assigned to (i.e., green_shader_SG: plantShape, trunkShape).
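
To make that sidecar concrete, here is a hypothetical reconstruction of what such a file boiled down to (all names invented), expressed as the Python data it parsed into; the override sets are explained a few paragraphs further down:

```python
# Hypothetical sidecar contents: shading engines mapped to stripped-down
# shape names, plus the Arnold override sets described later on.
sidecar = {
    "shading_engines": {
        "green_shader_SG": ["plantShape", "trunkShape"],
    },
    "override_sets": {
        "subdivision_set": ["trunk_GRP", "leafs_GRP"],
    },
}
```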

On import of the shaders asset (Maya ASCII + sidecar assignments file), the artist had a choice whether they wanted to (1) assign the shaders only to the current Maya selection (artists would usually just select the top transform groups where the geometry lived), (2) assign them globally to ALL matching geometries in the scene, or (3) import only the shader networks and avoid doing any assignments. The import code imported the Maya ASCII shaders scene (with a unique namespace if desired, so they could import it multiple times), parsed the sidecar file, found the equivalent just-imported SGs in the scene (ignoring imported namespaces) and, depending on what they chose to assign the shaders to, assigned the SGs to the matching geometry shape names (again ignoring imported namespaces, as the same asset could be imported multiple times if desired and share the same shaders asset). A rough sketch of that step follows below.

This whole workflow was pretty straightforward in terms of UX for artists: right after the geometry (or animated geometry caches) was imported into the scene it was left selected, so they only had to import the shaders asset and leave the default of Assign to selection to assign the shaders, very quickly allowing them to assemble their scenes. In our equivalent “Scene Manager” we also had ways to run the shader assignment code by simply selecting the “Shaders” asset and clicking a button “Assign shader(s) to selection” (pretty simple, but effective, as the same framework expanded to any other common actions you wanted to run on your scene assets).

We also had a pipeline of “assembly”/“recipe” assets that would store a combination of all the dependency assets needed to render (geo + shaders + groom…) so artists would only need to import that “assembly” and it would bring in everything with all the required connections, instead of importing the individual assets. This “assembly” pipeline had some other complexities to allow writing geometry attributes on the static geometry that would map to the geometry caches at render time, so a full re-cache of the animated geo wasn’t needed if they just wanted some extra attributes… but that’s a conversation for another day.
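
A rough, non-verbatim sketch of that import-and-assign step in maya.cmds terms (the function and variable names are mine, not the original pipeline’s):

```python
from maya import cmds


def assign_shaders(sidecar, candidate_shapes):
    """Assign the just-imported shading engines to matching shape names."""
    scene_sgs = cmds.ls(type="shadingEngine")
    for sg_name, shape_names in sidecar["shading_engines"].items():
        # Find the equivalent imported SG, ignoring any import namespace.
        found = [sg for sg in scene_sgs if sg.rsplit(":", 1)[-1] == sg_name]
        if not found:
            continue
        # Match candidate shapes by short name, ignoring their namespaces too.
        matches = [shape for shape in candidate_shapes
                   if shape.rsplit("|", 1)[-1].rsplit(":", 1)[-1] in shape_names]
        if matches:
            cmds.sets(matches, edit=True, forceElement=found[0])
```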

As I mentioned, in this sidecar file we also stored “shader overrides” (Help). These were just selection sets that contained Arnold shader override attributes (i.e., subdivision, visibility, opaque…). The sets themselves + attributes would be stored normally in the Maya ASCII scene (as empty sets), and the mapping of which nodes were members of each selection set was stored in the same sidecar JSON file (i.e., “subdivision_set: trunk_GRP, leafs_GRP”). The import logic for this was like the shading engine assignments: (1) find the equivalent selection sets from the just-imported Maya ASCII scene, (2) find all the Maya nodes under the scene selection (or the whole scene if choosing global assignment) matching the names of the selection set entry in the sidecar file (i.e., trunk_GRP, leafs_GRP) and (3) add them as members of the selection set so Arnold would apply them when rendering.
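
Step (3) could look roughly like this, assuming the override sets were imported (empty) along with the shaders scene; again a hypothetical sketch, not the original code:

```python
from maya import cmds


def restore_override_sets(sidecar, candidate_nodes):
    """Re-add matched scene nodes as members of the imported override sets."""
    for set_name, member_names in sidecar["override_sets"].items():
        members = [node for node in candidate_nodes
                   if node.rsplit("|", 1)[-1].rsplit(":", 1)[-1] in member_names]
        if members:
            cmds.sets(members, add=set_name)
```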

For all this to work it was essential, when exporting the shaders .ma scene and the geometry caches, to get rid of namespaces (although we still exposed a Strip Namespaces option, defaulted to True, for the veeeery tiny edge cases where writing namespaces could be necessary) and of as much hierarchy-name rigidness as possible, so the assignments could apply easily to any hierarchy or added namespace. However, a common source of issues when using node name matching was when riggers would apply deformers to the geometry and Maya annoyingly appended a Deformed suffix to the shape nodes; those were easy to fix, and the assignment import code could probably add some extra logic to ignore Deformed suffixes as well. On the other hand, as part of the validation before publishing shaders we didn’t allow shader face assignments, only shapes… so that simplified the pipeline (no one should be doing shader face assignments anyway).
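
That extra logic for the Deformed suffix could be as simple as normalizing the shape name before matching, e.g. (hypothetical helper):

```python
import re


def normalized_shape_name(node):
    """Strip DAG path, namespace and Maya's 'Deformed' suffix for matching."""
    name = node.rsplit("|", 1)[-1].rsplit(":", 1)[-1]
    return re.sub(r"Deformed\d*$", "", name)
```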

One thing to mention as well: the publish of the “Shaders” asset was separate from the publish of “Textures” assets. It was expected that lookdev artists would author shaders using the already ingested and published textures in the AMS, via our standalone texture ingest/publisher tool that precomputed all the color correction and mipmapping (using maketx, like OP does on the publish of the look). We also provided hooks to the same texture publish tool inside Maya, so in cases where a lookdev artist had created their lookdev without using published textures, they could easily do so before publishing the shaders. The tool simply found all the non-AMS-tracked textures in the scene, ran the texture conversion as a background process, ingested/published them, and replaced all the file/texture nodes with the published textures. This meant that the “Texture” assets were just dependencies of the “Shaders” asset and were imported independently as tracked assets when importing the “Shaders”, allowing artists to continue updating the textures without needing to version up the shaders asset for every change.
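
The “find all the non-AMS-tracked textures” part could look something like this (a sketch; `published_root` and the notion of a single published area are my simplifying assumptions):

```python
from maya import cmds


def find_untracked_textures(published_root):
    """List (file node, texture path) pairs pointing outside the publish area."""
    untracked = []
    for node in cmds.ls(type="file"):
        path = cmds.getAttr(node + ".fileTextureName")
        if path and not path.startswith(published_root):
            untracked.append((node, path))
    return untracked
```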

This was the “Maya shader pipeline”, but I also created a “DCC-agnostic” shader pipeline taking advantage of Arnold’s plugins (so it wasn’t render-agnostic) with an ASS + Operators framework that superseded the Maya pipeline in most cases. I haven’t played with OP’s implementation of Arnold StandIns, but I skimmed through it and I think it uses very similar concepts, so ignore me if this all sounds very obvious. On the same publish of the Maya ASCII shaders that I described earlier, I would create a set of Arnold “Set Parameter” operators behind the backs of the artists that did the equivalent of the vanilla Maya shader assignments + set shader overrides, using operator expressions that matched the operators to the names of the geometry nodes (i.e., “*plantShape”). It then exported all the shader networks + the operator network as a single ASS file (so we had two different “representations”/“components” of the “Shaders” asset, the “ASS” and the “MayaASCII”). A sketch of that translation step follows below.

This way, we could use this “Shaders” ASS file to work more performantly and plug it directly into the Arnold procedural shape nodes (either .ass, .abc or .usd) on the operator input, so it would evaluate only within the namespace of the procedural in any of the different Arnold DCC plugins, and we would get very closely matching lookdev across them (as long as they didn’t use DCC-specific shaders that didn’t exist in the other DCCs, although in the cases where that happened we fixed it by injecting the DCC-specific shaders into the ARNOLD_PLUGIN_PATH of the other DCC; discrepancies also happened sometimes due to unit scale differences between the DCC scenes).

This “representation” also supported assigning the shaders to referenced/imported Alembic caches or Maya geometry, but in those cases the ASS file containing the shaders + assignments was either (1) plugged into the Arnold global operator namespace or (2) the contents of the ASS file were imported using Arnold’s Python API, which translates “ASS” contents to the DCC’s native nodes (Help). This latter feature was also very powerful, as artists could continue to modify the ASS shaders in any of the DCCs (even if the shaders were originally authored in a different DCC) and tweak the operators and overrides.
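
As a hedged illustration of that translation step, this is roughly what generating the set_parameter operator data from the sidecar mapping could look like (the wildcard/boolean selection expressions mimic the ones Arnold operators accept; the helper itself is hypothetical):

```python
def build_operator_assignments(sg_mapping):
    """Turn {"green_shader_SG": ["plantShape", ...]} into set_parameter data.

    Each operator gets a wildcard selection like "*plantShape" so it matches
    the geometry names regardless of namespace or procedural scope.
    """
    operators = []
    for shading_engine, shape_names in sg_mapping.items():
        selection = " or ".join("*%s" % name for name in shape_names)
        operators.append({
            "node_type": "set_parameter",
            "selection": selection,
            "assignment": ["shader = '%s'" % shading_engine],
        })
    return operators
```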

This same pipeline worked in Houdini for export and import as well (it could have worked with any other Arnold plugin, but we only used those two), and artists were able to do all lookdev and author Arnold shading networks from Houdini if they preferred: they would make the vanilla Houdini shader assignments as usual, and the export/publish pipeline would translate the assignments into operators and put these together in a single .ASS file interoperable with the one created in Maya, closing the bridge between lookdev and lighting in Maya and Houdini.


If I had to redesign these pipelines today, I’d definitely first look at the current state of MaterialX and USDShade for storing both the shader networks and the assignments, so it’s not specific to Arnold. USDShade fully embraces MaterialX, and we wouldn’t need to reinvent the wheel and write custom code to do the shader assignments for every DCC that supports USD. Luckily for us, Arnold has great support for MaterialX, so the same pipeline I described of writing shaders + operator networks would now work almost exactly the same using MaterialX, with very few tweaks to the code.
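
As a small taste of what the equivalent assignment data could look like with the MaterialX Python bindings (element names are made up, and exact API details vary between MaterialX versions, so treat this as a sketch):

```python
import MaterialX as mx

doc = mx.createDocument()

# Shader + material node (a real network would be far bigger).
shader = doc.addNode("standard_surface", "green_shader", "surfaceshader")
material = doc.addMaterialNode("M_green", shader)

# The assignment side: a look with a geometry name expression, much like
# the Arnold operator selections described above.
look = doc.addLook("default")
assign = look.addMaterialAssign("green_assign", material.getName())
assign.setGeom("*/plantShape")

print(mx.writeToXmlString(doc))
```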


As for your proposal, I think the abstraction you suggest with (2) makes sense, and it could be done in a way that lets the different processors share a lot of the same logic.

(3) and (4), while great quality-of-life features to have, aren’t must-haves in my opinion for an MVP of the redesign. In our shaders asset import code in Maya, we would create a selection set (under the selection set of the Shaders asset that linked the asset to the AMS) called “nodes_with_shaders”, to which we would add all the nodes in the scene that had gotten that shader assigned (i.e., geometry StandIns, top transform groups of imported/referenced Alembic or Maya geometry). That made it pretty easy to check which shader assets were assigned to which geometry assets, and to make use of it in the code to re-apply shader assignments in cases where an asset version change required a re-assignment of the shaders. I guess this would cover some of the needs you describe.
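
For reference, that bookkeeping set is trivial to reproduce in maya.cmds (the set name “nodes_with_shaders” matches the description above; the helper itself is a hypothetical reconstruction):

```python
from maya import cmds


def track_assigned_nodes(asset_set, assigned_nodes):
    """Create a child set recording which nodes got this look assigned."""
    tracker = cmds.sets(name="nodes_with_shaders", empty=True)
    cmds.sets(assigned_nodes, add=tracker)
    cmds.sets(tracker, add=asset_set)  # nest it under the Shaders asset's set
```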

As for (5), the pipeline that I described was fit to allow cross-project assignments, but I’m not sure if I’m missing something.

Really great stuff here. Just a few points to emphasize a few ideas:

With the abstraction, I fully agree with the concept of Processors, but we’ll need to design it carefully so that the same principles can be applied in all relevant hosts and renderers while still utilizing all the specifics of the DCCs/renderers.

Batch assignment is something I’ve already seen in one pipeline and I like it: with that you can automate loading of assets and apply shaders at the same time, probably something like what @fabiaserra mentioned as an assembly.

It should heavily use the link system in AYON; in short, a system where you can define typed relations between entities.

The question is - and this is a general idea we had during our internal discussions - whether we can have something like a “Connection Manager”, something that manages relationships (where shader ↔ mesh is only one type of relation), or whether that is too abstract and it is better to have one specialized tool to deal with looks only. I think code-wise we can still keep the abstraction level high enough and add different lower-level pieces to deal with USD shaders, JSON metadata files, MaterialX, and Arnold/Renderman specifics. That way, when adding support for some type of host/renderer (or connection type) we wouldn’t need to rewrite everything.

100 % :slight_smile:

What was the practical experience with this in production?
I would have thought you’d want a lookdev artist to “approve” the textures used, so textures are not published independently from the overall look.

Good points about needing to support Arnold operator graphs!

Do you imagine something like an assembly being different from loading multiple looks by matching geometry? Or even from loading looks through building a workfile from a template?

In an effort to progress the conversation on the redesign of the look assignment system, which would work with a generalized “Connector” UI, here is a wireframe design with some user-interaction thoughts: Miro | Online Whiteboard for Visual Collaboration

Any comments are welcome here or on the Miro board.


We are going to try to move this along with an initial plan and explanations of concepts. The tasks we need to cover should focus on moving away from using cbId for matching looks with meshes, in a gradual transition so current productions can use both the old and the new way of assigning looks.

These tasks are:

  • Path based assignments.
  • Asset relationship alternative.

Path based assignments

  • Store the full node path down to the transforms, ignoring shape paths.
  • The full node path from the publish has to match the tail of the candidate paths for assignment.
  • Potential for future enhancements with partial matching of the node path, all the way down to leaf-name matching.

Example:

/robot_GRP/legs_GRP/KneeSphere_L_GEO

Valid:

/_GRP/_grp_final/robot_GRP/legs_GRP/KneeSphere_L_GEO

/_grp_final/robot_GRP/legs_GRP/KneeSphere_L_GEO

/robot_GRP/legs_GRP/KneeSphere_L_GEO

Invalid:

/legs_GRP/KneeSphere_L_GEO

/fx_GRP/KneeSphere_L_GEO
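
A minimal sketch of that tail matching, assuming “/”-separated node paths (the real implementation would sit behind the swappable matching interface mentioned below):

```python
def matches_tail(published_path, candidate_path):
    """True if the published node path matches the tail of the candidate."""
    published = published_path.strip("/").split("/")
    candidate = candidate_path.strip("/").split("/")
    return candidate[-len(published):] == published
```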

We would replace the existing Assign menu item in the Look Assigner with two separate menu items, Assign by id and Assign by path, so the user can choose which method to use depending on the production.

In order to stay a bit more future-proof, we should aim to make the matching algorithm swappable, so we can later introduce more matching methods.

Asset relationship alternative
Currently, finding the looks of interest for a mesh is done by looking up the cbId, which contains asset id information. As an alternative, we would like to propose using input links to find looks. These input links are already starting to be relied upon in AYON, so they are a good fit for finding matching looks as well.

Since links can become quite complex, we have mapped out what it can look like on the Miro board (Miro | Online Whiteboard for Visual Collaboration), under Asset Relationship Exploration > Link based matching.

The links will get established when publishing anything and will show a dependency between versions.
When publishing looks specifically, we would also traverse the input graph to the root, often a model or several models, and establish a direct link from the model(s) to the look. This helps find branches of looks to present to the user.

When it comes to finding the looks to present to the user, we would traverse the input links. A breakdown of the link traversal can be found on the Miro board under Asset Relationship Exploration > Link Traversal.
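
As a rough illustration of that traversal (the accessor callables here are hypothetical placeholders, not actual AYON API; the real breakdown is on the Miro board):

```python
from collections import deque


def find_compatible_looks(version_id, get_input_links, get_linked_looks):
    """Walk the input links upward, collecting looks linked to any ancestor."""
    looks = list(get_linked_looks(version_id))
    seen = {version_id}
    queue = deque([version_id])
    while queue:
        for upstream in get_input_links(queue.popleft()):
            if upstream in seen:
                continue
            seen.add(upstream)
            looks.extend(get_linked_looks(upstream))  # direct model -> look links
            queue.append(upstream)
    return looks
```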

Tasks Breakdown

  • Path based assignments
    • Ensure paths are stored correctly, down to the transform level.
    • Path based matching function
    • Separate “Assign” menu item into “Assign by id” and “Assign by path”
  • Asset Relationship
    • Ensure links are established when publishing. Link to root of input chain.
    • Link traversal
    • Present looks in Look Assigner from links

Further development

This should put us in a good position to look at introducing the MaterialX file format and then move onto designing the frontend for a more universal Connector UI.


Yes. This will actually also make it easier with how Houdini exports attributes to Alembic (since it can’t export separate attributes to the shape AND the transform via Alembic; from Solaris you can, though, via USD).

Sounds solid - especially with a good visualizer to see what’s being identified and what matches what.

Asset IDs on meshes versus querying input links of publishes for an asset

I don’t think the links alone will suffice from a database perspective, from “representation” to “asset”, unless you’re storing an asset link per full geometry path for all publishes, everywhere. (Plus you could then actually also track that mesh X came from mesh Y in the input, etc.) Because otherwise you cannot detect the input link correctly: we have many rigs that contain many characters, we have many pointcaches that contain multiple assets, we have FX caches that contain many assets. There are so many cases where a simple link from the representation doesn’t suffice; we need to know about the individual entries within that data.

That is, if you want to rely on being able to ‘detect’ a link to the original geometry from any point in time:

  1. A single publish can contain geometry from multiple assets.
  2. Additionally, there’s definitely also merit in being able to duplicate or import a mesh and still be able to assign a shader to it. Not a full requirement, but we’ve definitely relied on it every now and then - especially when sandboxing or pitching a project where we need to move quickly and get e.g. a shot out in a day from start to finish to win a pitch.
  3. It’ll make it near impossible for a procedurally generated mesh in FX (which might not have a detectable input link to the original asset if that asset wasn’t used as an input) to “fake” being a particular asset’s mesh. We do that all the time → let this broken thing behave as the floor from the environment so shader assignments work.
  4. What happens if I created a model somehow based on another model - how do I know at which link to stop? At the “model” publish? But what if my initial model was a “pointcache” from FX to begin with?

I’d still recommend that as soon as you know which asset a model is related to, you imprint it as such.

It’s much easier (but especially much, much faster!) to read a value and find the right upstream link than to do a query for 1000s of meshes across the database, upwards across multiple levels, which in a production scenario could easily go model → rig → pointcache → corrective_fx → fx_dynamics → lighting scene. I’d argue it’s also easier to debug. It also makes it much easier to take a mesh and say ‘let this mesh behave as that asset’s mesh’. There are so many cases where this will be helpful.
It doesn’t even have to be per mesh or transform, but could be a particular group marking “this is the top group from which you’ll want to parse the hierarchy as if it’s the root of asset Y”. Actually, with path-based assignments you’ll likely even need to mark the root in the caches from which you’ll need to parse, right?


Thanks! We’ll workshop these use cases and see what we can come up with.

We want to store a compatible link type between versions, not between representations and assets.

As long as they have been created from pre-published models in the first place, you shouldn’t have a problem. We’ve modeled a case where you have, for instance, multiple model products going into a single rig and then shade the rig. Links still cover that.

Good point, supported with links.

This is a tricky one: if you have a mesh in the scene that is not containerized (for instance after duplication), then we could easily assign a look, but not really find the compatible looks in the first place. On the other hand, you could always just manually find the correct look and use the assigner only for applying it to the selected models.

But at that point it’s just a single mesh out of a bigger product, isn’t it? We’re not trying to keep info on each model, but rather per product version. So you should still know what the original non-FX product equivalent is to what you’re doing. That pointcache also goes down to a model somewhere, or might even have its own compatible link. The FX cache merely goes up the network and presents them all.

That being said, we’ll certainly also need a mode where you show all the looks from the current shot or asset, or linked ones, to make it easier to work around these edge cases.

Listing looks based on which asset a model belongs to is also quite limiting, and we’re hitting that limitation regularly. It doesn’t allow for shot overrides, and it doesn’t allow for easy cross-asset look sharing.

That being said, with links and GraphQL you can easily find which folder (in AYON terms) the product is associated with, simply by one or two traversals up the link network.

Not really, if you’re assigning from the tail end of the path; the root and the number of times a model might be grouped become irrelevant then.

All in all, solid use cases! Some we’ve considered already, others we’re adding to the list. Thanks.

How do they? That’d only work if you know which paths belong to which link, right?

rig
    hero
        model_GRP
            body_GEO
    enemy
        model_GRP
            body_GEO
    ninja
        model_GRP
            body_GEO

Say I have a single rig version that is indeed linked to Hero, Enemy and Ninja, but the models are embedded (they aren’t maintained references, etc.). So all we have is a rig file that happens to have 3 input links to separate assets. Now, which of the body_GEO entries belongs to which?

We’ll need to know exactly which path belongs to which asset. Right?

Unless we’re now somehow enforcing uniqueness to begin with, like namespaces, an enforced top group name, etc. But technically each of those “solutions” would still be marking it in some way, whether through attributes or not.

This is a tricky one: if you have a mesh in the scene that is not containerized (for instance after duplication), then we could easily assign a look, but not really find the compatible looks in the first place. On the other hand, you could always just manually find the correct look and use the assigner only for applying it to the selected models.

It’s sad to see this use case gone, but there might be opportunities for different workarounds; very often we’ve just ended up duplicating, deforming and whatnot for a pitch, to get things out quickly with a single artist.

But at that point it’s just a single mesh out of a bigger product, isn’t it? We’re not trying to keep info on each model, but rather per product version. So you should still know what the original non-FX product equivalent is to what you’re doing. That pointcache also goes down to a model somewhere, or might even have its own compatible link. The FX cache merely goes up the network and presents them all.

It could be meshes from two assets in a single FX publish (e.g. colliding). You’d still need to know which geometry path belongs to which asset.

This shows that, for the example case laid out above, I wouldn’t know which shader to assign to which body_GEO.

I actually see this the reverse way. We don’t need to search upstream all the time; only when we publish a shot-based look does it either get added as a subset to the original asset OR create a dedicated link to the asset (“hey, this look is available”) to explicitly register its availability, instead of going through upstream links for every query, each time.

The current lookdev system is already slow enough. :smiley:

Ouch, you actually have this in production? Quite frankly, we’ve never considered this level of mixing, and it’s actually something that would break in all assignment approaches apart from the explicit IDs we have now. However, if you do keep the information that ninja and everything below it belongs to the ninja asset, then you’re still able to assign correctly.

Of course, if you start mixing, then IDs are probably your only option. That’s kind of the whole point of path-based assignments: you win some, you lose some.

Btw, keeping the links to all the original models here is the easy part, provided you at least load them into the scene using the loader (even without referencing). If you don’t, then you’re back to ID-based assignments. It sounds like in your case you actually depend on them in many places. I have to say that we’ve never had a single request for these mixed cases; on the contrary, we’ve had a lot of requests to be able to “fake” the original hierarchy from different objects to ensure assignments.

I don’t see a reason for it to be gone completely. It’s totally viable to keep IDs as an option for super-explicit matching that a studio could opt to use. These systems are not mutually exclusive; we’re just trying to find a default system that is more universal and can be reused in DCCs other than Maya. That doesn’t mean we can’t get the best of both worlds.

Isn’t it reasonable to assume some manual assignment for these cases? How far should we take the complexity of the full system to support a few edge cases that probably require reshading anyway?

Correct, you’d need to structure your rig differently.

That’s exactly what we’re planning with the compatible link. When you create a shot look override on a cache, we can find its highest model ancestors (or multiple, if it’s made of many) and mark them as compatible with the new look.

Not much episodic and feature work on your side, I assume. This becomes unviable as the shot count grows and you start pushing new products into an asset even though they belong to a shot. Not a problem with 50 shots (questionably), but a major problem with 1000.

If we were unable to actually use the links across the board in the pipeline, and to do so efficiently, we might as well not have them. It is one of the most requested features across the board, and we will utilize it as much as is reasonable. Btw, using links for the discovery also potentially allows you to manually assign looks to models and caches without republishing. For example, if you make a tree with procedural shading and you publish that look, you could then publish 20 more tree models and just mark the first look as compatible with all of them, rather than relying only on what is stored in the mesh metadata itself.

Long story short, the discovery and compatibility is not a trivial task, and we’ll do our best to keep compatibility with existing workflows. On the other hand, what we’re doing now only supports a subset of workflows, and we need to make it more flexible.

Forgot to mention that performance is of course paramount, and before committing to this we’ll do extensive performance testing on the queries.

Well, yes. We have a project right now where a single rig basically has 21 characters that can be switched/toggled and mixed and matched. :wink:

But this exact same example also describes the FX workflow I mentioned, or just creating a “pointcache” from a Maya scene for a few props.

The issue is exactly the same with ANY publish that mixes more than a single source asset. A pointcache containing a simulation of a prop hat and a simulation of a character’s coat (like the prop_hat and character_hero assets) would run into the same issue. They could never end up in the same publish if look assignment is still to work.

Admittedly, they might just not have become “edge cases” on our end; whether that’s for the good or the bad, I don’t know. We’ve also had cases where we’d duplicate arms/legs or whatever is in motion to generate fake motion trails with extra deformation. All stuff that technically wouldn’t work with path-based assignments, since you’d have more paths than you initially started with - haha. No idea how that could be approached better.

Great - so register at publish time instead of traversing crazily upwards at query time. :+1:

Totally agree that it’ll be a “win some, lose some” situation. However, at this stage I’d say that about 60% of our publishes, and likely 90% of our pointcache publishes, contain more than a single source asset of geometry: lots of environments, “collections of animated assets that are not rigs”, actual rigs featuring multiple characters or props, or FX generating content for multiple assets (e.g. demolishing multiple buildings, etc.). That’s likely why my initial reaction to seeing all of those use cases break is a bit of a shock, if there’s no easy way to solve them down the line.

The same situation would occur when publishing e.g. an “environment” with a collection of assets. As soon as that environment becomes a single publish, a single file with multiple paths, you have exactly the case I described above: lots of paths in a single version for which you’ll now need to identify “hey, what’s the source of this particular geometry path?”. So you’ll need to start somehow linking path → source links.

Anyway, looking forward to prototypes and to learning about all the good things we’ll gain. Losing cbId management is also a big plus, especially for newcomers. We’ve used the system so long that we’ve gotten to know its flaws and learned to work with them. I hope we can grow as comfortable with path-based assignments and have them improve our workflow even more.

I’d say there is large value in keeping the ID as some kind of option, to be honest, so hopefully we’ll get to something that accommodates both types of work.

You’ll still need to traverse to find the ancestors of a cache, for example, if you want to present all compatible looks. There’s no way around that other than storing that info on the meshes as we do now. And that info gets lost… a lot… especially with FX in most studios.

Nevertheless, traversing an obscene number of links with GraphQL is actually quite fast; it is exactly what it was made for.