Seamless OP integration in Blender

Goal

Allow the use of Blender’s import tools (Append, Link, Asset Browser) without breaking OP compatibility.

This would only apply to interactions between Blender files.

State of the art

The Blender workflow with OP is currently being deeply refactored, following a proposal we submitted a few months ago: #3171

When testing this workflow in our production, we faced several issues because instances are managed by collections. This design has a major drawback: managing non-outliner datablocks (NODs) like materials, node groups (geometry nodes, compositing groups…), and textures is very tedious.
We started handling materials for the look family with collections, but loading one into a workfile creates an empty collection that holds the data, which is confusing for the user. If we extend this design to other NODs, it will become very messy.
Also, although the Asset Browser allows marking almost any kind of datablock as an asset, we cannot take advantage of it for materials, which is a real shame.

Proposal

Metadata management

During extraction, metadata is added to the datablock: the values we already know (asset name, subset name, parent, family…), plus some that are handled by dedicated behaviours.
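As an illustration, a minimal sketch of how such metadata could be attached, assuming it lives in a custom property on the datablock; the AVALON_PROPERTY key and the field names are illustrative, not the actual OP schema:

```python
import bpy

AVALON_PROPERTY = "avalon"  # hypothetical key for the OP metadata

def set_op_metadata(datablock: bpy.types.ID, data: dict):
    """Store OP metadata as a custom property; works for any ID datablock,
    outliner or not (materials, node groups, images...)."""
    datablock[AVALON_PROPERTY] = {
        "asset_name": data["asset"],
        "subset": data["subset"],
        "family": data["family"],
        "parent": data["parent"],
    }
```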

Representation ID set after asset integration

Using a subprocess and a script, we update the integrated blend files with the representation ObjectId after the document has been created in the DB, via a class IntegrateBlenderAsset(IntegrateAsset). This makes the Asset Browser usable for datablocks, because the metadata required by OP is stored in the published file rather than set on the instance at load time.
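A rough sketch of the subprocess step, assuming a headless Blender run; the helper name and argument handling are illustrative:

```python
import subprocess

def stamp_representation_id(blender_exe, blend_path, update_script, repre_id):
    """Re-open the published .blend in background mode and write the
    representation ObjectId into the datablock metadata."""
    subprocess.run(
        [
            blender_exe,
            "--background", str(blend_path),
            "--python", str(update_script),
            "--", str(repre_id),  # args after "--" reach the script via sys.argv
        ],
        check=True,
    )
```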

Loader type determined by handler

Determine the OP loader with a handler and make the avalon content compliant with the scene inventory requirements.
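A minimal sketch of such a handler, assuming metadata is stored under a custom property as above; the loader names and the Link/Append mapping are assumptions:

```python
import bpy
from bpy.app.handlers import persistent

AVALON_PROPERTY = "avalon"  # hypothetical metadata key

@persistent
def set_loader_names(_dummy):
    """After a file loads, tag published datablocks with a loader name
    matching how they actually entered the file."""
    for datablock in bpy.data.user_map().keys():
        meta = datablock.get(AVALON_PROPERTY)
        if meta is None or meta.get("loader"):
            continue
        # linked datablocks carry a library reference; appended ones are local
        meta["loader"] = "LinkLoader" if datablock.library else "AppendLoader"
        # note: edits on linked IDs live only in this session; a real
        # implementation might track loaders at scene level instead

bpy.app.handlers.load_post.append(set_loader_names)
```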

List of publishable instances held in Blender Scene

Every scene where an instance is created will hold a CollectionProperty called ‘op_instances’: a list of the created instances, which will be collected at publish time.
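A minimal sketch of the property registration; the OpInstance fields are illustrative, not the final schema:

```python
import bpy

class OpInstance(bpy.types.PropertyGroup):
    """One publishable instance created in this scene."""
    subset: bpy.props.StringProperty(name="Subset")
    family: bpy.props.StringProperty(name="Family")
    datablock_name: bpy.props.StringProperty(name="Datablock")

def register():
    bpy.utils.register_class(OpInstance)
    bpy.types.Scene.op_instances = bpy.props.CollectionProperty(type=OpInstance)

def unregister():
    del bpy.types.Scene.op_instances
    bpy.utils.unregister_class(OpInstance)
```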

UI panel for managing publishable instances

Creating a panel integrated into Blender to create and manage these instances seems the best option to us. It will be faster, more consistent, and will avoid the Qt crashes Blender can encounter. We can associate a family with each datablock type to create the appropriate instance.
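A sketch of what such a panel could look like, building on the op_instances property above; the panel location and the create operator idname are hypothetical:

```python
import bpy

class OP_PT_instances(bpy.types.Panel):
    """Sidebar panel listing the scene's publishable instances."""
    bl_label = "OP Instances"
    bl_space_type = "VIEW_3D"
    bl_region_type = "UI"
    bl_category = "OpenPype"

    def draw(self, context):
        layout = self.layout
        for instance in context.scene.op_instances:
            layout.label(text=f"{instance.subset} ({instance.family})")
        layout.operator("op.create_instance")  # hypothetical create operator

bpy.utils.register_class(OP_PT_instances)
```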

Renaming datablock at instance creation

The main constraint for the user will be that the datablock for which an instance is created will be renamed according to the OP naming convention. This guarantees consistency across all users’ workfiles and publications.
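For illustration, a tiny sketch of the rename; the naming pattern is an assumption, not the actual OP convention:

```python
def rename_for_op(datablock, asset_name: str, subset: str):
    """Enforce a predictable name at instance creation,
    e.g. 'characterA_modelMain' (pattern assumed for illustration)."""
    datablock.name = f"{asset_name}_{subset}"
```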

Auto clear instances list at save

Because a datablock for which a publish instance exists can be deleted, the instances list could become inconsistent with the datablocks actually present. We propose to test, in a handler run before saving, whether the datablock targeted by each OP instance still exists, and to delete the instance if it doesn’t.
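A minimal sketch of the cleanup handler; it checks bpy.data.objects only for brevity, whereas real code would look up the collection matching each datablock’s type:

```python
import bpy
from bpy.app.handlers import persistent

@persistent
def clear_stale_instances(_dummy):
    """Before saving, drop op_instances entries whose target is gone."""
    for scene in bpy.data.scenes:
        # iterate backwards so removals don't shift pending indices
        for i in reversed(range(len(scene.op_instances))):
            if scene.op_instances[i].datablock_name not in bpy.data.objects:
                scene.op_instances.remove(i)

bpy.app.handlers.save_pre.append(clear_stale_instances)
```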

What about compatibility with other software?

Here we are only talking about using Blender features to import from other blend files. The features we want to support don’t work for other file types.
For cross-DCC formats, the Load step will set the required metadata, the same way it currently works.

Hi, a few notes.
We’ll replace Mongo ids with something else from v4 on. Also, from the publishing point of view, it seems useless. Why create an ObjectId before publishing? The only place during publishing that should care about id creation is IntegrateAsset, or am I missing something?

With the new creation/publishing system, the knowledge of “where and how” to find created instances is up to the creators. Its UI is different from the current creator and publish UI, and the create logic has changed completely. I would maybe focus on offloading UIs to a different process that communicates with Blender, rather than writing UI inside Blender, as they change over time.

Thanks for your feedback.

When will this refactor land in v3, approximately?
Because datablocks are extracted and saved before integration, if we want to be able to match related representations, we need to create the ObjectId before publishing so we can add it to the datablock metadata. This way, the Load step doesn’t have to add any metadata.

I’m curious about your new creation/publish system, but we see a very high gain in integrating the logic into Blender’s UI. For Blender users, it is much more efficient. At the very least, both systems can coexist, with OP’s being the master.
Furthermore, the UI we propose is a bit different from how the current Creator works, because it can deal with NODs, which the current system will never be able to do.

Because datablocks are extracted and saved before integration, if we want to be able to match related representations, we need to create the ObjectId before publishing so we can add it to the datablock metadata.

I don’t know the Blender workflow, so I’m not sure I understand. How would the ObjectId help with that before integration? Before IntegrateAsset you can’t tell whether the integration will happen or not (or how…). Publishing is publishing and loading is loading. The loader adds metadata so you can handle versioning of representations, and that metadata should contain the id of the integrated representation (after integration).

I’m curious about your new creation/publish system, but we see a very high gain in integrating the logic into Blender’s UI. For Blender users, it is much more efficient. At the very least, both systems can coexist, with OP’s being the master.

The Publisher contains both the Creation and Publishing parts. Creators don’t just store data in the scene; they are also responsible for updating and removing it, collecting instances, adding additional attributes related to created instances, and more.

because it can deal with NODs, which the current system will never be able to do.

Don’t know what NODs are?

When will this refactor land in v3, approximately?

Which refactor? The new create/publish workflow is a host-specific change and requires rewriting the creators (and publish plugins), so we’re doing it host by host.

The loader adds metadata so you can handle versioning of representations, and that metadata should contain the id of the integrated representation (after integration).

Maybe I wasn’t explicit enough in my design proposal. This is exactly the topic. To be able to use Blender’s native tools for importing blend datablocks (like the Asset Browser), we need to be able to match imported datablocks with their related instances in OP. This implies relying not on Load metadata but only on the published datablocks.
The proposal is to create the ObjectId earlier and use it during integration for the representation creation (this system is not mandatory: if the ObjectId hasn’t been created earlier, one is created). Since the id is added to the datablock’s published metadata at publish time, it is easy to match it with the OP representation.
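A rough sketch of that fallback logic, assuming bson ObjectIds as in OP v3; the instance_data key is hypothetical:

```python
from bson import ObjectId

def get_or_create_representation_id(instance_data: dict) -> ObjectId:
    """Reuse a representation id created during extraction, or create one
    now, so integration never depends on the early id existing."""
    repre_id = instance_data.get("representation_id")
    if repre_id is None:
        repre_id = ObjectId()
        instance_data["representation_id"] = repre_id
    return repre_id
```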

Don’t know what NODs are?

Non-outliner datablocks, as explained in the proposal description.

we’re doing it host by host.

OK, nice! This is what we could call a hosts API refactor IMO; that’s why I was using this term. Do you have a schedule for Blender?

The proposal is to create the ObjectId earlier and use it during integration for the representation creation (this system is not mandatory: if the ObjectId hasn’t been created earlier, one is created). Since the id is added to the datablock’s published metadata at publish time, it is easy to match it with the OP representation.

I’m sorry, I’m not familiar with Blender (or 3D); I’m looking into it with an unenlightened coder’s view, because it looks like nobody else cares. And I still don’t understand why you would need the representation id before integration. If integration does not happen, what would you do with the metadata? You can (and should) “Load” the representation (or metadata) after integration, when you’re 100% sure the representations and their files were integrated. Why can’t that happen in a publish integrator plugin processed after IntegrateAsset, so you can just reuse the id created during IntegrateNew? I don’t like the idea that we would support defining the representation id outside of the IntegrateNew plugin.

This is my point of view, and I may be wrong.

If integration does not happen, what would you do with the metadata?

The published file would not be available then, and the metadata would be deleted with the temp extracted file. Only the next successful integration would be used in the Asset Browser.

Why can’t that happen in a publish integrator plugin processed after IntegrateAsset, so you can just reuse the id created during IntegrateNew?

Indeed, this solution would be cleaner. The thing is, it’s hard to modify already extracted files, because we are no longer in a Blender context and it doesn’t seem to be good practice (though we can do it carefully), but I might have missed something. I found it more optimized to add the ObjectId during the Extract process, because opening several published files might slow the integration process down more than doing it with the currently opened file. That being said, I haven’t benchmarked it and it might not be that heavy to process.

BTW, I’m not sure what the ObjectId would be used for. For OP “load compatibility” you need more information, e.g. the loader name, to be able to handle update/remove operations, so when would you use it?

Of course, the loader name is a topic in itself; it would be set at publishing, either with a default setting or with a Blender handler that checks that the datablock’s current import type matches the loader name.

My main goal is to make the great Asset Browser feature fully compatible with OP, which is impossible with the limitations of the current system but achievable with small adjustments.

Okay, I’ve found a solution:

  • Update the integrated blend files with the representation ObjectId after the document has been created in the DB. (Done using a subprocess and a script.)
  • Determine the OP loader with a handler and make the avalon content compliant with the scene inventory requirements.

I’ll update the description message.

I have a question regarding the correct workflow for merging into OP.
The work I’m currently doing is based on #3463, and I was planning to integrate it into that very first PR about Blender’s workflow refactor, because it shapes how the workflow will look and makes it possible to use Blender as the main (or only) DCC.

Do you prefer we merge this work into #3463 or in a second PR?

#3463 is outdated at this moment, e.g. it doesn’t work with the new integrator, etc. So I think in this case you can merge them together.

@Tilix4 As mentioned on the PR, I’d love to know more about the current state of this development. I feel it’s a major push (but also a major change!) for the Blender integration, and I’d love to play with it, preferably in the new refactored way, before I get into maybe adding my own changes.

I feel like this massive PR is also holding up the conversion of the Blender publisher to the new publisher, since it touches most of the same code areas.

@iLLiCiT what’s the Ynput team’s view on this PR? Should we put in a joint effort to finalize it, e.g. helping with testing and documentation?

The PR is not in a state to be approved or merged. The main reason, from my point of view, is the added option to create/set the representation id before the integration plugin. This was reported to @Tilix4, who replied that he has a different solution which does not use that option, but that solution has not yet been proposed (or I didn’t see it).

There are also other issues, like sync server plugins inside Blender, a publish plugin imported inside another plugin, etc. At this moment the PR, or branch, is outdated. From my point of view, the PR should be closed and reopened with only the changes related to Blender, because right now it changes a lot of other stuff that I’m quite sure isn’t needed. The other changes make the review really complicated. Even if the Blender part were OK, we wouldn’t be able to approve it, because of the unrelated changes.

As far as I know, we don’t have any available work hours for Blender at this moment, so we can only review.

Hi, I feel very sorry for taking so long to finish it, but you know how production can smash our expectations…

In fact, the last major change I’d like to merge before calling it ready for testing is a refactor of the loader plugin system for a homogeneous and clean design. This refactor has already been started and shouldn’t take too long (at least I hope). That being said, if you want to play around without creating loaders, please go ahead, and feel free to give us feedback!

In fact, I did find a way to avoid creating/setting the representation id before the integration plugin, but I didn’t take the time to clean it up… Bad me.

Also, I’d very much appreciate an overview of which changes bother you in the files you think are not needed, because we tried to put only relevant changes in this PR, and we may have missed things.

Technical question: is it a problem if we refine this PR instead of creating a new one? I think it is better to keep the development history in the current PR.

Great to hear about the progress. I think it would be possible to clean it up and refine it within the same PR. As long as the end result is clean, the history doesn’t matter, apart from keeping the whole genesis of the development for posterity, of course.

We’ll have a look at listing out the problematic points to make it clear. I’d very much like to get to a mergeable point, even though this might also be a good candidate for a first community addon for AYON, which would allow making it into a separate repo with its own maintainer, so studios could simply pick which approach they prefer in Blender.

Nevertheless, let’s have a look at what could be done to get this across the finish line.

Ooooh, we understand more than you might imagine :slight_smile: don’t worry about it.