AYON / Openpype Publish process - Development guide

Introduction

Walk through this guide to learn how support for a new product-type is added to AYON/OpenPype.

Publish plugins are DCC specific but they follow the same structure.
I will give examples from Houdini because I prefer concrete examples over arbitrary ones.
So, consider finding the respective methods in other DCCs.

Publish Process is built on top of pyblish where:

  1. you mark a group of data with special marks using a creator tool
  2. continue work as usual
  3. on publishing, the publish tool will
    a. look for those groups with special marks and list them as publish instances
    b. run some checks to validate their data
    c. export these groups if the checks passed
    d. run the integrator tool if the export was successful
  4. the integrator tool will move and rename the exported data
  5. post-integration plugins run
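The flow above can be sketched as a minimal, purely illustrative loop. Note that the function names here are hypothetical stand-ins for the real pyblish/AYON machinery, not actual API:

```python
# Minimal sketch of the publish flow described above (hypothetical stand-ins,
# not the real pyblish/AYON API).

def collect(scene):
    """Step 3a: find groups marked by the creator tool -> publish instances."""
    return [node for node in scene if node.get("is_publish_instance")]

def validate(instance):
    """Step 3b: run checks; return a list of error messages."""
    errors = []
    if not instance.get("variant"):
        errors.append("Missing variant name")
    return errors

def extract(instance):
    """Step 3c: export the instance to a staging directory."""
    return {"stagingDir": "/tmp/staging",
            "files": ["{}.abc".format(instance["variant"])]}

def integrate(representation):
    """Steps 3d-4: move/rename exported data to the publish location."""
    return ["/publish/" + f for f in representation["files"]]

scene = [
    {"is_publish_instance": True, "variant": "Main"},
    {"is_publish_instance": False},  # regular scene data, not published
]

for instance in collect(scene):
    if not validate(instance):  # only export when all checks pass
        published = integrate(extract(instance))
        print(published)  # ['/publish/Main.abc']
```

Each real stage is a set of plugins ordered by pyblish, as detailed in the sections below.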

Visualization:

Key concepts

What is publishing?

In simple words, it’s about exporting your work and sharing it with colleagues.
However, one key component of the publish process is validation, so that errors can be caught early, before any work is exported and shared.

The pyblish component, together with the AYON wrappers and other top-level functions, makes AYON special and powerful as pipeline management software.

This wiki page, which tells how the idea of pyblish started, is worth reading:
What is publishing · pyblish/pyblish Wiki · GitHub

Publish instances and Products

In order for AYON to publish your work, you would need to specify three things:

  1. your work, e.g. by selecting it
  2. the product-type of your work, e.g. a Model
  3. the name that will be given to your published work (Variant name)

Then, AYON will use these inputs and create a publish instance for you.

So, each publish instance must be associated with exactly one product-type.
This is very helpful during the validation process, as each publish instance will run through different validations related to its product-type.

Each publish instance contains a lot of data related to your work, e.g.

  • asset : Asset name
  • task
  • family : product-type
  • variant
  • subset : which in this case is family + variant
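To make the data above concrete, here is a hypothetical sketch of an instance's data, including how a subset name is typically composed (the actual template is configurable in the subset name profiles settings, so treat this as an illustration only):

```python
# Hypothetical sketch of publish instance data. The subset_name helper
# mimics a default-style template (family + capitalized variant); the real
# template is configurable in subset_name_profiles settings.

def subset_name(family, variant):
    # e.g. "model" + "Main" -> "modelMain"
    return "{}{}".format(family, variant[0].upper() + variant[1:])

instance_data = {
    "asset": "heroCharacter",                # asset name
    "task": "modeling",
    "family": "model",                       # product-type
    "variant": "main",
    "subset": subset_name("model", "main"),  # "modelMain"
}
print(instance_data["subset"])  # modelMain
```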

This data is saved as attributes on your work itself!

Recall: you mark a group of data with special marks using a creator tool.

Product-Types Vs Representations

It’s super important to differentiate between product-types and representations when dealing with AYON/OpenPype, and in short:

  • Product-Type: A product containing a specific type of information, e.g.
    • pointcache/animation: a character animation output as cache of its geometry only (no controls, no bones; just geometry cached)
    • camera: a single camera
    • model: a static (clean) mesh adhering to studio rules like naming conventions, poly-flow, usually intended to be used as the clean geometry representing an asset.
  • Representation: A file output for a product type.
    • e.g. the data of pointcache product-type could be stored in any format supporting geometrical caches, like Alembic, USD, bgeo, etc.
    • e.g. the data of camera product-type could be stored in any format supporting a single animated camera, e.g. Alembic, Maya scene, USD, etc.

Note how representation is just a different file format for the same product type - as such, a single product type could have multiple representations which should technically (for as far as the file formats allow) contain all the data of that product-type.

As such, another example (that might currently not exist) could be:

  • A skeletal animation (with blendshapes) product type could be stored in multiple representations that support it, e.g. FBX, USD and GLTF all support skeletal animation.

I had a misconception when I was thinking of implementing an FBX product type, which didn’t make sense because FBX can be a representation of various product types such as rigs, geometry, (basic) materials, and even textures.
Then, I learnt that my implementation was actually a geometry product-type which would be represented as .fbx, and the whole thing eventually turned into an Unreal Static Mesh product-type.

Find the full story here Unreal Static Mesh PR

Also, Here are some visual examples from Houdini:
example 1: Same product-types, different representations

  • pointcache product-type in Houdini has two representations, Alembic and Bgeo.
    Each has its own ROP export node, so they are implemented as two different product types exporting the same data.
    (screenshot: the Alembic and Bgeo product-types)

example 2: Different product-types, same representation.

  • Both pointcaches and cameras can have the same representation, e.g. saved as .abc.
    However, they are two distinct product-types, each with its own validations!
    (screenshot: the Alembic Pointcache and Alembic Camera product-types)

example 3: Same representation, different export options.

This has nothing to do with the product-type

In some cases a representation can support different extensions, e.g.

  • USD supports .usd, .usdc and .usda

Another example:

  • Bgeo representation in Houdini has many export options, but they are still the same representation


Here’s proof: I can swap asset versions represented by different bgeo export options without any issues!

Context Vs Instances

Each DCC session/workfile has a pyblish context, and each publish product is a pyblish instance.

The context denotes the workfile path, project name, asset name, task name, and many more.
Each instance denotes a particular product-type.

from openpype.pipeline import registered_host
from openpype.pipeline.create import CreateContext

host = registered_host()
context = CreateContext(host)

print(
    "Project '{}', Asset '{}'".format(
        context.project_name, context.get_current_asset_name()
    )
)

for instance in context.instances:
    print("Instance: {}".format(instance.name))
    # you can edit instances here
    # if some_condition:
    #   instance.data[key] = value
    pass

# To save change to context and instances
# context.save_changes()

Publish plugins can be configured to work at different levels, i.e. context level and instance level.
For example:

  • A validator that checks some data of a particular product type must inherit pyblish.api.InstancePlugin and specify the product-type in its families attribute, example: Houdini Alembic product type validator

  • A collector that collects data regarding the DCC session itself should use
    pyblish.api.ContextPlugin, example context plugin: collect instances
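The two plugin levels can be sketched like this. The base classes below are simplified stand-ins for pyblish.api.ContextPlugin and pyblish.api.InstancePlugin, and the plugin names are hypothetical:

```python
# Sketch of the two plugin levels. The base classes are simplified
# stand-ins for pyblish.api.ContextPlugin / pyblish.api.InstancePlugin.

class ContextPlugin:        # stand-in: runs once per publish session
    def process(self, context): ...

class InstancePlugin:       # stand-in: runs once per matching instance
    families = []           # which product-types the plugin applies to
    def process(self, instance): ...

class CollectWorkfile(ContextPlugin):
    """Context level: collect data about the DCC session itself."""
    def process(self, context):
        context["workfile"] = "/path/to/scene.hip"

class ValidateAlembicNodes(InstancePlugin):
    """Instance level: only runs for 'pointcache' product-type instances."""
    families = ["pointcache"]
    def process(self, instance):
        assert instance.get("nodes"), "Instance has no nodes to export"

context = {}
CollectWorkfile().process(context)
print(context["workfile"])  # /path/to/scene.hip
```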

Create

Creates a group or export/write node for the selected objects if desired

In Maya : it creates a Set
In Houdini : it creates a ROP node
In Nuke : it creates a Write node

It’s not saved as an instance of a class; it’s just a group or a node marked with some extra parameters.

Creator class structure

Toke Stuart : To generate a new product type (family), you’d need to implement a creator for the new product type (family). If you follow the structure of the other creators, the new product type (family) instance should get picked up when publishing.

In Houdini
it’s required to inherit plugin.HoudiniCreator

  • Class Attributes
    • identifier
    • label
    • family
    • icon
    • default_variant (optional)
    • default_variants (optional)
  • Class Methods
    • create
      • set node type
      • create instance
      • get parms
      • set parms
      • Lock parameters if needed
    • get_pre_create_attr_defs (optional)
    • get_network_categories (optional, Houdini specific)
    • get_dynamic_data (optional)

In Maya
it’s required to inherit plugin.MayaCreator
in many cases you would rely on MayaCreator.create(),
so you would only need to set the class attributes

  • Class Attributes
    • identifier
    • label
    • family
    • icon
    • default_variant (optional)
    • default_variants (optional)
  • Class Methods
    • get_pre_create_attr_defs (optional)
    • get_dynamic_data (optional)
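Putting the attributes above together, a minimal creator skeleton might look like this. The base class here is a stand-in for plugin.MayaCreator / plugin.HoudiniCreator, and the identifier and variant values are hypothetical:

```python
# Hypothetical creator skeleton following the class structure above.
# "Creator" stands in for plugin.MayaCreator / plugin.HoudiniCreator.

class Creator:
    def create(self, subset_name, instance_data, pre_create_data):
        # the real base class creates the marker node/set and stores
        # instance_data as attributes on it
        return {"subset": subset_name, "data": instance_data}

class CreateModel(Creator):
    identifier = "io.example.create.model"   # hypothetical identifier
    label = "Model"
    family = "model"
    icon = "cube"
    default_variants = ["Main", "Proxy"]

creator = CreateModel()
instance = creator.create("modelMain", {"family": CreateModel.family}, {})
print(instance["subset"])  # modelMain
```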

get_pre_create_attr_defs

Adds settings to the publish node.

get_dynamic_data

In some cases you’d need a subset name template instead of the default product-type subset name.

It’s a two-step process:

  1. Implement get_dynamic_data
  2. Update Settings
  • In OpenPype: project_settings/global/tools/creator/subset_name_profiles
  • In AYON: Studio settings/Core/Tools/Creator/Product name profiles

Project Settings

The respective creator’s settings are fetched and applied to the creator class automatically.
For example, in Houdini’s CreateArnoldAss,
the ext and default_variants class attributes will be overridden by their values in settings automatically.

Collect

Collectors act as a pre-process for the validation stage.
They are mainly used to update instance.data.

Collector class structure

It’s required to inherit pyblish.api.InstancePlugin or pyblish.api.ContextPlugin

  • Class Attributes
    • hosts
    • families
    • label
    • order
    • enable (optional)
  • Class Methods
    • process
    • get_attribute_defs (optional, it is associated with OpenPypePyblishPluginMixin or OptionalPyblishPluginMixin )
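A minimal collector following this structure might look like the sketch below. InstancePlugin is a simplified stand-in for pyblish.api.InstancePlugin, and the plugin name and frame values are hypothetical:

```python
# Sketch of a collector that updates instance.data before validation.
# InstancePlugin is a simplified stand-in for pyblish.api.InstancePlugin.

class InstancePlugin:
    order = 0  # collectors run first (pyblish.api.CollectorOrder == 0)
    def process(self, instance): ...

class CollectFrameRange(InstancePlugin):
    hosts = ["houdini"]
    families = ["pointcache"]
    label = "Collect Frame Range"

    def process(self, instance):
        # in a real collector these values would be read from the scene
        instance["data"].update({"frameStart": 1001, "frameEnd": 1050})

instance = {"data": {}}
CollectFrameRange().process(instance)
print(instance["data"])  # {'frameStart': 1001, 'frameEnd': 1050}
```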

Attribute Defs

It’s similar to get_pre_create_attr_defs, which adds user-accessible attributes in the publisher UI.
It’s done by OpenPypePyblishPluginMixin or OptionalPyblishPluginMixin secondary inheritance.
It requires calling self.get_attr_values_from_data in process to get these attribute values.

Enable and disable Collectors

The minimal setup is to add an enable class attribute and an "enable" key in the collector’s respective settings. Example: search the OP repo for CleanUpFarm.

It’s possible to make collectors optional, Check Optional Validators

Collector Project Settings

Collector class attributes will be overridden automatically by their respective values in the collector’s settings.
To get settings of other plugins inside your collector, Jump to Get Settings

Validate

Validators are used to verify the work of artists
by running checks that automate the approval process.

Validator class structure

It’s required to inherit pyblish.api.InstancePlugin or pyblish.api.ContextPlugin

  • Class Attributes
    • hosts
    • families
    • label
    • optional (optional)
    • order (optional)
    • actions (optional)
  • Class Methods
    • process
    • get_invalid (optional, it is associated with SelectInvalidAction)
    • repair (optional, it is associated with RepairAction)

get_invalid should return None if there are no problems.
Otherwise, it should return the nodes associated with the problem.
process makes use of get_invalid.
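The get_invalid/process pattern can be sketched like this. InstancePlugin is a stand-in for pyblish.api.InstancePlugin, and the naming rule is hypothetical:

```python
# Sketch of the get_invalid/process pattern described above
# (InstancePlugin is a stand-in for pyblish.api.InstancePlugin).

class InstancePlugin:
    def process(self, instance): ...

class ValidateNodeNames(InstancePlugin):
    families = ["model"]
    label = "Validate Node Names"

    @classmethod
    def get_invalid(cls, instance):
        # return None when there are no problems,
        # otherwise the nodes associated with the problem
        invalid = [n for n in instance["nodes"] if not n.startswith("GEO_")]
        return invalid or None

    def process(self, instance):
        invalid = self.get_invalid(instance)
        if invalid:
            raise ValueError("Invalid node names: {}".format(invalid))

ValidateNodeNames().process({"nodes": ["GEO_body"]})       # passes
print(ValidateNodeNames.get_invalid({"nodes": ["body"]}))  # ['body']
```

Keeping the check in get_invalid (rather than inline in process) is what lets SelectInvalidAction reuse it to select the offending nodes in the DCC.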

Validation Error Types

PublishValidationError

A basic error report display.
Commonly used arguments: message and title.

PublishXmlValidationError

An advanced error report display which loads an XML file.
Commonly used arguments:

  • plugin which can be passed as self
  • message which is the same as in PublishValidationError
  • formatting_data which is a dictionary of data that maps to the curly-bracket variables in the XML

The XML file must be saved in the publish/help directory and must have the same name as the validator file name (just replace .py with .xml).
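The formatting_data substitution works like standard curly-bracket string formatting. A small sketch with a hypothetical help-file description:

```python
# Sketch of how formatting_data maps to the curly-bracket variables
# inside the validator's help XML (the description text is hypothetical).

xml_description = (
    "Found invalid nodes: {nodes}\n"
    "Rename them to match the '{prefix}' prefix."
)

formatting_data = {"nodes": "body, head", "prefix": "GEO_"}
message = xml_description.format(**formatting_data)
print(message)
```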

Optional Validators

It’s done by OptionalPyblishPluginMixin secondary inheritance.
It requires adding an optional class attribute and adding new project settings
for your validator.

You can use "template_data", which offers 3 keys ("enabled", "optional", "active"):

...
"template_data": [
          {
              "key": "Validator class name",
              "label": "Validator Label"
          },
...

Alternatively, you can use the minimal setup like Enable and disable Collectors

Use existing actions

Implemented actions can be found in {host}/api/action.py.
You can use them by importing them into your script,
e.g.

from openpype.hosts.houdini.api.action import (
    SelectInvalidAction,
    SelectROPAction,
)

Create new Actions

Each validator can have a single repair action which calls the repair method.
To create a repair action, you only make a class that inherits the RepairAction class,
then implement your action in the repair method.

Also, you can create as many actions as you want in {host}/api/action.py;
follow the structure of the other actions.

Validator Project Settings

Validator class attributes will be overridden automatically by their respective values in the validator’s settings.
To get settings of other plugins inside your validator, Jump to Get Settings

Extract

Extractors are used to generate output and update the representation dictionary.

Extract class structure

It’s required to inherit publish.Extractor

  • Class Attributes
    • hosts
    • families
    • label
    • order
  • Class Methods
    • process

Extract Logic

  • get rop node
  • render rop
  • get required data
  • update instance data
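The steps above can be sketched as a minimal extractor. Extractor is a stand-in for openpype's publish.Extractor, and the paths, node names, and staging helper are hypothetical:

```python
# Sketch of the extract logic above (Extractor is a stand-in for
# openpype's publish.Extractor; paths and names are hypothetical).

class Extractor:
    def staging_dir(self, instance):
        # the real base class provides a temporary staging directory
        return instance.setdefault("stagingDir", "/tmp/staging")

class ExtractPointcache(Extractor):
    hosts = ["houdini"]
    families = ["pointcache"]
    label = "Extract Pointcache"

    def process(self, instance):
        staging_dir = self.staging_dir(instance)
        file_name = "{}.abc".format(instance["subset"])
        # ... here the real plugin would render the ROP to staging_dir ...
        representation = {
            "name": "abc",
            "ext": "abc",
            "files": file_name,      # single file -> a string
            "stagingDir": staging_dir,
        }
        instance.setdefault("representations", []).append(representation)

instance = {"subset": "pointcacheMain"}
ExtractPointcache().process(instance)
print(instance["representations"][0]["files"])  # pointcacheMain.abc
```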

About Representations

Requires:
    instance.data['representations'] - must be a list and each member
    must be a dictionary with the following data:
        'files': list of filenames for a sequence, string for a single file.
                 Only the filename is allowed, without the folder path.
        'stagingDir': 'path/to/folder/with/files'
        'name': representation name (usually the same as the extension)
        'ext': file extension
    optional data:
        'frameStart'
        'frameEnd'
        'fps'
        'data': additional metadata for each representation.
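A small sanity-check of those requirements; the helper below is illustrative, not part of the AYON API:

```python
# Illustrative checker for the representation requirements listed above
# (not part of the AYON API).

REQUIRED_KEYS = {"files", "stagingDir", "name", "ext"}

def check_representation(representation):
    missing = REQUIRED_KEYS - set(representation)
    if missing:
        raise ValueError("Representation missing keys: {}".format(sorted(missing)))
    # a string for a single file, a list of filenames for a sequence
    assert isinstance(representation["files"], (str, list))

check_representation({
    "name": "exr",
    "ext": "exr",
    "files": ["render.1001.exr", "render.1002.exr"],  # sequence -> list
    "stagingDir": "/tmp/staging",
    "frameStart": 1001,   # optional
    "frameEnd": 1002,     # optional
})
print("representation OK")
```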

Optional extractors

Similar to Optional Validators

Extractor Project Settings

Extractor class attributes will be overridden automatically by their respective values in the extractor’s settings.
To get settings of other plugins inside your extractor, Jump to Get Settings

Integrate (automated)

Integrate process is handled by IntegrateAsset in openpype/plugins/publish/integrate.py

It moves exported/rendered files to their publish path and does some automation that you don’t have to worry about.

Integrate requires:

  • Registering new families in integrate.py
  • Making sure that instance.data['representations'] is correct

Load

Adds a load command for product types.

Load class structure:

It’s required to inherit load.LoaderPlugin

  • Class Attributes
    • label
    • families
    • representations
    • order
    • icon
    • color
  • Class Methods
    • load
    • update
    • remove
    • switch

Load logic

  1. get_file_path
  2. get necessary data
  3. create_load_node_tree
  4. return containerised_nodes (this does some automation that you don’t have to worry about)
  5. created nodes are moved to the AVALON_CONTAINERS subnetwork
  6. extra parameters are added

You can customize loading by editing create_load_node_tree
where you can add more nodes or attributes.
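The load logic above can be sketched as follows. LoaderPlugin is a stand-in for openpype's load.LoaderPlugin, and the node structure and paths are hypothetical:

```python
# Sketch of the load logic above (LoaderPlugin is a stand-in for
# openpype's load.LoaderPlugin; node data and paths are hypothetical).

class LoaderPlugin:
    ...

class AbcLoader(LoaderPlugin):
    label = "Load Alembic"
    families = ["pointcache", "camera"]
    representations = ["abc"]

    def load(self, context, name=None, namespace=None, options=None):
        file_path = context["representation"]["path"]  # 1. get file path
        nodes = self.create_load_node_tree(file_path)  # 3. build node tree
        # 4-6. the real pipeline containerises these nodes, moves them under
        # AVALON_CONTAINERS and adds the extra tracking parameters
        return nodes

    def create_load_node_tree(self, file_path):
        # customize loading here: add more nodes or attributes
        return [{"type": "alembic", "fileName": file_path}]

context = {"representation": {"path": "/publish/pointcacheMain.abc"}}
print(AbcLoader().load(context))
```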

Inventory Actions

They add actions in Manage (Inventory) for users to perform some actions on loaded assets.

Inventory Action class structure

It’s required to inherit InventoryAction

  • Class Attributes
    • label
    • icon
    • color
    • order
  • Class Methods
    • process
    • is_compatible (optional)

is_compatible

It can associate an inventory action with a particular loader.

Attachments

Code Examples

These examples are from Houdini.

UML Diagrams

Made with pynsource

Creator Collector Validator example 1 Validator example 2
Extractor Loader Inventory Action

Further Reading

Order

Order is an int value that defines the order in which publish plugins are called.

"""Common order values. """
import pyblish.api
from openpype.pipeline.publish import ValidateContentsOrder

# Collector 
order = pyblish.api.CollectorOrder  
print(order) # equals 0
# Validator
order = pyblish.api.ValidatorOrder 
print(order) # equals 1
order = ValidateContentsOrder + 0.1
print(order) # equals 1.2
# Extractor 
order = pyblish.api.ExtractorOrder 
print(order) # equals 2
# Integrator 
order = pyblish.api.IntegratorOrder
print(order) # equals 3

Toke Stuart: I think one of the design flaws of Pyblish is its use of order for scheduling plugins. The amount of plugins can quickly get complicated to a point where its hard to track dependencies between plugins.

Working with Settings

Get Settings

# Run in OP/AYON Launcher console
from openpype.settings import get_project_settings

project_name = "RnD"
project_settings = get_project_settings(project_name)
print(project_settings["maya"]["publish"]["ValidateMayaUnits"])

# Run inside any host
from openpype.settings import get_current_project_settings
from openpype.pipeline.context_tools import get_current_host_name

project_settings = get_current_project_settings()
print(project_settings["houdini"]["publish"]["ValidateWorkfilePaths"])

host_name = get_current_host_name()
print(project_settings[host_name]["publish"]["ValidateWorkfilePaths"])

# Publish plugins get settings by implementing the apply_settings method.
# This is only required if you want to get settings of another publish plugin,
# for example to get 'CreateUnrealStaticMesh' settings inside the
# 'ValidateUnrealStaticMeshName' plugin:

@classmethod
def apply_settings(cls, project_settings, system_settings):

    settings = (
        project_settings["houdini"]["create"]["CreateUnrealStaticMesh"]
    )
    cls.collision_prefixes = settings["collision_prefixes"]
    cls.static_mesh_prefix = settings["static_mesh_prefix"]

Add new Settings

First of all, settings are static (they don’t change unless you change them manually).
It’s a two-step process:

  1. Update Schemas/Settings
  2. Update Settings default values

Attention: If you do settings changes in OpenPype, also do the change in AYON addons settings.
Always Match Settings.

[OpenPype]

Update Schemas
Openpype Settings Schema Example File: schema_houdini_create.json

In OpenPype, there are two ways:

  1. Settings with custom keys
  2. Settings with template keys

Custom keys

{
    "type": "dict",
    "collapsible": true,
    "key": "CreateArnoldAss",
    "label": "Create Arnold Ass",
    "checkbox_key": "enabled",
    "children": [
        {
            "type": "boolean",
            "key": "enabled",
            "label": "Enabled"
        },
        {
            "type": "list",
            "key": "default_variants",
            "label": "Default Variants",
            "object_type": "text"
        },
        {
            "type": "enum",
            "key": "ext",
            "label": "Default Output Format (extension)",
            "multiselection": false,
            "enum_items": [
                {
                    ".ass": ".ass"
                },
                {
                    ".ass.gz": ".ass.gz (gzipped)"
                }
            ]
        }
    ]
},

Settings Template

Notice that you won’t find CreateArnoldAss in the "template_data" list because it was already defined.

{
    "type": "schema_template",
    "name": "template_create_plugin",
    "template_data": [
        {
            "key": "CreateAlembicCamera",
            "label": "Create Alembic Camera"
        },
        ...
    ]
}

Update settings default values
Openpype Default Settings Values Example File: houdini.json

Each key previously added in the schema should have a matching value in the defaults.

For example, check the settings related to "CreateArnoldAss" and "CreateRedshiftROP" in the above example file.
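For instance, matching defaults for the "CreateArnoldAss" schema shown earlier could look like this (values are illustrative, written as a Python dict for readability):

```python
# Illustrative default values matching the "CreateArnoldAss" schema above;
# every key defined in the schema gets a matching default value.

houdini_create_defaults = {
    "CreateArnoldAss": {
        "enabled": True,
        "default_variants": ["Main"],
        "ext": ".ass",
    }
}
print(houdini_create_defaults["CreateArnoldAss"]["ext"])  # .ass
```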

[AYON]

AYON Settings Example File: publish_plugins.py

Update Schemas Equivalent
Currently, it’s done by editing the corresponding AYON settings file in server_addon.
The above example file is the corresponding AYON Houdini settings file,
where we add classes that inherit BaseSettingsModel.

Update settings default values Equivalent
By scrolling a little bit down and adding a new dictionary item.

Regarding product-type versus representation

Are these also all a single representation? E.g. is .bgeo the same representation as .bgeosc in the final publish?

What is the 1 type of data? This will need extra clarification of what this means (and also WHY AYON enforces it like that)

Could you provide an example of those multiple FBX product-types? And with your explanations, also explain how they differ from just being a different representation?


I think it’s much easier to first explain what Product Type and Representation are intended to be with a few examples instead of quickly deep-diving into it.


Yes, .bgeo is the same representation as .bgeosc.
I made a quick test: I published the same instance with all the different Bgeo flavors and OP was fine!

I think it should be rephrased, however I don’t know how at the moment.
All I know is that we need to specify a particular product type because plugins can be so specific, for example Alembic validators differ from Bgeo validators,
and Alembic pointcache validators differ from Alembic camera validators.
I’ll try :ok_hand:


Post has been updated.

From this text, I understand that a Product-Type does not decide which Representation will later be used.
So, the “PointCache” Product-Type does not necessarily lead to a Bgeo file format, as it could be an abc file.


But this screenshot, on the contrary, lets you choose a Product-Type (“publish type” in the GUI) that already chooses between a Bgeo or an abc file format.

Is it the right place to decide what representation we choose ?

Most of the time, artists won’t worry about how their work is represented, and they don’t need to, as it’s handled by the pipeline.

However, it’s possible to give artists the choice of the format that seems convenient to them. This can be achieved by adding options in the creator UI.

In contrast, Houdini (where the screenshot above was taken) followed a different methodology where we chose to expose the representation type explicitly along with the product type, as e.g.
Bgeo point caches require quite distinct validations from Abc point caches.

This is actually a long-standing pet peeve of mine. We should not be embedding the representation information like this into a creator; instead the user should choose to create Point Cache and then enable/disable representations in the instance options.

Thank you, the differentiation of product type and representation you put down is almost perfect. There is one extra caveat though that, conceptually, doesn’t fit those descriptions, but in practice is almost unavoidable.

Representations of a single product do not need to contain all data of that product type. A common example is a model product type. It could very well have 3 representations:

  • ma (geometry data as maya ascii)
  • abc (geometry data as Alembic)
  • mov (turnaround video of the model)

The key point is that the video doesn’t contain practically any of the actual model data, yet it accurately “represents” it in a different way. Keeping this in mind is crucial and is the main reason why it is called representation and not a file format.

Another example is with plates or render. plateMain coming from set, could be published as representations

  • exr (raw exr sequence for comp and further work)
  • h264_noHandles (preview with handles cut off, for easier online preview)

That noHandles representation is actually crucial to have because it is not possible to automatically hide handles in online review tools like ftrack review or SyncSketch, and by keeping them there, the director will just be annoyed and not see the shot hookups correctly. However, those two clearly don’t hold the same data anymore. Instead they merely represent the state of work at a given point in time in some useful fashion.

1 Like