Enhancing Houdini integration

Thank you for starting this document. Let me first try to give you a collection of my thoughts, and then I will respond to the questions you raised in another reply.

As I said in another post, my practical experience with OP in Houdini is still quite recent and minimal, and I might be missing some of the functionality that already exists… so please correct me if I get anything wrong.

In my experience, the main problem with OP in Houdini is that it's too opinionated about how artists need to work and about the outputs it supports, and it's very different from the normal node-based workflow they are used to (Houdini artists don't want, and shouldn't need, to open a separate UI to run a cache/extract/publish, load "subsets" or manage the loaded versions; they should be able to do all of that through the parameter interfaces of the nodes). Houdini already provides multiple ways of writing data out to disk, either locally or on the farm through vanilla farm orchestrators (as I described here https://github.com/ynput/OpenPype/pull/5621#issuecomment-1732166830), and most artists and studios will have a bunch of utility tools for caching and rendering and their own very particular ways of working, with parameter presets on the vanilla Houdini nodes or HDAs that export custom data. That's the beauty of Houdini. However, if you want to publish any of those outputs to the OP database right now, you are almost forced to stick to its Creator nodes, which until very recently you could only create through the Creator UI rather than like any normal node in Houdini. Thankfully we can now also create the Creator nodes directly through the tab menu, but even then artists don't remember they have to create a different node, the nodes are too rigid with their hard-coded parameters, and they don't cover all the types of outputs you'd want to be able to publish.

The first MVP of the OP integration for Houdini should just provide a way to publish any of the generated outputs to the database so other downstream artists can consume them. OP shouldn't have an opinion on whether the family has to be called "mantra_rop", "arnold_rop", or any of the other hard-coded family names; most outputs are simply either a single file or a sequence, and the studio should be free to choose how they want to name that output in the database, which metadata to include and which "family" to put it under ("render", "geometry", "base_model", "geocache", "reference", "contact_sheet", "aov", "image_sequence", "groom" or "potato"). OP's barebones shouldn't have an opinion on which validations you need to run for these either. That thin integration would be very easy to adapt to most studios, and any Houdini TD would be able to understand it and build upon it. I could have my own extractors that create textures, grooms, curves, volumes, models, geometry caches, terrains… and I would simply use the OP publisher as a way to register those to the database so they are version controlled and follow the studio's naming conventions (although even that shouldn't be a hard requirement; a studio could trust their artists' common sense and not enforce a convention). What's clear is that we don't need to enforce any naming conventions or workflow during the creation/extraction phase; you only apply those when you register the data at publish time.

The most barebones integration could start with these three nodes:

Generic ROP Publish (OP/Ayon Publisher)
Given a path to disk (or multiple paths, to be able to write multiple representations), a frame range (or a flag for whether it's a single file), a subset name, a family and the destination context… when the node cooks it simply copies (or hardlinks) that data to the publish directory and registers it in the database.
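
Something like this rough sketch of the node's cook logic is all I have in mind (the parm names and the `register_version` call are made up; the real registration would be whatever thin OP/Ayon call ends up existing, and sequence handling is omitted for brevity):

```python
import os
import shutil


def register_version(asset, subset, family, files):
    """Placeholder for the thin OP/Ayon database registration call."""
    print("registering", asset, subset, family, files)


def publish_rop_cook(rop_node, publish_root):
    """Copy (or hardlink) the written file to the publish area and register it."""
    src_path = rop_node.evalParm("file_path")   # path the artist already wrote to
    subset = rop_node.evalParm("subset")        # e.g. "main"
    family = rop_node.evalParm("family")        # e.g. "geocache", "render", "potato"
    asset = rop_node.evalParm("asset")          # destination context

    dst_path = os.path.join(publish_root, asset, subset, os.path.basename(src_path))
    os.makedirs(os.path.dirname(dst_path), exist_ok=True)

    # Hardlink when possible so heavy caches aren't duplicated on disk.
    try:
        os.link(src_path, dst_path)
    except OSError:
        shutil.copy2(src_path, dst_path)

    register_version(asset=asset, subset=subset, family=family, files=[dst_path])
```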

SOP Cache node (OP/Ayon Cacher)
Basically a node like the one @MichaelBlackbourn described in AYON Houdini Universal Cache Node - #2 by MichaelBlackbourn, which internally would simply contain a ROP network that references the Generic ROP Publish through channel references, with sensible defaults for how to publish the generated SOP output.
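
For the channel-reference part, I'm imagining something as simple as this when building the HDA (node paths and parm names are made up for illustration):

```python
import hou


def link_publish_parms(cache_hda):
    """Point the embedded publish ROP's parms at the HDA's top-level parms."""
    publish_rop = cache_hda.node("rop_network/generic_publish")
    for parm_name in ("asset", "subset", "family", "file_path"):
        # chs() pulls the value from the HDA's own parameter interface, so the
        # artist only ever interacts with the HDA parms.
        publish_rop.parm(parm_name).setExpression(
            'chs("../../{}")'.format(parm_name), hou.exprLanguage.Hscript
        )
```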

SOP Loader node (OP/Ayon Loader)
Another utility SOP HDA with context parameter dropdowns to choose the "asset" (defaulting to the current one in the session), the "family" (filtered to the families that are SOP-supported and can be loaded with the nodes contained in this HDA), the "subset" name, the "version" and the "representation". The OP scene manager could still find all of the loader nodes and list the loaded versions, so you can manage versions from the scene manager, but you would also be able to change the version directly from the version dropdown on the node's parameter interface.
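
Again, just to illustrate the idea, the version callback on that loader could be as thin as this (`resolve_representation_path` is a stand-in for whatever OP/Ayon query ends up resolving a published representation to a path on disk; the parm and node names are made up):

```python
def resolve_representation_path(asset, subset, version, representation):
    """Placeholder for the real OP/Ayon query that returns the published path."""
    raise NotImplementedError("swap in the real database query here")


def on_version_changed(loader_hda):
    """Parm callback: resolve the selected version and feed it to the File SOP."""
    path = resolve_representation_path(
        asset=loader_hda.evalParm("asset"),
        subset=loader_hda.evalParm("subset"),
        version=loader_hda.evalParm("version"),
        representation=loader_hda.evalParm("representation"),
    )
    # Internally the HDA just wraps a File SOP (or whatever node the family needs).
    loader_hda.node("file_import").parm("file").set(path)
```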


Once you have that barebones layer, you can then start assembling those low-level building blocks to streamline workflows, run validations, set naming conventions for how and where data is written, and reduce user error (example nodes: "Render Publisher", "Lookdev Publisher", "Geometry Publisher", "Groom Loader"…), but the barebones layer should be the first one OP provides. Studios can then choose which level of integration they want to adopt: just the barebones nodes wrapped in their own custom ones, or OP's higher-level utility nodes as well (and the community could very easily contribute new ones).

At the end of the day, this boils down to the same problem I have been raising for a while: I think the OpenPype API fails to provide the basic building blocks in an elegant way that developers can extend and build on without writing a lot of boilerplate code and squeezing everything into the pyblish plugin mechanism. Look at the ftrack or ShotGrid Python APIs, for example, and how simple they are to build on top of. It's irrelevant whether you are using the API on the farm or locally, or what kind of family you are writing; the API is the same, it's quite intuitive, and it doesn't force you to run any validations or extractors before you can publish data. You can query any type of entity published in their databases and apply very simple functions to those entities: (1) you can create/query any entity and very simply add new child entities to it (e.g., components/representations on a version), (2) you can add new metadata or modify attributes on any entity, (3) you can set dependencies between entities… all in a very intuitive, object-oriented fashion. On top of that, the ftrack integrations or ShotGrid Toolkit use those APIs to provide UIs, utility tools and other things that streamline workflows, but the core API stays simple and flexible.
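
To make it concrete, this is roughly what I mean with the ftrack API (written from memory, so the exact calls may differ slightly): creating a version and attaching a component to it in a handful of lines, with no mandatory validation or extraction step in between:

```python
import ftrack_api

# Credentials are picked up from the environment (FTRACK_SERVER, FTRACK_API_USER, ...).
session = ftrack_api.Session()

task = session.query('Task where name is "lookdev"').first()
asset = session.query('Asset where name is "hero_tree"').first()

version = session.create('AssetVersion', {
    'asset': asset,
    'task': task,
    'comment': 'Published straight from a script',
})
# Attach the actual file on disk as a component of the new version.
version.create_component(
    '/path/to/hero_tree_v003.abc',
    data={'name': 'alembic'},
    location=session.pick_location(),
)
session.commit()
```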

Now try to map those same features to how you would do them in OP. There's a decent module for querying entities and showing the data that's been published, but when it comes to creating new entities or manipulating existing ones, I think it fails to provide that in a simple manner, which makes it very rigid and hard to build flexible workflows on. The pyblish plugin framework to collect, validate, extract and publish is very useful in a lot of scenarios, but the lower-level API backend those plugins are built on should be easy to use and should let you run the "publish" directly without all of the other steps. With a few lines of code I should be able to publish to OP from any tool (e.g., publish a render by exposing a simple action in Deadline, or directly from RV after reading a render sequence).
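
To be explicit about what I'm asking for, this is the shape of the call I'd like to be able to make from Deadline, RV or anywhere else. None of this exists today; the module and function names are purely hypothetical:

```python
from openpype_thin_api import publish_version  # hypothetical module, does not exist

# A single call that registers an already-extracted output in the database.
publish_version(
    project="my_project",
    asset="sh010",
    subset="renderMain",
    family="render",
    representations=[
        {
            "name": "exr",
            "files": "/renders/sh010/renderMain.%04d.exr",  # example path
            "frame_range": (1001, 1096),
        },
    ],
    comment="Published from a Deadline right-click action",
)
```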

For example, look at the TrayPublisher and this discussion we had about it here https://github.com/ynput/OpenPype/pull/5195#issuecomment-1612100673. You'd expect that tool to be one of the thinnest layers of interaction with the OP database, just a utility UI on top of the core OP functions: given some existing data and a few inputs, it runs a publish. With that assumption, you'd expect to be able to reuse most of its internal code to run the same thing on the farm, or to use the same APIs to do the same publishes elsewhere… but the reality could not be further from the truth. There's so much code and validation specific to the TrayPublisher that most of it is useless outside of it, and it takes quite a bit of time to understand how it's all put together. It's a great tool for simple one-off publishes, but it's very cumbersome and manual if you want to do multiple publishes at the same time or call it programmatically.