Batch publishing latest workfiles

I’ve seen this question come up from time to time in the form of “Is there any way to batch publish workfiles?”, where the request is, for example, to quickly publish the latest animation from the animation tasks of shots 1 through 20 without opening each shot manually and publishing it individually.

Note: This is a different issue than batch ingesting editorial data. The Tray Publisher already allows that, e.g. ingesting an EDL, XML or something similar for animatics or edits.

In short, this topic is intended to be about how to publish multiple workfiles in rapid succession.

The idea is to select a bunch of shots (1-20) and their “animation” tasks, and queue them so the latest workfiles get published through the relevant host, like Maya.

Prototype needs

At the core it’s just a sequence of:

  1. Open scene
  2. (Optional) Run a custom pre-export script to allow custom cleanup, e.g. update all loaded containers
  3. (Optional) Select the instances (or the types of instances like a filter) to publish, e.g. cameras only
  4. (Optional) Potentially tweak settings on these instances ← could be part of custom pre-export
  5. (Optional) Allow disabling/enabling certain validations
  6. (Optional) Allow automatically triggering a certain repair action of a validator if it fails.
  7. Trigger publish

Which (prior to the new publisher), without the optional steps, would inside a DCC boil down to just opening the file and publishing, like this:

from openpype.pipeline import registered_host
import pyblish.util

host = registered_host()

# Open each workfile in the currently registered host and publish it
for filepath in filepaths:
    host.open_file(filepath)
    pyblish.util.publish()

Note that this doesn’t report whether the publish succeeded or not. There is a remote_publish implementation in OpenPype which does, but including it would clutter the pseudocode example here.
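
For completeness, a minimal sketch of what per-workfile reporting could look like using pyblish.util.publish_iter (which yields a result dictionary per processed plugin), reusing host and filepaths from the snippet above:

errors_per_file = {}
for filepath in filepaths:
    host.open_file(filepath)

    # Collect any plugin errors for this workfile
    errors = []
    for result in pyblish.util.publish_iter():
        if result["error"]:
            errors.append("{}: {}".format(
                result["plugin"].__name__, result["error"]))

    errors_per_file[filepath] = errors

for filepath, errors in errors_per_file.items():
    print("{}: {}".format(filepath, "FAILED" if errors else "OK"))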

Publish through deadline

Taking the idea laid out above, I also wanted to allow that queue of files to be submitted to Deadline, which would then generate a Deadline job per OpenPype workfile to process with the correct host. That way multiple shots could also be published on the farm at the same time by different machines.
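
To sketch what a per-workfile submission to the Deadline Web Service could look like: the payload shape (JobInfo/PluginInfo posted to /api/jobs) follows how OpenPype's own Deadline submissions work, but the webservice address, the CommandLine plugin keys and the publish-workfile CLI entry point are assumptions to adapt to your own setup:

import requests

DEADLINE_URL = "http://localhost:8082"  # assumed Deadline Web Service address

def submit_workfile_publish(project, asset, task, filepath, priority=50):
    payload = {
        "JobInfo": {
            "Plugin": "CommandLine",
            "Name": "Publish {} / {} / {}".format(project, asset, task),
            "BatchName": "Batch publish {}".format(project),
            "Priority": priority,
        },
        "PluginInfo": {
            # Assumed: workers have an openpype_console launcher and a
            # (hypothetical) publish-workfile CLI entry point available
            "Executable": "openpype_console",
            "Arguments": (
                'publish-workfile --project {} --asset {} --task {} '
                '--filepath "{}"'.format(project, asset, task, filepath)
            ),
        },
        "AuxFiles": [],
    }
    response = requests.post("{}/api/jobs".format(DEADLINE_URL), json=payload)
    response.raise_for_status()
    return response.json()  # contains the id of the newly created job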

Publish only shots which have newer workfiles

I’ve also written some logic on my end to get all publishes with their timestamp + source workfile path, so I can check whether the shots have been updated since that publish time without a new publish. That way I can basically look up “these shots have newer workfile versions but no publishes”. I’d then want to use that information to batch publish the shots 1-20 for which that is the case, to avoid making redundant publishes. :slight_smile:
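
As a minimal sketch of that comparison (get_last_publish_time and latest_workfiles are hypothetical stand-ins for whatever returns the latest publish timestamp per asset/task and the newest workfile on disk):

import os
from datetime import datetime

def needs_publish(workfile_path, asset_name, task_name):
    """Return True when the workfile was saved after the last publish."""
    last_publish_time = get_last_publish_time(asset_name, task_name)  # hypothetical
    if last_publish_time is None:
        # Never published before
        return True
    workfile_mtime = datetime.fromtimestamp(os.path.getmtime(workfile_path))
    return workfile_mtime > last_publish_time

# Filter shots 1-20 down to the ones that actually need a (re)publish
outdated = [
    (shot, workfile) for shot, workfile in latest_workfiles.items()
    if needs_publish(workfile, shot, "animation")
]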

Publish downstream

Having this implemented would then also easily allow downstream publish jobs like:

  • Animation publish is finished → then queue lighting shot with updated containers to get published as well.

It’d just mean the same steps as above but for the “lighting” task for example.
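
A hypothetical sketch of such chaining, where publish_task and latest_workfile are stand-ins for whatever batch publish entry point and workfile lookup this ends up using:

def publish_shot_downstream(project_name, shot):
    # Publish the animation task first
    ok = publish_task(project_name, shot, task_name="animation",
                      filepath=latest_workfile(project_name, shot, "animation"))
    if not ok:
        return

    # Then queue the lighting task; a pre-publish script updates its loaded
    # containers so it picks up the fresh animation publish
    publish_task(project_name, shot, task_name="lighting",
                 filepath=latest_workfile(project_name, shot, "lighting"),
                 pre_publish_scripts=["/pre_update_all_containers.py"])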

Publish current shot on farm

Somewhat related, but not entirely the same, is an artist wanting to publish their currently opened workfile without waiting for the publish to finish. Being able to just “submit to Deadline” is then basically what would be needed. I’d recommend that the artist still runs all publish validations locally - and if the scene is valid, a job gets submitted to Deadline which opens the file with the same application + tools environment, triggers the publish of the selected instances and reports any issues. So this is related to the Publish through Deadline chapter above but not entirely the same.
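
A rough sketch of that flow with the Pyblish API, assuming a submit_publish_job_to_deadline helper that wraps the actual farm submission:

import pyblish.util

# Validate locally first
context = pyblish.util.collect()
context = pyblish.util.validate(context)

errors = [
    result for result in context.data.get("results", [])
    if result["error"]
]
if errors:
    for result in errors:
        print("Validation failed: {}: {}".format(
            result["plugin"].__name__, result["error"]))
else:
    # Scene is valid; hand the actual publish off to the farm
    submit_publish_job_to_deadline(workfile=context.data.get("currentFile"))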

Prototype

Taking all of this together, what’s really needed is just a simple entry point to publish a scene without a GUI, one which can perform the 7 steps mentioned under Prototype needs.


Ideas / Remarks?

I’m wondering whether this’d be of interest? Why / why not? What things am I missing in what others might need? And especially if anyone has any design ideas on how this could best be presented to the artist to easily batch submit things like these. Any example UI designs, etc?

Might be outside of the scope of this, but being able to build a templated workfile if no scene is found would allow quickly setting up lighting or FX workfiles.

Thanks for this proposal @BigRoy ! I can definitely see this being useful for us down the line, and I’m currently also investigating the space of “manual” headless publishing toolsets for running transcode and extract review plugins on demand.

The way I have seen what you propose implemented in the past isn’t through “workfiles” but through the subset/product (I will call it “products” from now on following the great news of Renaming Subset to Product) dependency links and the data contained within. So for example, on an “animated cache” product, you hold the (input) dependencies on other products, such as “rig” and “anim curves” products. With that information, you could set up a framework where, once the (input) dependencies get a new version published (or on any event, like a specific status being set), it automatically generates the products that depend on them. In an ideal pipeline you don’t even need a specific workfile to run the publish from, as the “input dependencies” products should describe all the dependencies required to re-create that product, so you should be able to load them all in the compatible host and run the code that exports/publishes a new version of the product (or potentially use templated workfiles that you set up as extra input processes, as @tokestuartjepsen suggested). However, for some products there might be other input dependencies that aren’t tracked in the AMS and only exist in the workfile that was used to create them… in which case, we should be able to grab the “workfile” dependency and use that instead, like you are proposing.

The idea behind this is that you can define processes (we called them “autogens”) between asset dependencies and have more granular control over what gets automatically generated (although, as I said, you can make it a push mechanism instead and set more complex flow filters), and not just “workfiles”, as those can be quite generic and it could be trickier to know all the products defined inside and which workfile to use (with “products” you are confident that’s your end goal). On the other hand, the asset dependencies would auto-propagate through your whole dependency graph… so if the “anim cache” product is an input dependency of another product (e.g., “render”), you could set it up so that once it gets automatically updated, it triggers the next autogeneration and so on (e.g., an update of your animation curves could end up automatically giving you a comped image with the animated cache).

All of these processes would be triggered by a dedicated machine that basically just tracks all these dependency updates and delegates the jobs to the farm (or runs them locally).
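
To make that a little more concrete, a very rough sketch of what an “autogen” rule and its trigger could look like (all the names here are made up for illustration):

# Map an input product family to the downstream product families to regenerate
AUTOGEN_RULES = {
    "animation": ["pointcache", "render"],
    "pointcache": ["render"],
}

def on_version_published(event):
    """React to a 'new version published' event coming from the AMS."""
    product = event["product"]      # e.g. {"name": "animationMain", "family": "animation"}
    new_version = event["version"]

    for downstream_family in AUTOGEN_RULES.get(product["family"], []):
        # Find products that list this product as an input dependency
        for downstream in find_products_with_input_link(product, downstream_family):
            # Delegate the regeneration to the farm (or run it locally)
            farm.submit(autogen_job(downstream, updated_input=new_version))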

Does OpenPype (or perhaps only supported in Ayon) have the notion of input vs output link dependencies on the products?

Hi,
Thanks for sharing this need! The funny thing is I recently made a rough prototype of such a system as a conformation tool for Blender + Kitsu in our studio addon. It can be configured and triggered using a CLI: you can provide a list of scripts you want to execute on files (made a PR to share this feature), filter the families to publish, optionally pick an older or a specific workfile version (made a PR to share this feature), validate or publish, run in the background or keep the GUI open, and process task workfiles by their Kitsu entity types configured in the project (Character, Environment, Shot…) or by task types (Layout, Animation) for a dedicated episode, or provide a list of single entities… That’s pretty much all.
A command could look like: .poetry/bin/poetry run python start.py module normaal_addon publish-workfile -p WoollyWoolly -e ClaudineSalon_006_A_JOUR -t Fabrication -f setdress -P reset_containers -P restore_lib_filepaths -P confo_remove_invalid_instances_in_env -P confo_world_in_env -P confo_container_not_found -P confo_curveknit_versioned -P confo_review_instance -P confo_modifiers -c "Correction workfile et setdress, world dans setdress (auto confo)" --login my.mail@blabla.fr --password **** --keep-open --publish
NB: after a -P you can provide either a full path to a script or only the name of one of those in a referenced directory coded into the addon. This allows picking up existing scripts while developing yours locally before sharing it. The point is to make it easy for TDs to script without asking the core pipe team to do it, and they can submit it later for review if the process is likely to be frequently needed.

I wanted to clean it up and share it in pieces, but it always takes too long to get around to.
Hope this feedback helps, I’d be glad to discuss it or share more quick and dirty code with you.

@Tilix4 Thanks - that’s great to hear you’ve been working on this. Is your particular module + script available in a public repo?

I understand the need for setting the workfile version for an asset, but I was thinking of an interface where the core API just takes the direct filepath, since that’s more explicit, and your specific usage can be built on top of it.

Pseudocode legacy publisher

As pseudocode, something like this:

def publish(
    project_name, 
    asset_name, 
    task_name,
    filepath,
    comment=None,
    pre_publish_scripts=None,
    post_publish_scripts=None
):
    """Publish instances from workfile.
    
    Arguments:
        project_name (str): Project name
        asset_name (str): Asset name
        task_name (str): Task name
        filepath (str): Absolute filepath to publish from
        comment (str, optional): Publish comment
        pre_publish_scripts (list, optional): List of absolute
            python scripts to trigger from within the host before
            the publishing starts.
        post_publish_scripts (list, optional): List of absolute
            python scripts to trigger from within the host after
            the publishing finishes.    
        
    """
    
    # launch app in context
    # pseudocode of course since this should run outside the app
    # but is just to show that the app requires launching in context
    launch_app(project_name, asset_name, task_name)
    
    # open file
    host = registered_host()
    host.open_file(filepath)
    
    for script in pre_publish_scripts or []:
        run_script(script)
    
    # Publish
    error_format = "Failed {plugin.__name__}: {error} -- {error.traceback}" 
    for result in pyblish.util.publish_iter():
        for record in result["records"]:
            log.info("{}: {}".format(result["plugin"].label, record.msg))

        # Exit as soon as any error occurs.
        if result["error"]:
            # TODO: For new style publisher we only want to DIRECTLY stop
            #  on any error if it's not a Validation error, otherwise we'd want
            #  to report all validation errors
            error_message = error_format.format(**result)
            log.error(error_message)
            break
    
    for script in post_publish_scripts or []:
        run_script(script)

With an example usage:

# Example usage
publish(project_name="project",
        asset_name="asset",
        task_name="task",
        filepath="/abs/path/to/file.blend",
        pre_publish_scripts=["/pre_update_all_container.py", "/pre_remove_invalid_instances.py"],
        post_publish_scripts=None)     

And of course that could be exposed to a command line interface as well.
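
For example, a thin click wrapper over the publish() function above could look like this (click is just what I would reach for here; any argument parser would do):

import click

@click.command()
@click.option("--project", "project_name", required=True)
@click.option("--asset", "asset_name", required=True)
@click.option("--task", "task_name", required=True)
@click.option("--filepath", required=True)
@click.option("--comment", default=None)
@click.option("--pre-script", "pre_publish_scripts", multiple=True)
@click.option("--post-script", "post_publish_scripts", multiple=True)
def publish_cli(project_name, asset_name, task_name, filepath, comment,
                pre_publish_scripts, post_publish_scripts):
    publish(project_name, asset_name, task_name, filepath,
            comment=comment,
            pre_publish_scripts=list(pre_publish_scripts),
            post_publish_scripts=list(post_publish_scripts))

if __name__ == "__main__":
    publish_cli()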

Not as easy with the new publisher?

The publishing part is relatively trivial with the Pyblish API, but I haven’t been able to do the same with the new publisher. I got as far as this pseudocode to support both:

def publish(...):
    ...

    error_format = "Failed {plugin.__name__}: {error} -- {error.traceback}"

    host = registered_host()
    is_new_publisher = isinstance(host, IPublishHost)
    if is_new_publisher:
        # New publisher host
        # TODO: Test this functionality
        from openpype.pipeline.create import CreateContext
        create_context = CreateContext(host)

        # TODO: Allow to tweak any values on existing instances
        # if tweak_instances_fn:
        #     tweak_instances_fn(create_context.instances)
        #     create_context.save_changes()
        #     create_context.reset()

        pyblish_context = pyblish.api.Context()
        pyblish_context.data["create_context"] = create_context
        pyblish_plugins = create_context.publish_plugins
    else:
        # Legacy publisher host
        pyblish_context = pyblish.api.Context()  # pyblish default behavior
        pyblish_plugins = pyblish.api.discover()  # pyblish default behavior

    # TODO: Allow a validation to occur and potentially allow certain "Actions"
    #   to trigger on Validators (or other plugins?) if they exist

    for result in pyblish.util.publish_iter(
        context=pyblish_context,
        plugins=pyblish_plugins
    ):
        for record in result["records"]:
            log.info("{}: {}".format(result["plugin"].label, record.msg))

        # Exit as soon as any error occurs.
        if result["error"]:
            # TODO: For new style publisher we only want to DIRECTLY stop
            #  on any error if it's not a Validation error, otherwise we'd want
            #  to report all validation errors
            error_message = error_format.format(**result)
            log.error(error_message)
            break

In particular note the TODOs there. I still have to create a dedicated issue so I can maybe poke @illicit to document (and maybe build) the API to trigger publishes with the new publisher without a GUI, preferably in a way that still lets you export the publish JSON report too. But I haven’t been able to find that method as of yet.

Anyway, definitely share your quick and dirty stuff @Tilix4 because I’m hoping to dig more into this tomorrow!

My goal is to basically submit these jobs to Deadline so I can just batch publish e.g. 80 workfiles easily.

Sure thing, here it is:

import os
from pathlib import Path
from subprocess import Popen
from time import time, sleep
from typing import List

import click
import gazu

# NOTE: exact import locations may differ slightly between OpenPype versions
from openpype.client import (
    get_asset_by_name, get_subset_by_name, get_representations,
    get_hero_version_by_subset_id, get_last_version_by_subset_id,
    get_linked_representation_id,
)
from openpype.lib import ApplicationManager
from openpype.lib.local_settings import get_local_site_id
from openpype.modules.base import ModulesManager
from openpype.pipeline.mongodb import AvalonMongoDB
from openpype.modules.kitsu.utils.credentials import (
    validate_credentials,
    set_credentials_envs,
)

def run_scripts_on_task_workfile(
    project_name: str,
    asset_name: str,
    task_name: str,
    python_scripts: List[str],
    families: List[str],
    comment: str,
    script_args: List[str],
    keep_workfile_open: bool = False,
    workfile_version=-1, 
) -> Popen:
    """Launch Blender with the given python scripts and publish the workfile.

    Args:
        project_name (str): The project name.
        asset_name (str): The asset name.
        task_name (str): The task name.
        python_scripts (List[str]): The python scripts paths to run.
        families (List[str]): The families to publish the workfile with.

    Returns:
        Popen: The Blender process.
    """
    return ApplicationManager().launch(
        app_name="blender/3-3",
        app_args=["-b"] if not keep_workfile_open else [],
        project_name=project_name,
        asset_name=asset_name,
        task_name=task_name,
        python_scripts=[*python_scripts,
                Path(publish_blender_workfile.__file__).as_posix(),
        ],
        script_args=[*script_args, "--families", *families, "--comment", comment],
        workfile_version=workfile_version,
    )

def download_subset(
    project_name, asset_name, subset_name, ext="blend", hero=False
) -> dict:
    """Download the representation of the subset last version on current site.

    Args:
        project_name (str): The project name.
        asset_name (str): The asset name.
        subset_name (str): The subset name.
        ext (str, optional): The representation extension. Defaults to "blend".
        hero (bool, optional): Use hero version.

    Returns:
        dict: The subset representation.
    """
    asset = get_asset_by_name(project_name, asset_name, fields=["_id"])
    if not asset:
        return

    subset = get_subset_by_name(
        project_name,
        subset_name,
        asset["_id"],
        fields=["_id"],
    )
    if not subset:
        return

    version = None
    if hero:
        version = get_hero_version_by_subset_id(
            project_name,
            subset["_id"],
            fields=["_id"],
        )
    if not version:
        version = get_last_version_by_subset_id(
            project_name,
            subset["_id"],
            fields=["_id"],
        )
    if not version:
        return

    representation = next(
        get_representations(
            project_name,
            version_ids=[version["_id"]],
            context_filters={"ext": [ext]},
        ),
        None,
    )
    if not representation:
        return

    # Get sync server
    modules_manager = ModulesManager()
    sync_server = modules_manager.get("sync_server")
    local_site_id = get_local_site_id()

    # Add linked representations
    representation_ids = {representation["_id"]}
    representation_ids.update(
        get_linked_representation_id(
            project_name, repre_id=representation["_id"]
        )
    )

    # Add local site to representations
    for repre_id in representation_ids:
        # Check if representation is already on site
        if not sync_server.is_representation_on_site(
            project_name, repre_id, local_site_id
        ):
            sync_server.add_site(
                project_name,
                repre_id,
                local_site_id,
                priority=99,
                force=True,
            )

    return representation


def wait_for_download(project_name, representations: List[dict]):
    """Wait for download of representations.

    Args:
        project_name (str): Project name.
        representations (List[dict]): List of representations to wait for.
    """
    # Get sync server
    modules_manager = ModulesManager()
    sync_server = modules_manager.get("sync_server")

    # Reset timer
    sync_server.reset_timer()

    # Wait for download
    local_site_id = get_local_site_id()
    start = time()  # 5 minutes timeout
    while (
        not all(
            sync_server.is_representation_on_site(
                project_name, r["_id"], local_site_id
            )
            for r in representations
            if r
        )
        and time() - start < 300
    ):
        sleep(5)

@cli_main.command(
    context_settings=dict(
        ignore_unknown_options=True,
    )
)
@click.option(
    "--login",
    prompt=True,
    help="Kitsu login",
)
@click.option(
    "--password",
    prompt=True,
    hide_input=True,
    help="Kitsu password",
)
@click.option(
    "-p", "--project-name", prompt=True, help="Project name", required=True
)
@click.option(
    "-e",
    "--entity-name",
    "entities_list",
    help="Entity name",
    multiple=True,
    default=[],
)
@click.option(
    "-A",
    "--asset-type",
    "asset_types",
    help=(
        "Asset type name to publish all assets from this type"
        " (case sensitive, usually Capitalized)"
    ),
    multiple=True,
    default=[],
)
@click.option(
    "-ep", "--episode", "episodes", help=(
        "Episode name to process all shots from this episode"
    ),
    multiple=True,
    default=[],
)
@click.option(
    "-l",
    "--assets-list",
    "entities_list_path",
    help=("Path to a text file containing a list of assets to publish"),
)
@click.option(
    "-s",
    "--skip-assets-list",
    "entities_to_skip_list_path",
    help=("Path to a text file containing a list of assets to skip"),
)
@click.option(
    "-f",
    "--family",
    "families",
    help="Family to publish",
    multiple=True,
    required=True,
)
@click.option(
    "-v",
    "--version",
    "workfile_version",
    help="Workfile version",
    default=-1,
)
@click.option(
    "-P",
    "--python-script",
    "python_scripts",
    help="Script to execute before publish",
    multiple=True,
)
@click.option(
    "-t",
    "--task-type",
    "task_types",
    help="Task type name (case sensitive, usually Capitalized)",
    multiple=True,
    required=True,
)
@click.option(
    "-c", "--comment", help="Comment to add to the publish", required=True
)
@click.option("--no-download", help="TODO", default=False, is_flag=True)
@click.option("--keep-open", help="TODO", default=False, is_flag=True)
@click.argument("unknown_args", nargs=-1, type=click.UNPROCESSED)
def publish_workfile(
    login: str,
    password: str,
    project_name: str,
    entities_list: List[str],
    asset_types: List[str],
    episodes: List[str],
    entities_list_path: List[str],
    entities_to_skip_list_path: List[str],
    workfile_version: int,
    families: List[str],
    python_scripts: List[str],
    task_types: List[str],
    comment: str,
    no_download: bool,
    keep_open: bool,
    unknown_args,
):
    """Run the given python scripts on the matching task workfiles and publish them.

    Entities can be given directly, by asset type, by episode, or from a list
    file; see the command line options above.
    """
    # Cast to list
    entities_list = list(entities_list)
    python_scripts = list(python_scripts)

    # Check asset or asset type
    if not entities_list and not asset_types and not episodes and not entities_list_path:
        raise ValueError(
            "You must provide either an asset name or an asset type "
            "or an episode name or a path to an assets list file."
        )

    # Validate python scripts paths
    for i, script_path in enumerate(python_scripts):
        if not Path(script_path).is_file():
            confo_filepath = (
                Path(__file__)
                .parent.joinpath("scripts", "conformation", script_path)
                .with_suffix(".py")
            )
            if confo_filepath.is_file():
                python_scripts[i] = confo_filepath.as_posix()
            else:
                raise ValueError(f"Invalid filepath: {script_path}")

    set_credentials_envs(login, password)
    validate_credentials(login, password)

    # Start sync server
    active_site = get_local_site_id()
    os.environ["OPENPYPE_LOCAL_ID"] = active_site
    manager = ModulesManager()
    sync_server_module = manager.modules_by_name["sync_server"]
    sync_server_module.server_init()
    sync_server_module.server_start()

    # Get all assets to skip from a list
    entities_to_skip = set()
    if entities_to_skip_list_path:
        entities_to_skip_list_path = Path(entities_to_skip_list_path)
        if not entities_to_skip_list_path.is_file():
            raise ValueError(f"Invalid filepath: {entities_to_skip_list_path}")

        # Get assets from file
        with Path(entities_to_skip_list_path).open() as f:
            entities_to_skip.update(
                asset_name.strip()
                for asset_name in f.readlines()
                if not asset_name.startswith("#")
            )

    # Check single asset is not in skip list
    if len(entities_list) == 1 and entities_list[0] in entities_to_skip:
        raise ValueError(f"Single asset '{entities_list[0]}' is in skip list.")

    # Get all assets of an asset type
    project = gazu.project.get_project_by_name(project_name)
    for asset_type_name in asset_types:
        # Get asset type
        asset_type = gazu.asset.get_asset_type_by_name(asset_type_name)
        assets = gazu.asset.all_assets_for_project_and_type(
            project, asset_type
        )

        for asset in assets:
            # Skip-list filtering happens later via the set difference
            entities_list.append(asset["name"])

    # Get all assets from a list
    if entities_list_path:
        entities_list_path = Path(entities_list_path)
        if not entities_list_path.is_file():
            raise ValueError(f"Invalid filepath: {entities_list_path}")

        # Get assets from file
        with Path(entities_list_path).open() as f:
            entities_list.extend(
                asset_name.strip()
                for asset_name in f.readlines()
                if not asset_name.startswith("#")
                #and asset_name.strip() not in entities_to_skip
            )

    # Get all shots of an episode
    dbcon = AvalonMongoDB()
    dbcon.Session["AVALON_PROJECT"] = project_name
    for episode_name in episodes:
        # Get episode
        episode = gazu.shot.get_episode_by_name(project, episode_name)
        shots = gazu.shot.all_shots_for_episode(episode)

        for shot in shots:
            # Get shot from database by matching zou id
            # TODO not optimized, should find all in one query
            shot_doc = dbcon.find_one(
                {
                    "type": "asset",
                    "data.zou.id": shot["id"],
                }
            )

            # Check asset is not in skip list
            #if not {shot["name"], shot_doc["name"]} & set(entities_to_skip):
            entities_list.append(shot_doc["name"])

    representations = []

    # Run for listed assets
    for entity_name in sorted(set(entities_list) - entities_to_skip):
        for task_name in task_types:
            if not no_download:
                repre = download_subset(
                    project_name, entity_name, f"workfile{task_name}"
                )
            else:
                repre = None
            representations.append((repre, entity_name, task_name))

    # Wait for all downloads and run scripts
    if not representations:
        raise ValueError(
            f"No representations found in '{project_name}' for assets: {entities_list}"
        )

    for repre, entity_name, task_name in representations:
        if repre:
            wait_for_download(project_name, [repre])
        run_scripts_on_task_workfile(
            project_name,
            entity_name,
            task_name,
            python_scripts,
            families,
            comment,
            unknown_args,
            keep_workfile_open=keep_open,
            workfile_version=workfile_version
        )

    # Log out
    gazu.log_out()

Required features

In exchange, could you please try to make it as Deadline-independent as possible? A truly clean API I could reuse for other hosts? I’m sure that’s your goal, and I wish for it from the bottom of my heart.

Thanks so much!

In exchange, could you please try to make it as Deadline-independent as possible? A truly clean API I could reuse for other hosts? I’m sure that’s your goal, and I wish for it from the bottom of my heart.

That’s the goal. In the end, all I want is for Deadline to just run the CLI command with the same result.

Especially because I also want something I can run quickly as a one-off from separate scripts, and of course it’s much easier to test if I don’t need Deadline in the middle to run it.


I’ll play around a bit today and will see how far I can get.

We have several publish CLI commands for headless publishing, and all of them are customized. Even in this topic, multiple types of publishing with different workflows in mind were mentioned, and it’s really hard to meet all requirements in a single command. The current publish command, which is used by Deadline, should probably be moved to the Deadline addon to avoid confusion.

For example, if you want to launch a DCC and run publishing, the publishing logic cannot live in the OpenPype publish function but in the host integration which takes care of the DCC (the publishing must run inside the DCC). If the host integration is more complicated, e.g. launched with a custom process for communication (e.g. Photoshop, After Effects, Harmony, TVPaint, …), it needs a completely different workflow. My point being: we should not try to solve all the problems with one function. It is just not possible.

We should prepare a function to run headless publishing inside the DCC, but that’s where it ends.

I’ve thought about this and I think the best way would be if a certain application or host addon could implement a run_script interface, where all it does is:

  • Run the host (within OP environment, etc.)
  • Run script with Python
  • Close the host

That way at least the addon/integration defines HOW scripts should run in the host and we have a matching entry point for different hosts allowing the range of possibilities needed. Right?

# Note that of course these should also take in the OP environment for project,
# asset, task, etc. but for simplicity I've omitted that from the interface

# blender implementation
def run_script(script_path: str, headless: bool=True) -> Popen:
    app_args = ["-b"] if headless else []  # "-b" / "--background" runs Blender without UI
    app_args.extend(["-P", script_path])
    return launch_app("blender", app_args=app_args)

# maya implementation
def run_script(script_path: str, headless: bool=True) -> Popen:

    executable = "maya"
    if headless:
        # ensure no gui is visible
        if os == "windows":  # pseudocode platform check
            executable = "mayabatch"
        else:
            executable = "maya -batch"

    app_args = ["-script", script_path]
    return launch_app(executable, app_args=app_args)

# etc..

That way for each host we can define the required functionality to run a script within the host.
Then the default publish in a host script would be something like:

ensure_host_installed()  # for hosts that do NOT auto-install on launch, e.g. fusion?
host = registered_host()
host.open_file(path)
publish()

By having the entry point be ‘any script’ it means we can also use it for other things than publishing.

I initially considered building the running of scripts into the host integrations. For example, have “host install” methods check for a particular env var and, if it’s found, continue by running that script directly. That would work for any host that already “installs” on launch, but it does not allow e.g. switching an application to a no-GUI mode, since it triggers after launch. For Maya, for example, we need to run a different executable altogether on Windows for headless mode. There are also some hosts that do not auto-install on launch, like Fusion.


We have several publish CLI commands for headless publishing, and all of them are customized.

@iLLiCiT For the sake of the conversation, could you link the different CLI publishing entry points currently available in OpenPype in the repository?

Maya should have the logic inside the Maya addon. You have to define different startup logic for Maya to do different things on start (e.g. to start headless publishing), and only the Maya addon can know what exactly is needed. I would not try to do it “universally” for all DCCs; it could work for one or two in a similar way, but then you’ll hit limits with the others (there are more than 10 hosts).

As far as I know, they’re not in a public repository. They are custom implementations for clients, for their specific workflows. Different sources, different cases.

Yes, that’s what I meant. So the Maya addon implements a run_script interface, the Blender addon does too, etc. So that I can just do: openpype addon maya run_script my_script.py, or whatever CLI makes sense to run an addon command.

So that I can just do: openpype addon maya run_script my_script.py or whatever CLI makes sense to run an addon command.

In that case this approach is going in the right direction.
You reminded me that I wanted to add an addon cli subcommand.

I currently have a working prototype that, with some quirks here and there, works with Maya, Blender, Fusion and Houdini.

1. run-script

I’ve set up an interface where I can trigger a run-script against different applications, like:

.poetry\bin\poetry run python start.py module launch_workfile_scripts run-script -project test_local -asset asset1 -task modeling -app houdini/19-5-435-Core -path "E:\hello_world.py"

Console output:

>>> run disk mapping command ...
>>> Logging to server is turned ON
*** Cannot get OpenPype path from database.
>>> loading environments ...
  - for Avalon ...
  - global OpenPype ...
  - for modules ...
*** OpenPype [ 3.15.10-nightly.1 ] ----------------------------------------------------------------------------------------------------------------------------------------------
>>> Using OpenPype from [  E:\openpype\OpenPype  ]
... OpenPype variant:      [  production  ]
... Running OpenPype from: [  E:\openpype\OpenPype  ]
... Using mongodb:         [  localhost  ]
... Logging to MongoDB:    [  localhost  ]
...   - port:              [  27017  ]
...   - database:          [  openpype  ]
...   - collection:        [  logs  ]
...   - user:              [  <N/A>  ]
Searching application: houdini/19-5-435-Core
Found application: houdini/19-5-435-Core
>>> [  Opening of last workfile was disabled by user  ]
>>> [  It is set to not start last workfile on start.  ]
>>> [  Launching "houdini/19-5-435-Core" with args (2): ['C:\\Program Files\\Side Effects Software\\Houdini 19.5.435\\bin\\hython', 'E:\\hello_world.py']  ]
*** WRN: >>> { PostStartTimerHook }: [  Couldn't find webserver url  ]
houdini/19-5-435-Core: Installing OpenPype ...
houdini/19-5-435-Core: Setting scene FPS to 25
houdini/19-5-435-Core: Hello World
Application shut down.

2. publish

Then I’ve built a simple script which just opens a file, triggers a publish, etc. Note that each host uses that exact same publish_script.py file.

Here are example shell commands I used and tested:

:: blender 3.51
.poetry\bin\poetry run python start.py module launch_workfile_scripts publish -project test_local -asset asset1 -task modeling -app blender/3-51 -path "C:\projects\test_local\asset1\work\modeling\test_local_asset1_modeling_v002.blend"

:: maya 2023
.poetry\bin\poetry run python start.py module launch_workfile_scripts publish -project test_local -asset asset1 -task modeling -app maya/2023 -path "C:\projects\test_local\asset1\work\modeling\test_local_asset1_modeling_v005.ma"

:: fusion 18
.poetry\bin\poetry run python start.py module launch_workfile_scripts publish -project test_local -asset asset1 -task modeling -app fusion/18 -path "C:\projects\test_local\asset1\work\modeling\test_local_asset1_modeling_v015.comp"

:: houdini 19.5.435 (core)
.poetry\bin\poetry run python start.py module launch_workfile_scripts publish -project test_local -asset asset1 -task modeling -app houdini/19-5-435-Core -path "C:\projects\test_local\asset1\work\modeling\test_local_asset1_modeling_v004.hip"

The application’s output gets logged to stdout. (stderr redirected to stdout just for easy access)

Quirks:

  • Biggest quirk is mostly with Fusion:
    • fusion.exe can only run in GUI mode and doesn’t auto close afterwards - it’s solvable but pretty Fusion specific. I haven’t tried installing the Fusion Render Node and triggering it through that instead.
    • running the script in Fusion doesn’t automatically install the host (install_host(FusionHost())), so for now I’ve had to force-trigger that in the running script to ensure it works as intended. This can easily be implemented in Fusion’s run_script entry point to resolve that.
  • Had to tweak the Blender pre-launch hooks to not force the application subprocess kwargs for stdout and stderr to None when running in batch mode. Easy fix.
  • On Windows the executable for maya gets swapped to mayabatch to trigger the script so it doesn’t show a GUI. It is assumed for the app launch that the mayabatch lives next to the maya.exe - this could be tricky if a studio points the app launches to e.g. a shortcut or .bat file instead where mayabatch does not live next to it.
  • For Houdini the houdini executable gets swapped with hython to trigger the scripts without GUI. Same assumption is made that hython lives next to the houdini executable the app is set to launch with.
  • Maya review publishes will not work in this mode because playblasting does not work without a GUI. (Houdini reviews did seem to work, however.)
  • I’ve refrained from passing the publish script’s arguments directly to the app as command line arguments, and instead opted to pass environment variables on the app launch for the scripts to pick up (see the sketch after this list). Some applications were picky about which command line flags you could still pass to the host application without it actually influencing the app.
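
Roughly, that environment variable hand-off could look like this (the variable names and the pre-launch hook placement are just illustrative, not what the prototype literally uses):

# On the launcher side, e.g. in an OpenPype pre-launch hook where
# launch_context.env is the environment the app gets launched with
launch_context.env.update({
    "RUN_SCRIPT_WORKFILE_PATH": workfile_path,
    "RUN_SCRIPT_PUBLISH_COMMENT": comment or "",
})

# In the host-side publish_script.py, read them back
import os

filepath = os.environ["RUN_SCRIPT_WORKFILE_PATH"]
comment = os.environ.get("RUN_SCRIPT_PUBLISH_COMMENT")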

Notes

  • Blender and Maya still use the old publisher, Houdini and Fusion work with the new publisher. As such, this has been tested with both publishers and seems to work for both in my simple tests.
  • Whenever an error occurs in the publish, the publishing stops immediately and continues on to triggering the publish “post scripts” and shutting down the application. This is unlike the publish GUI, which would for example continue the other validations. Again, this is solvable to match the GUI’s publish process, but it’s not yet implemented.
  • Since the publish script is the same for all hosts and just runs within the Python session for that particular host, I think any feature we add there, like only publishing certain instances of the ones available in the scene or enabling/disabling certain validations, should work for all hosts once implemented.
  • I disable “open last workfile” explicitly to ensure all hosts act the same (some hosts don’t support “auto open file” and others do, but don’t support it together with batch mode). To be consistent, I’m explicitly only running the host and running a script against it. It’s this script itself which triggers host.open_file.
  • I’ve also added an entry point for nuke, nukestudio and nukex, which should work (if you can trigger a script for them with the -t command line flag). I just don’t have Nuke to test that out myself, but if that’s the right flag, I expect publishing would also work.

Next step

Clean up the code and put it in a repository somewhere so others can play with it. However, I’m still not sure what format to put it in, and it’s likely best offered as a separate addon/module for OpenPype. Or what’s the most sensible path forward? @iLLiCiT @milan @Tilix4 any preference regarding that?

I’ve pushed the prototype into this OpenPype launch scripts addon repository on Github.

@Tilix4 let me know your first thoughts. @fabiaserra maybe you might have some interest here too and have some additional notes.

Currently it allows simple workfile publishing and running custom scripts against Maya, Houdini, Nuke and Blender in headless mode. It also supports Fusion, but that unfortunately forces itself into GUI mode in the way I set it up for this prototype.

I’ve also done some test runs throwing the same commands through Deadline as command line scripts, and those worked fine for publishing on the farm too.
