Nuke - Farm Reviews with Distributed Extract Review Data Mov

Making reviews using Deadline is now a three-step process: the main render, the extract review data mov that makes a ProRes 4444 from the render, and extract review that transcodes the ProRes to the desired output.

The first step can be distributed to many machines, and the third step is often not too slow. The second step, however, runs on a single machine and can easily take much more time than the distributed main render.

Thinking out loud about a way to speed up the process by distributing the middle step across more Deadline workers.

One way would be to use ExtractReviewDataMov to render to a file sequence instead of a ProRes file. Maybe that is already possible? This way, we could distribute the Nuke render across many workers.

Another way would be to render to an intermediate file sequence at the same time as the main render. Maybe even just another (one or more) Write node in the OpenPype render group? This would have another benefit: the worker that renders the main render already has all the script resources in memory, so there would be no need for another job in the Deadline job group.
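
In sketch form, the extra Write node could be wired up like this (a hedged example; the node names, knob values and colorspace are illustrative, not OpenPype's actual wiring):

import nuke

# Hang a second, review-quality Write off the same input as the main render
# Write, so both render in one pass on the same worker.
main_write = nuke.toNode("Write1")  # illustrative name for the render Write
bake_write = nuke.nodes.Write(inputs=[main_write.input(0)])
bake_write["file"].setValue("/proj/shot/review/review.%04d.jpg")
bake_write["file_type"].setValue("jpeg")
bake_write["_jpeg_quality"].setValue(0.8)              # roughly "quality 80"
bake_write["colorspace"].setValue("Output - Rec.709")  # illustrative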

Using oiiotool instead of Nuke looks limiting right now. Color conversion is there, but resizing and effect baking (typically CDL + LUT) are missing.

Rendering to a sequence has issues. Image file sequences do not hold metadata crucial for editorial, like source timecode and reel ID. Those were stored by Nuke in the ExtractReviewDataMov output, if present in the EXRs, so we would need a way to push the source timecode from the EXRs to the reviews. Also, rendering to image sequences should have a configuration, similar to the Nuke imageio settings, for some quality flexibility: maybe JPEG at quality 80 (very fast and great for h264), or 16-bit PNG (for “final” outputs).
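
For the timecode part, here is a minimal sketch of pushing the source timecode from the EXRs onto the review, assuming oiiotool pretty-prints the EXR timecode header (it usually shows up as smpte:TimeCode) and using ffmpeg's -timecode flag; paths and frame numbers are illustrative:

import re
import subprocess

def read_exr_timecode(exr_path):
    """Parse the source timecode out of oiiotool's verbose header dump."""
    info = subprocess.run(
        ["oiiotool", "--info", "-v", exr_path],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"TimeCode.*?(\d{2}:\d{2}:\d{2}[:;]\d{2})", info)
    return match.group(1) if match else None

def encode_review(seq_pattern, fps, out_mov):
    """Encode the sequence and stamp the recovered timecode on the mov."""
    timecode = read_exr_timecode(seq_pattern % 1001) or "00:00:00:00"
    subprocess.run(
        ["ffmpeg", "-y", "-framerate", str(fps), "-start_number", "1001",
         "-i", seq_pattern, "-timecode", timecode,
         "-c:v", "prores_ks", "-profile:v", "4444", out_mov],
        check=True,
    )

encode_review("render.%04d.exr", 24, "review.mov")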

Do you think changing the Nuke bake from mov to a sequence is worth the trouble?

Could we get some more context on why the extract review data mov (ProRes 4444) is being done?

I think OpenPype was originally set up to bake to QuickTime first in order to use Nuke for baking the viewer process, plus ProRes has a decent size/quality ratio and stores timecode and reel nicely.

Digging through the project settings, I see there is now a write node file type (project_settings/nuke/publish/ExtractReviewDataMov/outputs/morebaking/extension), but the code still seems to expect a mov container; it fails trying to write the timecode if I test the jpg extension.

Now with OIIO we have a tool for “color conversions”, but baking the viewer process in Nuke (to ProRes or not) has the advantage of applying an arbitrary effect published from Hiero (typically used for looks, i.e. convert to log → apply CDL → apply LUT), and an arbitrary chain of transforms (or any other Nuke nodes).

Yeah, I thought it might be for Nuke-specific viewer processes.

An alternative solution would be to split the baked QuickTime into smaller frame ranges, then combine them at the end with ffmpeg concat. We could maybe specify a frame-range size at which the splitting starts to happen.
Have you noticed when it starts to slow down, and at how many frames?
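
For reference, a sketch of the split-and-concat variant, using ffmpeg's concat demuxer with stream copy so the chunks are not re-encoded (chunk names are made up):

import subprocess

chunks = ["review_1001-1100.mov", "review_1101-1200.mov"]
with open("chunks.txt", "w") as f:
    f.writelines(f"file '{name}'\n" for name in chunks)

subprocess.run(
    ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
     "-i", "chunks.txt", "-c", "copy", "review_full.mov"],
    check=True,
)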

Hello,

I have been dealing with a similar problem at my studio. Many times we need to deliver to our clients using ProRes 4444 (or any other format they request really, usually multiple at once), and generating these reviews can be quite a pain if we do it on every publish. Because of this, I have been working on allowing the OpenPype extract + review plugins to run on demand on the farm.

First, the way we are generating these reviews isn’t the same as @jiri.sin’s. We aren’t using the ExtractReviewDataMov Nuke plugin; we are using a combination of Extract OIIO Transcode and Extract Review. With the former, we generate the different transcoded sequences with oiiotool, which we later use to create the reviews with ffmpeg.

Here’s a screenshot of part of the settings on Extract OIIO Transcode:

Then on Extract Review, we have an h264 representation that we use to upload to Shotgrid:

But we also have other output definitions for prores422, prores4444, and so on:

You will notice how I make use of custom tags in order to choose which transcoded sequence to use for what.

With this setup, I am able to generate the client review data automatically for every artist publish… however, that’s not ideal, as it’s quite a lot of overhead, and as you pointed out in this other thread, Nuke - Publish on Farm with Existing Frames, not being able to submit “Use existing frames” to the farm makes this process quite painful.

As a quick fix to let artists publish without the overhead of transcoding + generating the big reviews on every publish, I added an extra checkbox in the publisher so they can manually choose whether they just want a review for Shotgrid or also want it to generate the ProRes reviews:

This offsets the pain a bit for now, but as I said at the beginning, we would rather have a different way to trigger the review generation on demand. Review generation + delivery is usually handled by another department (editorial) and isn’t something our compositors or other artists should care about.

As for running the publish (transcode + extract plugins) on demand on the farm, I have been thinking of doing it through one of these approaches (which I could later automate or wrap in a CLI once the framework is in place):

  • Add support for submitting publishes to the farm from Traypublisher. That way I could, for example, use the render family to pick the files from an existing publish as representations + reviewable representations and delegate the rest of the publish workflow to the farm
  • Use the Source (read) publish type and also add support for publishing on the farm with the proper plugins enabled
  • Manually create the json file required to run a headless publish of a given source and submit it to the farm (this is what the other two would already kind of be doing, without me needing to reverse-engineer how to set this up)

Anyway, hopefully someone from the Ynput team can shed some light on this.

I guess that depends on the studio setup. Resolutions above UHD together with more than ~400 frames start to be very slow for us (very normal for series work).

I am scared of concat; I have had very bad experiences with it. But it might work for ProRes files with no audio.

I hear you.

I would love to use OIIO instead of Nuke for review processing, but:

  • I have no control over the source timecode, which is required for some jobs
  • I would need to make an OCIO config either per shot, or with env vars, to apply the correct combination of color space, LUT and CDL (see the sketch after this list). Technically possible, but much more difficult for our IO dept compared to publishing an effect sandwich from Hiero
  • I would need a way to apply crops and transforms via OIIO, with the ability to change them per shot
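
On the second point, a hedged sketch of the env-var route: the studio-wide config would define a look whose CDL/LUT file paths resolve through an OCIO context variable, and the shot is selected at render time via the environment. The look name, variable and all paths here are illustrative:

import os
import subprocess

# Assumes /pipeline/config.ocio defines a look (say "shot_grade") whose
# FileTransforms reference ${SHOT}.cc and ${SHOT}.cube, so OCIO resolves the
# per-shot CDL + LUT from the environment at render time.
env = dict(os.environ, OCIO="/pipeline/config.ocio", SHOT="010_0040")
subprocess.run(
    ["oiiotool", "plate.1001.exr",
     "--ociolook", "shot_grade",
     "-o", "review.1001.jpg"],
    env=env, check=True,
)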

We had a transcode system in place for Pype v2, which I’m hoping to port to v3 soonish. It would transcode to the destination video format/codec without audio and then concat all the chunks together with the audio.
We had issues with audio on the chunks as well; see #9052 (Concat frozen frames with audio) – FFmpeg. It was easier in the end to just transcode the chunks without audio, then add the audio at the end with concat.
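
In sketch form (file names illustrative): concat the silent chunks first, then mux the audio back in with a final pass that leaves the video stream untouched.

import subprocess

subprocess.run(
    ["ffmpeg", "-y", "-i", "review_silent.mov", "-i", "shot_audio.wav",
     "-map", "0:v", "-map", "1:a",
     "-c:v", "copy", "-c:a", "pcm_s16le", "review_final.mov"],
    check=True,
)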

To chip in, I think the biggest issue so far is that the slate generation, transcoding and burnins aren’t designed from an API standpoint but feel like they are implemented directly in the Pyblish plugins, which makes them very hard to re-use (and test!) outside of that functionality.

The first effort, I think, is to make simple entry points to those functions in such a way that they are easily reusable elsewhere.

Here are some example entry points (likely overly simplified; I was also too lazy to really work these out completely):


def generate_slate(destination, data):
    """Generate a slate at the destination using the provided data.

    It depends on the slate generator to define what data is required for the slate to
    be creatable and what data is optional. But preferably different slate generators
    share the same API to allow easy interchanging of slate generators.

    This generator only generates the slate still or animated sequence (or video),
    which is usually prepended to a shot or edit as a (sometimes animated) slate.

    Returns:
        str: The output filepath on successful generation
    """
    pass

def transcode(source, destination, transcoder):
    """Transcode source to destination path using the given transcoder.

    The transcoder should define what it's doing during the transcode, e.g.
    colorspace shifts, specific codecs, etc.

    Arguments:
        source (str): Source path
        destination (str): Destination path
        transcoder (Transcoder): Transcoder instance to perform transcoding

    """

def burnin(source, destination, data):
    """Burn the given data on top of the image data of source, saved to destination."""
    pass
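
To make the transcoder argument above a bit more concrete, here is one possible shape for it (my own illustration, not an existing OpenPype class):

class Transcoder:
    """Describes a single transcode recipe (codec, colorspace shift, etc.)."""

    def ffmpeg_args(self):
        """Return the ffmpeg argument list implementing this recipe."""
        raise NotImplementedError

class ProRes4444(Transcoder):
    def ffmpeg_args(self):
        return ["-c:v", "prores_ks", "-profile:v", "4444"]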

Once we have a simple re-usable API, it already becomes much more trivial to implement e.g. a “Loader action” on a particular publish to just trigger a burnin or the like with certain data of the publish. It might not have full access to e.g. the animation focal length of the source file, etc. that Maya and Houdini provide for reviews, but at least it allows us to quickly generate a burnin. The same goes for transcodes, etc.

It also opens the door to e.g. submitting only particular publishes to generate the transcodes on the farm (since the API entry point becomes much more trivial than requiring a regular publish from instance to output data, etc.)

A harder thing here, I think, is that there will be cases where you want to add a slate + burnin and the right transcode. The question then becomes what the intermediate files are during each step, and what the optimal way is to generate the preferred outcome. A process where burnin + transcoding is done in one step is of course more efficient than adding a burnin and then transcoding, but it might be much harder to maintain code-wise?
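
For illustration, the one-step option is a single ffmpeg pass with the burnin filter in front of the encode (a hedged sketch: assumes an ffmpeg build with drawtext enabled, and the text, codec and file names are made up):

import subprocess

# Burn the text and encode to ProRes 4444 in one pass, with no intermediate.
subprocess.run(
    ["ffmpeg", "-y", "-i", "review_src.mov",
     "-vf", "drawtext=text='sh010 v003':x=20:y=20:fontsize=36:fontcolor=white",
     "-c:v", "prores_ks", "-profile:v", "4444", "review_burnin.mov"],
    check=True,
)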

So the way I’m seeing it, it’s much less about being able to trigger the Extract Review plugin in isolation, and more about exposing the basic API interface so you can do so even without a publish. The plugins should then merely be a call to the API function in its simplest form.
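
In other words, something like this (a sketch; the instance keys are illustrative and transcode is the hypothetical API function above):

import pyblish.api

class ExtractReviewTranscode(pyblish.api.InstancePlugin):
    """Thin plugin: all the real work happens in the reusable API function."""

    order = pyblish.api.ExtractorOrder
    label = "Extract Review Transcode"

    def process(self, instance):
        # `transcode` would be imported from the library module above.
        transcode(
            source=instance.data["source"],
            destination=instance.data["destination"],
            transcoder=instance.data["transcoder"],
        )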

Yes, we should be using EXR for the intermediates + some sidecar json file for passing the metadata. Presumably this should not be a difficult enhancement of the feature so that it supports sequence rendering. ProRes 4444 was working for us for some time, but I have noticed recently that generating the review files now definitely adds a lot of overhead.
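
As a sketch, the sidecar could be as small as this (keys are illustrative, not an existing OpenPype schema), written once next to the intermediate EXRs and read back when the review is encoded:

import json

# Hypothetical metadata the mov container used to carry for editorial.
sidecar = {
    "source_timecode": "01:02:03:04",
    "reel": "A001_C002",
    "fps": 24.0,
    "colorspace": "ACES - ACEScg",
}
with open("review_intermediate.json", "w") as f:
    json.dump(sidecar, f, indent=4)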

I really like this workaround, and it somehow reminds me of a feature we were discussing here a while ago. Instead of Client review it would be something like Remote publishing. Let me explain more in depth. At the moment we set the Nuke render instance publishing target to Existing, Local or Farm, and the process of publishing with review file generation is either Local or Remote. So if we are publishing with Existing or Local, it is rendered and published on the local machine, and if it is set to Farm, everything is submitted to Remote.

We could perhaps split those processes and move the overhead of generating reviewable files and integrating them into the pipeline so it is always processed on Remote. This would also cover the topic here: Nuke - Publish on Farm with Existing Frames.

I’d say that’s a potential separate step - the first step is separating it out to begin with, so someone could trigger it without needing the very complicated entry point of instance data. :wink:

+1, I had to do some ugly hacks to get this working, and I still find it too complicated to run the publish remotely without any host. If we can simplify that process, it would open up a lot of workflows. For example, I was thinking the other day that it would be nice to be able to trigger a publish as a Deadline command, given a simple sequence output of a render job.

Hey everyone,

We’re finishing up a plug-in that does practically the same as Extract OIIO Transcode, but with the added ability to read the full effect chain from Hiero/NukeStudio and render it out.

We started it on 3.14, so we didn’t notice there was already a plug-in that could do the OIIO extraction inside the new 3.15 codebase, oops!

Basically it reads the effectPlateMain, splits repo and color operations, does matrix calculations on the fly to concatenate transform nodes, creates a temp OCIO config based on the studio-wide one with a look named after the context (shot), and renders out a new sequence with all of this baked in, at whatever final resolution is needed.
We also added the ability to add burnins on top using OIIO just to try that out, but that would probably be better left to the current ffmpeg-based one. Still, the classes are there.
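
For a flavor of the matrix part: concatenating transform nodes reduces to multiplying 3x3 matrices, which is small enough to do in pure Python (an illustrative sketch, not the actual code from the package):

def mat_mul(a, b):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

# Two stacked Transform nodes collapse into a single matrix that can then be
# handed to oiiotool's warp filter (or ffmpeg's perspective filter).
combined = mat_mul(translate(120, 0), scale(0.5, 0.5))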

Right now it’s implemented as a pure Python package with no integration into OP. The reason is that we would like to create a new family which is not “effect” but something akin to “reviewSetup”, to store the transform data and the custom per-shot config.ocio, and in the future to be able to export that from Resolve (NukeStudio is way too pricey).
The aim was to have published overrides of OCIO configs directly available in DCC viewports, and to have them versioned for traceability.

ExtractReview is very extensive and stable, but it feels to me that using ffmpeg to do imaging is not the way forward, and your efforts on Extract OIIO Transcode validate this thought, especially since that can be sent to the farm pretty safely while ffmpeg cannot (yes, you could chunk it, but then you need to use EXRs and color control flies out the window).

If it’s of interest to anyone, I could probably upload all that stuff to a repo and share it with you guys; the OP integration is going to take some time for me to finish since I’m really low on time these days. Maybe you could get a head start? Especially the repo (reposition) stuff, which was a bit of a hassle to do since I coded it in pure Python to avoid numpy imports.
The reposition itself works flawlessly in both OIIO, using the warp filter, and ffmpeg, using the perspective filter.

Let me know your thoughts.

Thank you all for the thoughts and support, much appreciated!

@max.pareschi the “effectPlateMain to OIIO” sounds really, really great. I’m not sure how you pick the right effect if there is more than one?

On a side note, I found that publishing effects from Hiero with color and transform operations irritates Nuke artists quite a bit: the size change while using the viewer IP is something “new and unexpected”, so I tend to publish “color only”. Maybe publishing both “color only” and “color + transform” effects would be a way?

Hello @jiri.sin and @jakub.jezek, here is the code for the repo and color setup we are doing in house. I’ve cleaned it up as much as I could, but it’s probably messy, so beware :skull_and_crossbones:

maxpareschi/DailiesMan

It should be pretty straightforward to integrate; I would have done it myself if I had more time available.

Let me know if you need anything else!

Hey @max.pareschi, thanks for sharing this. I would like to test it. Is there any documentation or instructions on how to approach it?