To chip in, I think the biggest issue so far is that slate generation, transcoding and burnins aren't designed from an API standpoint but are implemented directly in the Pyblish plugins, which makes them very hard to re-use (and test!) outside of that functionality.
The first effort, I think, is to create simple entry points for that functionality in such a way that they are easily reusable elsewhere.
Here are some example entry points (likely overly simplified; I was also too lazy to work these out completely):
```python
def generate_slate(destination, data):
    """Generate a slate at the destination using the provided data.

    It is up to the slate generator to define which data is required for the
    slate to be created and which data is optional. But preferably different
    slate generators share the same API to allow easy interchanging of slate
    generators.

    This only generates the slate still or animated sequence (or video),
    which is usually prepended to a shot or edit as a (sometimes animated)
    slate.

    Returns:
        str: The output filepath on successful generation.

    """
    pass


def transcode(source, destination, transcoder):
    """Transcode source to destination path using the given transcoder.

    The transcoder defines what happens during the transcode, e.g.
    colorspace conversions, specific codecs, etc.

    Arguments:
        source (str): Source path.
        destination (str): Destination path.
        transcoder (Transcoder): Transcoder instance to perform the transcode.

    """
    pass


def burnin(source, destination, data):
    """Burn the data on top of the image data of source, saved to destination."""
    pass
```
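For the `transcode` entry point, the `Transcoder` could be a small interface that encapsulates the output settings. A minimal sketch of what that might look like (the class names and the ffmpeg arguments below are just made up for illustration, not an actual implementation):

```python
import subprocess


class Transcoder(object):
    """Base interface: a transcoder knows how to convert source to destination."""

    def transcode(self, source, destination):
        raise NotImplementedError("Subclass must implement `transcode`")


class FFmpegH264Transcoder(Transcoder):
    """Hypothetical example: transcode to H.264 via an ffmpeg subprocess."""

    def __init__(self, crf=18):
        self.crf = crf

    def transcode(self, source, destination):
        cmd = [
            "ffmpeg", "-y",
            "-i", source,
            "-c:v", "libx264",
            "-crf", str(self.crf),
            destination,
        ]
        subprocess.check_call(cmd)
        return destination
```

The top-level `transcode(source, destination, transcoder)` function would then just delegate to `transcoder.transcode(source, destination)`.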
Once we have a simple re-usable API, it becomes much more trivial to implement e.g. a "Loader action" on a particular publish that just triggers a burnin or the like with certain data from the publish. It might not have full access to e.g. the animation focal length of the source file that Maya and Houdini provide for reviews, but at least it allows quickly generating a burnin. The same goes for transcodes, etc.
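Such an action could roughly boil down to something like this (the `BurninAction` name and the representation keys I read here are placeholders, just to show the shape of it):

```python
class BurninAction(object):
    """Hypothetical loader action: burn publish data into a copy of the media."""

    label = "Generate burnin"

    def process(self, representation):
        source = representation["path"]
        destination = source.replace(".mov", "_burnin.mov")

        # Only the data available on the publish itself; no DCC-specific
        # values like focal length are available at this point.
        data = {
            "asset": representation.get("asset"),
            "version": representation.get("version"),
        }
        burnin(source, destination, data)
        return destination
```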
It also opens up the possibility of e.g. submitting only particular publishes to generate their transcodes on the farm, since calling the API entry point is much more trivial than requiring a regular publish run from instance to output data, etc.
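A farm task could then be as small as a tiny script around the entry point instead of a full publish session. A rough sketch, assuming a made-up CLI shape and the hypothetical transcoder from the sketch above:

```python
import argparse


def main():
    # Minimal CLI wrapper so a farm task only needs source/destination paths,
    # instead of re-running a whole publish to get to the transcode step.
    parser = argparse.ArgumentParser()
    parser.add_argument("source")
    parser.add_argument("destination")
    args = parser.parse_args()

    transcode(args.source, args.destination, FFmpegH264Transcoder())


if __name__ == "__main__":
    main()
```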
A harder thing here, I think, is that there will be cases where you want to add a slate + burnin and the right transcode together. The question then becomes what the intermediate files are during each step and what the optimal way is to generate the preferred outcome. A process where burnin + transcoding is done in one step is of course more efficient than adding a burnin first and then transcoding, but it might be much harder to maintain code-wise?
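To make that concrete, the naive chained version would look roughly like this, with one intermediate file per step (the paths and temp handling are simplified, and prepending the slate is left out):

```python
import os
import tempfile


def generate_review(source, destination, slate_data, burnin_data, transcoder):
    """Naive chain: slate -> burnin -> transcode, one intermediate file per step."""
    tmp_dir = tempfile.mkdtemp()

    # 1. Generate the slate separately from the source media.
    slate_path = generate_slate(os.path.join(tmp_dir, "slate.mov"), slate_data)

    # 2. Burn the data into an intermediate copy of the source.
    burnin_path = os.path.join(tmp_dir, "burnin.mov")
    burnin(source, burnin_path, burnin_data)

    # 3. Transcode the burned-in media to the final destination.
    #    (Concatenating `slate_path` in front is skipped here; a single-pass
    #    implementation could avoid these intermediate writes entirely, at
    #    the cost of more complex code.)
    transcode(burnin_path, destination, transcoder)
    return destination
```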
So the way I'm seeing it, this is much less about being able to trigger the Extract Review plugin in isolation and much more about exposing the basic API interface so you can do the same thing without a publish at all. The plugins should then merely be a call to the API function in its simplest form.
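In its simplest form an extract plugin would ideally end up as little more than this (the plugin name and the instance data keys are placeholders, just to show how thin the plugin could become):

```python
import pyblish.api


class ExtractBurnin(pyblish.api.InstancePlugin):
    """Thin wrapper: all real logic lives in the reusable API function."""

    order = pyblish.api.ExtractorOrder
    label = "Extract Burnin"

    def process(self, instance):
        burnin(
            source=instance.data["source"],
            destination=instance.data["burninDestination"],
            data=instance.data["burninData"],
        )
```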