Lineup tool for OpenPype published previews

It would be great if a lineup video could be generated from the latest available versions published in OpenPype, grouped into categories like FX playblasts, animation playblasts, LGT precomps, and comp previews. I personally find it very useful.

I would like to know more thoughts on this.

Hey @Krishna_Avril

Could you elaborate on what a lineup video is?

A lineup video is a single video playback with the latest version of published shots.

Is it continuously generated or on-demand?

Are the versions cut together in an edit or is it any latest version?

This is a larger feature request that’ll need a good bit of breaking down into smaller tasks.

It can be generated on demand. It is very helpful to have a look at the progress or status of the entire sequence in one go.

It can pick the latest version published in the respective tasks, like animation/FX/lighting precomp/composition.

Some additional questions.

  • How would you decide which published subsets to include for a specific task?
  • Would you always want ALL reviews of all the tasks, or just recent ones? If so, how would you decide?
  • What’s the reason you want a merged video and not just a quickly viewable “playlist” of recent versions? (Or even moving them into e.g. a review folder, or uploading them to a review tool of your choice in a production tracker like Kitsu, Ftrack or ShotGrid - which is what OpenPype can already do?)

Below are some potential examples working towards what you are looking for from a code perspective. Maybe of interest to some.

Retrieve latest representations and concatenate files with FFMPEG

Here’s an example API logic of how to potentially retrieve the relevant review files.

import tempfile

from openpype.client import get_subsets, get_last_versions, get_representations
from openpype.pipeline import legacy_io, get_representation_path
from openpype.lib import (
    get_ffmpeg_tool_path,
    run_subprocess,
)

def simple_concat_videos(paths, destination):
    """Combine multiple videos with same codec into one file"""
    ffmpeg_path = get_ffmpeg_tool_path("ffmpeg")
    
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        for path in paths:
            path = path.replace("\\", "/")
            f.write(f"file '{path}'\n")
            
    filelist = f.name.replace("\\", "/")

    # Simple FFMPEG concatenation like this assumes the input video files
    # have the same codecs + resolutions otherwise the output might not
    # look the way you'd expect them to. There is no handling of different
    # resolutions, FPS, codecs, etc.
    args = [
        ffmpeg_path, 
        "-f", "concat", 
        "-safe", "0",
        "-i", filelist, 
        "-c", "copy", 
        destination,
    ]
    run_subprocess(args)
    

project_name = legacy_io.active_project()

# This query could be optimized if you know the exact subset names, e.g. `reviewMain` or alike
# since you can pass the `subset_names` argument
subsets = get_subsets(project_name)
subset_ids = {subset["_id"] for subset in subsets}

last_versions_by_subset_id = get_last_versions(project_name, subset_ids=subset_ids)
versions = list(last_versions_by_subset_id.values())
version_ids = {version["_id"] for version in versions}

representations = list(get_representations(project_name, version_ids=version_ids, representation_names=["h264_png"]))

# Filter representations by task name context
#task = "modeling"
#representations = [repre for repre in representations if repre["context"]["task"]["name"] == task]

# Get the paths to the h264_png representations
paths = [get_representation_path(repre) for repre in representations]

destination = "E:/test.mp4"
if len(paths) > 1:
    print("Concatenating videos:")
    for path in paths:
        print(f"- {path}")
    print(f"Writing to: {destination}")
    simple_concat_videos(paths, destination)
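The assumption in `simple_concat_videos` about matching codecs could also be checked up front. A hedged sketch using `ffprobe` (assumes `ffprobe` is available on `PATH`; in OpenPype you could resolve it with `get_ffmpeg_tool_path("ffprobe")` instead):

```python
import json
import subprocess


def get_video_info(path, ffprobe_path="ffprobe"):
    """Return codec, resolution and frame rate of the first video stream."""
    args = [
        ffprobe_path, "-v", "error",
        "-select_streams", "v:0",
        "-show_entries", "stream=codec_name,width,height,r_frame_rate",
        "-of", "json",
        path,
    ]
    output = subprocess.check_output(args)
    return json.loads(output)["streams"][0]


def streams_match(infos):
    """Check whether all stream info dicts agree on codec, size and FPS."""
    keys = ("codec_name", "width", "height", "r_frame_rate")
    return all(
        all(info[key] == infos[0][key] for key in keys)
        for info in infos[1:]
    )
```

If `streams_match([get_video_info(p) for p in paths])` returns False, `-c copy` won't produce a clean result and you'd need to re-encode instead.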

Disclaimer: this is using the openpype.client API, which is currently designed to work for both OpenPype and AYON. Since it’s still targeted to work for both, these queries might not be as optimal as you’d like. If e.g. focusing solely on OpenPype with MongoDB, I’d likely have made very different queries directly against the representations to get only the latest ones based on the task, instead of filtering them in Python like this logic does - since this is now basically getting ALL subsets and ALL latest versions.
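For the MongoDB-only route hinted at above, the “latest version per subset” selection could be pushed into an aggregation instead of being done in Python. This is only a sketch: the field layout is assumed from OpenPype’s avalon-style schema (version documents with `type: "version"`, a `parent` subset id, and an integer `name` as the version number), so verify the names against your own database before relying on it:

```python
def latest_version_pipeline(subset_ids):
    """Build a MongoDB aggregation pipeline returning only the newest
    version document per subset, selected server-side.

    Assumes avalon-style version documents:
    {"type": "version", "parent": <subset_id>, "name": <int>, ...}
    """
    return [
        # Only version documents of the subsets we care about
        {"$match": {"type": "version", "parent": {"$in": list(subset_ids)}}},
        # Highest version number first...
        {"$sort": {"name": -1}},
        # ...then keep only the first (newest) document per subset
        {"$group": {"_id": "$parent", "doc": {"$first": "$$ROOT"}}},
        {"$replaceRoot": {"newRoot": "$doc"}},
    ]
```

The pipeline would be passed to e.g. `collection.aggregate(latest_version_pipeline(subset_ids))` with pymongo.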

Part of the potentially tricky logic in merging arbitrary files together is that they might mismatch in FPS, codecs, resolutions, etc., so you are basically working your way through a myriad of challenges with encoding that correctly. I might instead recommend taking this logic to create some sort of “playlist” from the paths and open them in a media player of choice that can be launched with a playlist. The benefits would be:

  • You’re actually viewing the source files, basically ensuring what you see is what you get.
  • No need to encode/re-encode, so less delay before you can view the videos.
  • Within the video player you can then directly trace back to the source file too.
  • You can easily re-order the playlist with the video player of your choice (if the player supports that).

Retrieve latest representations and load them in VLC as playlist

Here’s a relatively similar example to the one before, but instead of creating a merged video file it loads the videos into VLC in one go - you’ll then have a ready-to-go VLC playlist to watch. You’ll just need the command line flags to do that for the video player of your choice and you’re good to go.

from openpype.client import get_subsets, get_last_versions, get_representations
from openpype.pipeline import legacy_io, get_representation_path
import subprocess

    
def get_latest_representation_paths(project_name,
                                    subset_names=None,
                                    task_names=None,
                                    representation_names=None):

    subsets = get_subsets(project_name, subset_names=subset_names, fields=["_id"])
    subset_ids = {subset["_id"] for subset in subsets}
    
    last_versions_by_subset_id = get_last_versions(project_name, subset_ids=subset_ids, fields=["_id"])
    versions = list(last_versions_by_subset_id.values())
    version_ids = {version["_id"] for version in versions}
    
    representations = list(get_representations(
        project_name,
        version_ids=version_ids,
        representation_names=representation_names,
    ))

    # Filter representations by task name context
    if task_names:
        representations = [repre for repre in representations if repre["context"]["task"]["name"] in task_names]

    # Get the paths to the representations
    paths = [get_representation_path(repre) for repre in representations]
    
    return paths
    
    
project_name = legacy_io.active_project()
paths = get_latest_representation_paths(project_name, representation_names=["h264_png"])

# Open multiple videos in VLC
VLC_PATH = r"C:\Program Files\VideoLAN\VLC\vlc.exe"
subprocess.Popen([VLC_PATH] + paths)

I would love to have a feature like this.

One use case would be commercials. There is a main edit, and several cutdowns, that share the same shots. Making an edit would show how the latest renders work in different cutdowns.

Another use case is continuity check. For a sequence of shots with added cg fog, rendering lineup video would be a great help.

For those examples, I guess the video might be either created on demand (by an action?), or automatically run every day, like Ftrack’s “create daily review session”.

Rendering the edit (merged video) instead of making a playlist would be preferable. The rendered video can be shared with clients easily.

Maybe just let the artist make the initial edit in some OTIO-capable host, using reviews that fit the intent of the rendered version well.
Like picking a lightweight MP4 with detailed burn-in for internal dailies, or nice ProRes files with no burn-in for a client daily.

The process would then load the published OTIO file, search for the versions of the reviews used, switch them to the latest, and bake an edit review.
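The “switch to latest” step could look roughly like this. It’s a hypothetical sketch assuming versioned review filenames like `sh010_anim_v003.mp4`; plain dicts stand in for OTIO clips, and all names are made up for illustration:

```python
import re

VERSION_RE = re.compile(r"v(\d+)", re.IGNORECASE)


def version_of(path):
    """Parse the vNNN token from a filename, e.g. sh010_anim_v003.mp4 -> 3."""
    match = VERSION_RE.search(path.rsplit("/", 1)[-1])
    return int(match.group(1)) if match else -1


def swap_to_latest(clips, published_paths):
    """Replace each clip's media with the highest published version.

    ``clips`` is a list of {"shot": ..., "media": ...} dicts standing in
    for OTIO clips; ``published_paths`` maps a shot name to all published
    review paths for that shot.
    """
    updated = []
    for clip in clips:
        candidates = published_paths.get(clip["shot"], [clip["media"]])
        latest = max(candidates, key=version_of)
        updated.append({**clip, "media": latest})
    return updated
```

In a real implementation the version lookup would go through the OpenPype/AYON database rather than filename parsing, and the media swap would update the clips’ media references in the OTIO timeline.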


Precisely. This will specifically help artists, leads, and supervisors.

Please click the Vote option on the top @jiri.sin

Thanks @jiri.sin - that helps explain the situation more. Instead of random representations in any order you instead want to render a timeline, like an edit of sorts, and preview the latest shots (or even only your own shots) over the rest of the latest edit. Which sounds a lot like this OpenTimelineIO: Conform new renders into edit use case.

In essence what we need is:

  1. An edit definition for OTIO, like an EDL which describes what shots need to go where.
  2. Preferably the edit definition (or the content therein) contains metadata about the shot or representation, so when working with the OTIO data we know which shot is related and where we could potentially take new review files from. That way we know which media we can update in the OTIO timeline. Otherwise we need a reliable way to compute what representation a specific source video belongs to - for example like this, which is focused on OpenPype and won’t work with AYON, though there might be an AYON equivalent solution?
  3. An API that can render the OTIO timeline into a single video file. As far as I know there’s no ready-to-go Python API that can do this yet?

With just 1 and 2 it’d at least mean we could already start loading the generated OTIO outputs into software that supports them - e.g. writing an EDL for Premiere or Resolve and rendering (step 3) from there manually, until we have a command line interface to do so.
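As a rough sketch of point 1 (with the clip-name comment standing in for the metadata from point 2), a simplified CMX3600-style EDL could be generated from shot records like this. The field layout is simplified for illustration; a real conform would more likely go through the OpenTimelineIO adapters:

```python
def frames_to_tc(frames, fps=24):
    """Convert a frame count to an hh:mm:ss:ff timecode string."""
    ff = frames % fps
    seconds = frames // fps
    return "{:02d}:{:02d}:{:02d}:{:02d}".format(
        seconds // 3600, (seconds % 3600) // 60, seconds % 60, ff
    )


def write_edl(shots, fps=24):
    """Build a simplified CMX3600-style EDL from shot records.

    Each shot is a dict with "name" and "duration" (in frames). The
    FROM CLIP NAME comment carries the shot name so the edit can later
    be matched back to published representations.
    """
    lines = ["TITLE: LINEUP", "FCM: NON-DROP FRAME", ""]
    record_in = 0
    for index, shot in enumerate(shots, start=1):
        src_out = shot["duration"]
        lines.append(
            "{:03d}  AX       V     C        {} {} {} {}".format(
                index,
                frames_to_tc(0, fps),
                frames_to_tc(src_out, fps),
                frames_to_tc(record_in, fps),
                frames_to_tc(record_in + src_out, fps),
            )
        )
        lines.append("* FROM CLIP NAME: {}".format(shot["name"]))
        lines.append("")
        record_in += src_out
    return "\n".join(lines)
```

The resulting text can be saved as an `.edl` file and imported into Premiere or Resolve for a manual conform and render.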

Regarding point 3) I got this comment by Daniel Flehner Heen on Academy Software Foundation Slack:

I’m not sure if there’s a go-to way yet, but there are a couple of alternatives for headless render.

  • Darby Johnston’s tlRender
  • rvio could possibly render a timeline (I think only single track with straight cuts)
  • I wrote an adapter to create mlt files that could be passed to melt for rendering (might be a bit outdated).

Review your own shot in context of the previous and next shot of the edit

We’ve actually wanted something similar for animation artists, but in a “shorter variant”: instead of rendering the full edit, they could choose an edit and it’d prepare their current animation scene (and its latest review output) together with the shot before and after, so they can look at it in the context of the edit. Feature-wise it needs about the same features as described above, but could be an additional use case to think of.
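Once the edit order is known, selecting the neighbors is trivial; a small sketch over an ordered list of shot names (the shot names are hypothetical):

```python
def shot_with_neighbors(ordered_shots, current):
    """Return [previous, current, next] from an edit-ordered shot list,
    dropping neighbors that don't exist at the edit boundaries."""
    index = ordered_shots.index(current)
    return ordered_shots[max(0, index - 1):index + 2]
```

The returned shot names could then be resolved to their latest review representations with the query logic from the earlier examples.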


The tlRender mentioned by Daniel is used in my favorite mrViewer now.