Multiple editorials with a shot-centric approach

Having a fast, reliable way to assemble a timeline from the current shot versions is crucial for judging continuity.
The process has to be as friction-free as possible, and ideally fully automatic for producing timeline dailies.

Playing the current state of edit

In a better world, there would be no need to render a timeline: an OTIO-compatible player would be able to show the current state of the timeline on the fly.

Publishing (rendering) current state of edit

Production reality often dictates the need to render the timeline to a distribution format such as H.264.
The timeline render can be sped up considerably by using the reviews generated during the shot publish instead of the full-resolution published files.
There should be a way to assemble the edit either from the published versions (high res) or from their reviews.
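To illustrate the idea, here is a minimal sketch of switching between the two sources per shot. The `representations` layout and the names `"exr"`/`"review"` are assumptions for illustration, not the actual publish data model:

```python
# Sketch: choosing which representation of a published shot version to splice
# into the timeline. The `representations` layout and the names "exr"/"review"
# are assumptions for illustration, not the actual publish data model.

def pick_representation(version: dict, use_reviews: bool) -> str:
    """Return the file path to load for this version."""
    name = "review" if use_reviews else "exr"
    return version["representations"][name]

version = {
    "name": "v003",
    "representations": {
        "exr": "/proj/shots/sh010/publish/render/v003/sh010_v003.%04d.exr",
        "review": "/proj/shots/sh010/publish/review/v003/sh010_v003_h264.mov",
    },
}

fast_cut = pick_representation(version, use_reviews=True)   # quick review cut
full_res = pick_representation(version, use_reviews=False)  # high-res conform
```

Because only the path selection differs, the same assembly code can serve both the fast dailies cut and the full-resolution conform.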

Timeline relevant shot properties

The shot properties relevant for timeline assembly are:

  • reposition (to the timeline resolution)
  • shot colorspace and look (ideally converted to the timeline colorspace)
  • shot edit (head and tail trims, possible speed change)
  • letterbox and/or pillarbox
  • optionally metadata such as timecode and reel ID to help auto-assembly (for hosts like Resolve)

Having a way to bake all of the properties above into the review would not only let the artist judge the shot output in the context of each edit, but would also make assembling the edit as easy as splicing the reviews one after another. It would also make per-shot burn-ins in the timeline easy.
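The properties above could be grouped into a small per-timeline record attached to the shot. A sketch, where all field names and value conventions are assumptions for discussion, not an existing schema:

```python
# Sketch: per-timeline, per-shot data needed to bake a review.
# Field names and value conventions are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class TimelineShotProperties:
    reposition: Tuple[float, float, float]  # (offset_x, offset_y, scale) at timeline resolution
    colorspace: str                         # target timeline colorspace
    head: int                               # head trim in frames
    tail: int                               # tail trim in frames
    speed: float = 1.0                      # constant retime, 1.0 = no speed change
    letterbox: Optional[float] = None       # target aspect ratio, e.g. 2.39
    timecode: Optional[str] = None          # source TC to help auto-assembly
    reel_id: Optional[str] = None           # reel ID for hosts like Resolve

props = TimelineShotProperties(
    reposition=(0.0, 0.0, 1.0),
    colorspace="Output - Rec.709",
    head=8,
    tail=8,
    letterbox=2.39,
)
```

A baking step would read one such record per shot per timeline and burn the result into the review.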

How and where to store shot properties relevant for timelines

The question is how to store the timeline/edit-relevant info for shots. Is it better to store a subset of reposition, edit (trim & retime), letterbox and colorspace/look for each timeline on the shot, or to somehow link edits to the shots they use and pull the info from the timelines? Most workflows are shot-centric, but timelines/edits change frequently.

This is a good point. Ideally the editorial would be its own version (OTIO) that the shot gets its information from, given the frequent timeline updates. But then the question becomes how to pair the shot name/ID with its position in the timeline. Any thoughts on this?

editorial would be its own version

:100:

how to pair the shot name/ID with position in the timeline

IMHO the timeline (re)assembly from published shots might be done without the pipeline, via the host's normal conform or by hand, but always from already published shots loaded into the timeline host (Hiero, Resolve…). OP will know whether a timeline uses a shot, and when a version of the timeline is published, each shot used in it would get its timeline-reference list updated.

I am ignoring the possibility of a shot occurring more than once in the timeline.

There would need to be a mechanism for removing timeline references from shots that are removed from higher timeline versions.

I envision that instead of linking shot assets to editorial assets, we should link shot assets to editorial versions. This way you avoid the need to remove shots from higher timeline versions.
Not sure there would be a need to remove any data this way either, since a shot asset would just carry a list of the editorial versions it is part of.
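A sketch of that append-only linking; the shot doc layout (`data.editorials`) and the helper name are assumptions, not an existing API:

```python
# Sketch: shot assets keep an append-only list of the editorial versions they
# appear in, so nothing needs to be removed when a newer timeline version drops
# a shot. The doc layout and helper name are illustrative assumptions.

def register_editorial_version(shot_doc: dict, editorial_version: str) -> dict:
    """Record that this shot is used by the given editorial version."""
    versions = shot_doc.setdefault("data", {}).setdefault("editorials", [])
    if editorial_version not in versions:
        versions.append(editorial_version)
    return shot_doc

shot = {"name": "sh010", "data": {}}
register_editorial_version(shot, "edit90sec_v001")
register_editorial_version(shot, "edit90sec_v002")
register_editorial_version(shot, "edit90sec_v002")  # duplicate is ignored
```

Answering "which timelines use this shot?" then becomes a simple read of that list, with no cleanup pass required.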

In your illustration, you build new timelines from shot versions. Why is this needed?
In my head I see timelines as just shot durations. Whether you use the preview or the high-quality version can be decided on demand later, when rendering or viewing the timeline.

I presume that during the creation of the longestShots timeline, which is an aggregation of all available timelines (those selected for the EDL scan), we would need to add a dedicated tag holding the following info:

{
    "linkedTimeline##": {
        "timelineStartTime": "00:01:00:00",
        "timelineRate": 24.00,
        "timeline": "edit90sec",
        "layer": "main",
        "layerStartTime": "00:01:00:18",
        "clipOrder": 4,
        "sourceStartTime": "01:00:33:15",
        "sourceEndTime": "01:00:34:22"
    }
}
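The timecodes in the tag resolve to frame numbers at `timelineRate`. A minimal sketch of the conversion, assuming non-drop-frame timecode:

```python
# Sketch: converting the tag's SMPTE-style timecodes ("HH:MM:SS:FF") to frame
# counts at "timelineRate". Non-drop-frame timecode is assumed.

def timecode_to_frames(tc: str, rate: float) -> int:
    hours, minutes, seconds, frames = (int(part) for part in tc.split(":"))
    return (hours * 3600 + minutes * 60 + seconds) * round(rate) + frames

# "layerStartTime" minus "timelineStartTime" gives the clip's record-in offset:
timeline_start = timecode_to_frames("00:01:00:00", 24.0)  # 1440
record_in = timecode_to_frames("00:01:00:18", 24.0) - timeline_start  # 18
```

Together with `clipOrder`, that offset is enough to place the clip when rebuilding the timeline from tags.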

The EDL scan would be done by a script provided in the OpenPype menu. Best practice would be to run it on a particular folder of sequences; this way we avoid user error during multi-selection.

The info for the linkedTimeline tag will be available during the EDL scan anyway, and we could use the data to rebuild a timeline on the fly without ever needing to store any OTIO timelines other than longestShots. The metadata related to edit90sec could then be stored on the shot/asset db doc under data.editorials and also in the OTIO file clip metadata.

The data in the shot db doc would be useful for extracting the review/delivery representation, but also later for any updates of the actual timelines.

After compositors finish their renders, we could go into the edit90sec timeline in Hiero and then, in the Loader, have a dedicated plugin to load into the actual timeline using the editorial data (for example).

Updating the linkedTimeline data model, since this way we also need to add effects for retimes and repositions. I also realized it would be better to add frame-in and duration, as it is done natively in OTIO, rather than source start/end. But we still need to preserve the source start time, because it could potentially change: the studio's data management department might update the amount of frames during production.

In the following example you can see two retime effects added; layered retimes could be quite common. It is important that the layering is applied in order, which is why a list type is used. The first retime is a constant-speed retime, the second is a timewarp with curve animation. Note that interpolation is still missing; that is definitely something to discuss.

Then I added reposition effects. I would keep retimes and repositions separate, because we will need to layer them differently. The example shows an animation with two keys on the linkedTimeline##.clipEffects.repositions[0].offset_min attribute.

So here is the new updated model:

{
    "linkedTimeline##": {
        "timelineStartTime": "00:01:00:00",
        "timelineRate": 24.00,
        "timeline": "edit90sec",
        "layer": "main",
        "layerStartTime": "00:01:00:18",
        "clipOrder": 4,
        "sourceStartTime": "01:00:33:15",
        "sourceStart": 15,
        "sourceDuration": 115,
        "clipEffects": {
            "retime": [
                {
                    "OTIO_SCHEMA": "LinearTimeWarp.1",
                    "effect_name": "speed",
                    "name": "speed",
                    "time_scalar": 1.5,
                    "metadata": {
                        "inputSourceTime": "01:00:33:15"
                    }
                },
                {
                    "OTIO_SCHEMA": "TimeEffect.1",
                    "effect_name": "TimeWarp",
                    "name": "TimeWarp",
                    "metadata": {
                        "inputSourceTime": "01:00:33:15",
                        "frames": {
                            "0": {"speed": 1.00},
                            "20": {"speed": 0.50}
                        }
                    }
                }
            ],
            "repositions": [
                {
                    "available_image_bounds": {
                        "OTIO_SCHEMA": "Box2d.1",
                        "min": {"OTIO_SCHEMA": "V2d.1", "x": 0.0, "y": 0.0},
                        "max": {"OTIO_SCHEMA": "V2d.1", "x": 16.0, "y": 9.0}
                    },
                    "offset_min": {
                        "inputSourceTime": "01:00:33:15",
                        "frames": {
                            "0": {"x": 0.0, "y": 0.0},
                            "20": {"x": 1.22, "y": -0.55}
                        }
                    }
                }
            ]
        }
    }
}
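To make the in-order layering concrete, here is a sketch of evaluating the effective speed of the `clipEffects.retime` list at a given clip frame. Interpolation between timewarp keys is assumed linear here, since the model leaves interpolation open to discussion:

```python
# Sketch: evaluating the layered retimes from "clipEffects.retime" at a given
# clip frame. Effects are applied in list order; interpolation between timewarp
# keys is ASSUMED linear, which is one of the open points to discuss.

def sample_speed(frames: dict, frame: float) -> float:
    """Linearly interpolate a {"frame": {"speed": value}} keyframe dict."""
    keys = sorted((int(key), value["speed"]) for key, value in frames.items())
    if frame <= keys[0][0]:
        return keys[0][1]
    if frame >= keys[-1][0]:
        return keys[-1][1]
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)

def effective_speed(retime_effects: list, frame: float) -> float:
    """Multiply constant and animated speeds in layer order."""
    speed = 1.0
    for effect in retime_effects:
        if "time_scalar" in effect:               # constant LinearTimeWarp
            speed *= effect["time_scalar"]
        else:                                     # keyframed TimeWarp
            speed *= sample_speed(effect["metadata"]["frames"], frame)
    return speed

retimes = [
    {"OTIO_SCHEMA": "LinearTimeWarp.1", "time_scalar": 1.5},
    {"OTIO_SCHEMA": "TimeEffect.1",
     "metadata": {"frames": {"0": {"speed": 1.00}, "20": {"speed": 0.50}}}},
]

effective_speed(retimes, 0)   # 1.5 * 1.0 = 1.5
effective_speed(retimes, 20)  # 1.5 * 0.5 = 0.75
```

The same sampling pattern would apply to the animated `offset_min` keys on repositions, just interpolating x/y pairs instead of speeds.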

Here are the OTIO spatial coordinates, which would be great to implement for the repositions. I believe even animated repos should work.

https://opentimelineio.readthedocs.io/en/latest/tutorials/spatial-coordinates.html

And it is unit-less. Great!
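As a quick sanity check of the unit-less model, a sketch of resolving a 16:9 `Box2d` against a pixel raster; the helper and the raster size are illustrative assumptions, not the OTIO API:

```python
# Sketch: resolving OTIO's unit-less spatial coordinates to pixels. A 16.0 x 9.0
# Box2d covering the whole frame maps to a 1920x1080 raster at 120 px per unit,
# independent of the source resolution. Illustrative helper, not the OTIO API.

def units_to_pixels(box: dict, raster_width: int, raster_height: int):
    """Pixels-per-unit scale for a Box2d-like dict."""
    width = box["max"]["x"] - box["min"]["x"]
    height = box["max"]["y"] - box["min"]["y"]
    return raster_width / width, raster_height / height

bounds = {"min": {"x": 0.0, "y": 0.0}, "max": {"x": 16.0, "y": 9.0}}
scale_x, scale_y = units_to_pixels(bounds, 1920, 1080)  # (120.0, 120.0)
```

The unit-less offsets in `offset_min` would be multiplied by the same scale when baking the reposition for a particular output resolution.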