Huge lag on loading workfiles in launcher

Hello fellow AYON users and contributors,

I’ve recently staged an update for our studio to use the latest core 1.8.7, which includes the workfile listing in the launcher window. In testing we’ve discovered that this can make the launcher window lag heavily when jumping into a project that hasn’t been loaded before. This repeats after the tray application is closed and reopened. Once the workfile fetch is complete, the application behaves normally across that project. I’ve attached a video showing the issue.

Is any other team experiencing this issue?
We’re running Launcher 1.5.2, Core 1.8.7, and server 1.14.5.

My instinct is that the workfile load is blocking and that resolving those filenames from the templates is taking an unusually long time. I could patch core to lazy-load them on a background thread, but I’d like to avoid that if there is a known solution.

Let me know if we’re missing anything like a config item or version mismatch.

Thanks so much,
Kevin Poli - Senior Pipeline TD, Hornet

Video Example

Hi Kevin,

Thanks for the report and screen recording. I’m slightly confused why this would be this slow.

The listing of the workfiles indeed is not done in a thread currently - it could be, but I have not seen it ever be noticeably slow. Even with hundreds of workfiles in a task it’s pretty much instant for me.

I’d personally try to investigate which of the queries is that slow.

Try isolating the performance of this:

```python
import ayon_api

project_name: str = "test"
task_id: str = "xyz"
for entity in ayon_api.get_workfiles_info(
    project_name, task_ids={task_id}, fields={"id", "path", "data"}
):
    print(entity)
```

If that’s quick, it must be one of the other bits… but why? No idea.

Hey Roy, the snippet ran super quick. I’m going to check the path.exists calls across the other storage devices at the studio, but I am equally puzzled, as this should be a very fast operation.
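For anyone following along, this is roughly how I'm timing the per-root path.exists calls. The root paths listed are placeholders for our actual studio mounts.

```python
# Hedged sketch: time a single os.path.exists call against each storage root
# to see whether one particular mount is the slow one.
import os
import time

def time_exists(paths):
    """Return {path: seconds} for one os.path.exists call per path."""
    timings = {}
    for path in paths:
        t0 = time.perf_counter()
        os.path.exists(path)
        timings[path] = time.perf_counter() - t0
    return timings

# Placeholder roots; substitute the studio's real mounts / mapped drives.
for path, seconds in time_exists(["/mnt/projects", "/mnt/renders"]).items():
    print(f"{path}: {seconds:.4f}s")
```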

Just an update: I’ve found the issue, and it appears to be serious. The new lag is in fetching the project Anatomy:


```python
import time

import ayon_api
from ayon_core.pipeline import Anatomy

project_name = "placeholder"
task_id = "placeholder"

t0 = time.time()
anatomy = Anatomy(project_name)
print(f"Anatomy: {time.time()-t0:.3f}s")

t0 = time.time()
items = list(ayon_api.get_workfiles_info(
    project_name, task_ids={task_id}, fields={"id", "path", "data"}
))
print(f"API call: {time.time()-t0:.3f}s")
print(f"{len(items)} items")
```

```
Anatomy: 28.152s
API call: 0.119s
11 items
```
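To narrow down where those 28 seconds go, wrapping the constructor in cProfile should point at the hot call. Sketch below, with a placeholder `slow_init` standing in for `Anatomy(project_name)` since that only runs inside the launcher environment:

```python
# Hedged sketch: profile the slow constructor and print the top entries by
# cumulative time. Replace slow_init() with Anatomy(project_name) in practice.
import cProfile
import io
import pstats

def slow_init():
    # Stand-in for Anatomy(project_name).
    return sum(range(100000))

profiler = cProfile.Profile()
profiler.enable()
slow_init()
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(10)  # top 10 by cumulative time
report = stream.getvalue()
print(report)
```

If the cumulative time lands in filesystem calls (e.g. exists/stat), that would line up with a mount problem rather than the server.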

I tested your snippet on two different projects on my local AYON server.

If you are on Linux, do you have automounts?
Outside of AYON, I recall some path.exists slowness when there was a failing automount (on a drive unrelated to the one path.exists was checking), which had to time out on every call.
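One way to test for that failure mode: run the check in a worker thread with a timeout, so a hung automount shows up as a timeout instead of blocking the caller. A minimal sketch:

```python
# Hedged sketch: os.path.exists with a timeout; timed_out=True suggests the
# underlying mount is hanging rather than the path simply not existing.
import os
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def exists_with_timeout(path, timeout=2.0):
    """Return (exists, timed_out) for one os.path.exists call."""
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(os.path.exists, path)
    try:
        return future.result(timeout=timeout), False
    except FutureTimeout:
        return False, True
    finally:
        pool.shutdown(wait=False)  # don't block on a hung worker thread
```

Note the caveat: the hung worker thread is abandoned, not killed, so this is a diagnostic, not a production fix.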

Thanks so much for testing. On our previous version, template resolution took around 0.4 s, which is still somewhat slow but manageable. Are there any specific operations within the anatomy resolution I should time to keep isolating?

Tbh, I don’t know.


But I have two thoughts to share, in case they help:

  • With a large number of concurrent requests to the server you may see some lag; that’s why we have these notes in the documentation: Customize Site ID and Optimizing Deadline Farm Render Nodes in AYON.
  • As @Yul hinted, this may be related to the deployment. I don’t think there’s a troubleshooting checklist to go through one by one, as this can differ from studio to studio. Personally, I saw very bad network performance when I tested hosting my sandbox AYON server on Windows, as shown in Quick AYON Deployment on Windows.