I have had a lot of fun with LTX 2, but for a lot of use cases it is useless for me. For example, this use case, where I could not get anything proper out of LTX no matter how much I tried (mild nudity):

The video may be choppy on the site, but you can download it locally. It looks quite good to me, gets rid of the warping and artefacts from WAN, and the temporal upscaler also does a damn good job.
The first 5 shots were upscaled from 720p to 1440p and the rest from 440p to 1080p (that's why they look worse). No upscaling outside ComfyUI was used.

Workflow: I could not chain the 2 steps in one run (OOM), so the first group is for WAN; for the second step, you load the WAN video and run with only the second group active. https://aurelm.com/upload/ComfyWorkflows/Wan_22_IMG2VID_3_STEPS_TOTAL_LTX2Upsampler.json
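
If you want to script the two passes instead of toggling groups by hand, here is a minimal sketch that queues each stage separately against a local ComfyUI server. It assumes you have exported each group on its own via "Save (API Format)"; the file names are hypothetical, and only the /prompt endpoint and the two-stage split come from the workflow above.

```python
# Sketch: run the WAN group and the LTX upsampler group as two separate
# submissions so both models never have to fit in VRAM at the same time.
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default local ComfyUI address

def queue_workflow(path):
    """POST an API-format workflow JSON to ComfyUI's /prompt endpoint."""
    with open(path, "r", encoding="utf-8") as f:
        prompt = json.load(f)
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    req = urllib.request.Request(
        f"{COMFY_URL}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Stage 1: WAN 2.2 img2vid group only (writes the intermediate video).
print(queue_workflow("wan_group_api.json"))       # hypothetical file name

# Stage 2: once stage 1 has finished, point the LTX group's video-load node
# at the WAN output and queue that group on its own.
print(queue_workflow("ltx_upsampler_group_api.json"))  # hypothetical file name
```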

These are the kind of videos I could get from LTX alone: sometimes with double faces and twisted heads, and all in all milky and blurry.
https://aurelm.com/upload/ComfyUI_01500-audio.mp4
https://aurelm.com/upload/ComfyUI_01501-audio.mp4

Denoise should normally not go above 0.15, otherwise you run into LTX-related issues like blur, distortion, and artefacts. Also, for WAN you can set the number of steps to 3 for both samplers for faster iteration.
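
A minimal sketch of applying those two settings by editing API-format exports of the groups: the 0.15 cap and the 3 steps come from the notes above, while the file names and the assumption that the samplers are standard KSampler nodes are mine.

```python
# Sketch: cap denoise on the LTX upsampler group and drop the WAN samplers
# to 3 steps by patching the exported API-format JSON files.
import json

def patch_samplers(src, dst, denoise=None, steps=None):
    """Cap denoise and/or override steps on every KSampler-style node."""
    with open(src, "r", encoding="utf-8") as f:
        workflow = json.load(f)
    for node in workflow.values():
        if not node.get("class_type", "").startswith("KSampler"):
            continue
        inputs = node.get("inputs", {})
        if denoise is not None and "denoise" in inputs:
            inputs["denoise"] = min(float(inputs["denoise"]), denoise)
        if steps is not None and "steps" in inputs:
            inputs["steps"] = steps
    with open(dst, "w", encoding="utf-8") as f:
        json.dump(workflow, f, indent=2)

# LTX group: keep denoise at or below 0.15 to avoid blur/distortion/artefacts.
patch_samplers("ltx_upsampler_group_api.json",
               "ltx_upsampler_group_api_patched.json", denoise=0.15)

# WAN group: 3 steps on both samplers for faster iteration passes.
patch_samplers("wan_group_api.json", "wan_group_api_fast.json", steps=3)
```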

Someone on Reddit asked me why I don't use SeedVR for upscaling. The reason is that it makes the artefacts from WAN worse and does not properly keep small details, as in this example. And it's not even faster. Also, compared to RIFE interpolation, the LTX temporal upscaler does a much better job: the animation is smooth as butter, whereas with SeedVR you can feel the choppiness: