This video explores a hybrid Unreal Engine + AI filmmaking workflow.
I wanted to see whether generative AI could be integrated into an Unreal Engine pipeline in a way that expands creative freedom, without giving up authorship over performance, camera, lighting, or environments.
To test this, I spent hundreds of dollars stress-testing Kling’s new generative video models, including Kling o1 and the newly released Motion Control feature in Kling 2.6, while combining them with motion capture, MetaHumans, and Unreal Engine 5.
The experiments cover wardrobe swapping from reference images, real-world footage versus Unreal Engine environments, AI hallucinations, lip-sync limitations, motion control, wide-shot hacks, crowd simulation, fight choreography, and the uncanny valley.
