A Practical Test with Generative Video Tools
We encountered a potential use case while working with a non-higher-ed client, an anti-human-trafficking organization for whom we produce survivor stories each year for their year-end campaign. Survivors rarely have footage or photos of their darkest moments, so retelling their stories requires creative b-roll.
In one case, a survivor recounting her childhood abuse said something along the lines of “people kept coming into my room at night.” We thought this would be an appropriate use for a series of AI-generated b-roll clips: the moment was a dark, distant memory, which lends itself to visuals that are a little abstract and cryptic.
Using RunwayML, we tried again and again, with various prompts and source material for reference, to produce even one usable b-roll clip. The clips the tool returned did not match what we were asking for, and the visuals kept morphing and doing unnatural things as each shot progressed, a failure mode that is still extremely common in generative video. After about an hour, we gave up and went a different route.
To get anything usable, we would have had to approach it the way Toys “R” Us and Carvana did: bring in artists and animators to create visual assets from scratch to pair with the AI-generated ones. At that point, it would have been faster and far more cost-effective to simply hire actors and shoot the scene we wanted in live action.