How to Create a Personalized GenAI Video in 3 Easy Steps | by Ruben Broekx | Jan, 2025

AI video generation comes in many forms, each with distinct capabilities and challenges. Most often, you can classify a GenAI video into one of three categories:

  • Videos featuring known concepts and celebrities
  • Image-based videos starting from fine-tuned image-generation models
  • Image-based videos starting from edited content

Let’s break down each in more detail!

Videos featuring known concepts and celebrities

This type of video generation relies solely on text prompts to produce content using concepts that the Large Vision Models (LVMs) already know. These are often generic concepts (“A low-angle shot captures a flock of pink flamingos gracefully wading in a lush, tranquil lagoon.” ~ Veo 2 demo shown below) mixed together to create an authentic-looking video that aligns well with the input prompt.

Video made by Google’s Veo 2, prompt: “A low-angle shot captures a flock of pink flamingos gracefully wading in a lush, tranquil lagoon. […]”
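To make this prompt-only workflow concrete, the sketch below shows what such a request typically looks like in code. The endpoint, payload fields, and parameters are hypothetical placeholders rather than any specific provider's API; hosted text-to-video services (Veo, Runway, and others) follow a similar prompt-plus-parameters pattern.

```python
import requests

# Hypothetical text-to-video endpoint; real providers differ in URL,
# authentication, and parameter names, but the pattern is the same.
API_URL = "https://api.example-video-provider.com/v1/text-to-video"
API_KEY = "YOUR_API_KEY"

prompt = (
    "A low-angle shot captures a flock of pink flamingos gracefully wading "
    "in a lush, tranquil lagoon."
)

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "prompt": prompt,          # the only creative control you have here
        "duration_seconds": 5,     # hypothetical parameter
        "resolution": "1280x720",  # hypothetical parameter
    },
    timeout=60,
)
response.raise_for_status()

# Most services return a job ID that you poll until the clip is rendered.
print(response.json())
```

Note how the prompt is the only creative lever: everything it does not describe is left to the model, which is exactly where the consistency problems discussed next come from.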

However, a single picture is worth a thousand words, and prompts are never that long (nor would the video generation listen even if they were). This makes it nearly impossible for this approach to create consistent follow-up shots that fit together in a longer-lasting video. Look, for example, at Coca-Cola’s 2024 fully AI-generated commercial and the lack of consistency in the featured trucks (they change every frame!).

Learning: It’s nearly impossible to create consistent follow-up shots with text-to-video models.

Coca-Cola’s fully AI-generated commercial, created by the “Wild Card” studio, shows inconsistencies in the generated trucks.

One exception to this limitation, and probably the best-known one, is celebrities. Due to their extensive media presence, LVMs usually have enough training data to generate images or videos of these celebrities following the text prompt’s command. Add some explicit content to it and you have a chance to go viral, as shown by the music video below from The Dor Brothers. Though, notice how they still struggled to maintain consistency, as shown by the clothes that change in every single shot.

Music video featuring celebrities in AI-generated scenes: “The Drill” by The Dor Brothers.

The democratization of GenAI tools has made it easier than ever for people to create their own content. This is great, since it acts as a creative enabler, but it also increases the chances of misuse. This, in turn, raises important ethical and legal questions, especially around consent and misrepresentation. Without proper rules in place, there is a high risk of harmful or misleading content flooding digital platforms, making it even harder to trust what we see online. Luckily, many tools, like Runway, have systems in place to flag questionable or inappropriate content, helping to keep things in check.

Learning: Celebrities can be generated consistently thanks to the abundance of (visual) data on them, which, rightfully, raises ethical and legal concerns. Luckily, most generation engines help monitor misuse by flagging such requests.

Runway blocking video generation because it detected a celebrity.

Image-based videos starting from fine-tuned image-generation models

Another common approach to generating videos is to start from a generated image, which serves as the first frame of the video. This frame can be fully generated (as shown in the first example below) or based on a real image that is slightly manipulated to provide better control. You can, for example, modify the image either manually or by using an image-to-image model. One way of doing so is through inpainting, as shown in the second example below.

Learnings:
— Using images as specific frames in the generated video provides better control, helping you anchor the video to specific views.
— Frames can be created from scratch using image generation models.
— You can use image-to-image models to change existing images so they fit the storyline better.

A cat reading a book based on a generated image, self-made using Flux for the image generation and Runway to turn the image into a video.
A cat reading a book based on a real image, self-made using Flux for the inpainting and Runway to turn the image into a video.
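To make this workflow concrete, here is a minimal sketch of generating such a first frame locally with the diffusers library and FLUX.1-schnell; the prompt, resolution, and file name are illustrative assumptions. The saved image can then be uploaded to an image-to-video tool such as Runway as the video’s first frame, and diffusers also offers inpainting pipelines for the image-to-image route described above.

```python
import torch
from diffusers import FluxPipeline

# Load the distilled FLUX.1-schnell text-to-image model.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # keep VRAM usage manageable

# Generate the first frame of the video; prompt and resolution are examples.
image = pipe(
    prompt="A cat reading a book in a cozy library, cinematic lighting",
    guidance_scale=0.0,      # schnell is guidance-distilled, so guidance is off
    num_inference_steps=4,   # the schnell variant works with very few steps
    height=768,
    width=1360,
).images[0]

image.save("first_frame.png")  # upload this as the video's starting frame
```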

Other, more sophisticated approaches include completely changing the style of your images using style-transfer models, or making models learn a specific concept or person to then generate variations, as is done in DreamBooth. This, however, is very tough to pull off, since fine-tuning isn’t trivial and requires a lot of trial and error to get right. Also, the final results will always be “as good as it gets”, with an output quality that’s nearly impossible to predict at the start of the tuning process. Still, when done right, the results look amazing, as shown in this “realistic Simpsons” video:

AI-generated video “The Simpsons — 1950’s Super Panavision 70”, created by demonflyingfox.
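For the DreamBooth route, the idea is to fine-tune an image model on a handful of photos of a subject bound to a rare identifier token, and then prompt with that token. The sketch below covers only the generation side, assuming a checkpoint was already produced by a DreamBooth training run (for example with the diffusers training scripts); the local path and the “sks” token are conventional placeholders, not requirements.

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical path to a checkpoint produced by a DreamBooth fine-tuning run
# on ~10-20 photos of the subject you want the model to learn.
pipe = StableDiffusionPipeline.from_pretrained(
    "./dreambooth-output", torch_dtype=torch.float16
).to("cuda")

# "sks" is the rare identifier token the subject was bound to during tuning.
image = pipe(
    prompt="a photo of sks person as a 1950s film character, vintage film still",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("dreambooth_frame.png")  # use as an anchor frame for the video
```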

Image-based videos starting from edited content

A final option, which is what I mostly used to generate the video shown in this article’s introduction, is to manually edit images before feeding them into an image-to-video generative model. These manually edited images then serve as the starting frames of the generated video, or even as intermediate and final frames. This approach offers significant control, as you’re only bound by your own editing skills and the interpretative freedom of the video generation model between the anchoring frames. The following figure shows how I used Sora to create a segue between two consecutive anchor frames.

Learning: Most video generation tools (Runway, Sora, …) allow you to specify starting, intermediate, and/or ending frames, providing great control over the video generation process.

Transition from a starting frame to an ending frame, self-made using Flux 1.1 Schnell to generate the two backgrounds and Sora for the video generation. Note that Sora generated the top view of the robot in the second frame of the video, a “happy accident” since it fitted well.
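Since every tool exposes this differently, the snippet below is a hypothetical sketch of the general pattern rather than Runway’s or Sora’s actual API: you upload a starting and an ending anchor frame together with a text prompt describing the transition, and the model fills in the motion in between. All endpoint and parameter names are placeholders.

```python
import base64
import requests

def encode_image(path: str) -> str:
    """Base64-encode a local image so it can be sent as an anchor frame."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

# Hypothetical endpoint and payload; real tools expose the same idea
# (anchor frames + transition prompt) under different names.
response = requests.post(
    "https://api.example-video-provider.com/v1/image-to-video",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "first_frame": encode_image("frame_start.png"),  # manually edited anchor
        "last_frame": encode_image("frame_end.png"),     # manually edited anchor
        "prompt": "A smooth segue from the robot on the first background "
                  "to the robot arriving in the second scene.",
        "duration_seconds": 5,
    },
    timeout=120,
)
response.raise_for_status()
print(response.json())  # typically a job ID to poll for the rendered clip
```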

The nice thing is that the quality of the edits doesn’t even have to be high, as long as the video generation model understands what you’re trying to do. The example below shows the initial edit (a simple copy-paste of a robot onto a generated background scene) and how this is transformed into the same robot walking through the forest.

Learning: Low-quality edits can still lead to high-quality video generation.

AI-generated video based on a (badly) edited image where a robot is simply pasted onto a scene; self-made video generated using Runway.
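To illustrate how little editing effort such a composite takes, here is a minimal sketch of the copy-paste edit using Pillow; the file names are placeholder assumptions, and the robot cut-out is assumed to be a PNG with a transparent background.

```python
from PIL import Image

# Placeholder file names: a generated background and a cut-out robot
# saved as a PNG with a transparent (alpha) background.
background = Image.open("forest_background.png").convert("RGBA")
robot = Image.open("robot_cutout.png").convert("RGBA")

# Roughly scale the robot and paste it somewhere plausible in the scene.
robot = robot.resize((background.width // 4, background.height // 3))
position = (background.width // 2, background.height // 2)
background.paste(robot, position, mask=robot)  # alpha channel acts as the mask

# This rough composite is good enough to serve as the video's first frame.
background.convert("RGB").save("edited_first_frame.png")
```

The rough placement and scaling don’t need to be perfect: as the video above shows, the generation model reinterprets the pasted robot as part of the scene.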

Since the generated video is anchored by the self-edited images, it becomes significantly easier to control the flow of the video and thus ensure that successive shots fit better together. In the next section, I dive into the details of how exactly this can be achieved.

Learning: Manually editing specific frames to anchor the generated video allows you to create consistent follow-up shots.