Meta’s new “Movie Gen” AI system can deepfake video from a single photo

On Friday, Meta announced a preview of Movie Gen, a new suite of AI models designed to create and manipulate video, audio, and images, including creating a realistic video from a single photo of a person. The company claims the models outperform other video-synthesis models when evaluated by humans, pushing us closer to a future where anyone will be able to synthesize a full video of any subject on demand.

The company does not yet have plans for when or how it will release these capabilities to the public, but Meta says Movie Gen is a tool that may allow people to "enhance their inherent creativity" rather than replace human artists and animators. The company envisions future applications such as easily creating and editing "day in the life" videos for social media platforms or creating personalized animated birthday greetings.

Movie Gen builds on Meta's previous work in video synthesis, following 2022's Make-A-Scene video generator and the Emu image-synthesis model. Using text prompts for guidance, this latest system can generate custom videos with sounds for the first time, edit and insert changes into existing videos, and transform images of people into realistic personalized videos.

An AI-generated video of a baby hippo swimming around, created with Meta Movie Gen.

Meta is not the only game in town when it comes to AI video synthesis. Google showed off a new model called "Veo" in May, and Meta says that in human preference tests, its Movie Gen outputs beat OpenAI's Sora, Runway Gen-3, and Chinese video model Kling.

Movie Gen's video-generation model can create 1080p high-definition videos up to 16 seconds long at 16 frames per second from text descriptions or an image input. Meta claims the model can handle complex concepts like object motion, subject-object interactions, and camera movements.

AI-generated video from Meta Movie Gen with the prompt: "A ghost in a white bedsheet faces a mirror. The ghost's reflection can be seen in the mirror. The ghost is in a dusty attic, filled with old beams, cloth-covered furniture. The attic is reflected in the mirror. The light is cool and natural. The ghost dances in front of the mirror."

Even so, as we've seen with previous AI video generators, Movie Gen's ability to generate coherent scenes on a particular topic likely depends on the concepts present in the example videos that Meta used to train its video-synthesis model. It's worth keeping in mind that cherry-picked results from video generators often differ dramatically from typical results, and getting a coherent result may require lots of trial and error.
