this post was submitted on 11 Nov 2023
232 points (94.6% liked)
Asklemmy
I expect the data size to be a problem. Stable Diffusion defaults to 512x512 px because generating even a single image takes a lot of resources, and training a model takes far more. Now multiply that by roughly 30 frames to generate even one second of video. I think we need something that scales better.
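A quick back-of-the-envelope sketch of what "times 30" means in raw data (illustrative numbers only, assuming uncompressed 8-bit RGB, not anything Stable Diffusion actually stores internally):

```python
# One 512x512 RGB frame vs. one second of video at 30 fps.
frame_px = 512 * 512
frame_bytes = frame_px * 3       # 8-bit RGB, 3 bytes per pixel
video_bytes = frame_bytes * 30   # ~30 frames per second of video

print(frame_bytes)   # 786432 bytes per frame
print(video_bytes)   # 23592960 bytes per second, 30x the single-image cost
```

And that is just the raw pixels; activations and gradients during training multiply it further.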
I fully expect this to work decently within a few years, though; no matter how hard the challenge is, AI is moving really fast.
"Fisheye" generation seems obvious. Give the network a distorted view of an arbitrarily large image, where distant stuff scrunches inward toward a full-resolution point of focus. Predict only a small area - or even a single pixel. This would massively decrease the necessary network size, allowing faster training. (Or more likely, deeper networks). It'd also Hamburger Helper any size dataset by training on arbitrarily many spots within each image instead of swallowing the whole elephant.
Even without that, video only needs a few frames at a time. You want to predict a future frame from several past frames, and you want to tween a frame in the middle between past and future frames. That's... pretty much it. Time-lapse the "past frames" by sampling one per second and you can predict the next second instead of the next frame; the stuff in between can then be tweened.
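The tweening step, at its most naive, is just a blend between the two bracketing frames. Real interpolation models estimate motion rather than blending pixels, but this illustrates the shape of the operation:

```python
import numpy as np

def tween(frame_a, frame_b, t):
    """Naive tween: linear blend between a past frame and a future frame.
    t = 0 gives frame_a, t = 1 gives frame_b, 0.5 is the midpoint."""
    return (1.0 - t) * frame_a + t * frame_b

a = np.zeros((4, 4))   # stand-in "past" frame
b = np.ones((4, 4))    # stand-in "future" frame
mid = tween(a, b, 0.5)
print(mid[0, 0])  # 0.5
```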
Stable Diffusion can do arbitrary sizes now, as long as you have the VRAM for it, IIRC.
Of course, but that is precisely the problem: it gets expensive really, really fast.
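How fast it gets expensive can be sketched with rough counting. Assuming one token per 8x8 latent patch (an illustrative figure, not Stable Diffusion's exact tokenization) and self-attention over those tokens, pairwise attention cost grows with the square of the token count, i.e. roughly the fourth power of the side length:

```python
# Illustrative scaling of self-attention cost with image resolution.
for side in (512, 1024, 2048):
    tokens = (side // 8) ** 2   # assumed: one token per 8x8 latent patch
    attn = tokens ** 2          # pairwise attention entries
    print(side, tokens, attn)
```

Doubling the resolution quadruples the token count and multiplies the attention cost by sixteen, which is why VRAM runs out so quickly.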