Technology
This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.
Ask in a DM before posting product reviews or ads; such posts are otherwise subject to removal.
Rules:
1: All Lemmy rules apply
2: Do not make low-effort posts
3: NEVER post naziped*gore stuff
4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.
5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)
6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist
7: crypto related posts, unless essential, are disallowed
I mean actual seed, not prompt. If you're using something like Stable Diffusion it gives a seed number with each image. Using the same prompt and seed number gives exactly the same image.
Only the ones you can't run locally. Most people still use Stable Diffusion because it's the most powerful and open, and that lets you create anything you'd want and allows you to train it on whatever you'd want. You can make a model based on 500 images of Kirby and it can make similar-looking images with the same art style.
This argument never made sense to me. If I draw something, I put in the effort and make it with my own hands. AI image generators can mass-produce images. Not to mention that they're based on other people's work, not yours. It's not the same.
Fair points on the locally run AIs, I admit I don't have experience with those and didn't realize they were run differently. I defer to your knowledge there.
I disagree on the drawing point though. Nearly every artist learns their style by learning from other artists, in the same way that every programmer learns to code by reading other code. It IS different, but I don't think it's THAT different. It's doing the exact same thing a human would do in order to create a piece of art, just faster and automated. Instead of spending ten years learning to paint in the style of Dali, you can tell an AI to make an image in the style of Dali and it will do exactly what a human would: inspect every Dali painting, find the common ground, and figure out how to replicate it. It isn't illegal to do that, nor do I consider it immoral, UNLESS you are profiting from the resulting image. Personally I view it as fair use of those resources.
The sticky situation arrives when we start to talk about how those AIs were trained though. I think the training sets are the biggest problem we have to solve with these. Train it fully on public domain works? Sure, do what you want with it, that's why those works are in the public domain. But when you're training your AI on copyrighted works and then make money on the result? Now that's a problem.
As an artist you do not look at how 300 other artists have drawn a banana, you look at a banana and try to understand how you can use different techniques to capture the form, texture, etc. of a banana.
An AI calculates from hundreds of images the probability of lines and colours being arranged in a certain way and still being interpreted as a banana. It never sees a banana or understands what it is.
Tell me, where do you see a similarity between these two processes?
I don't get it. How is the seed different from the actual data of the picture then?
The seed is more like an address. It's a number that gets paired with the prompt to tell the model what variation of the thing it should output. Given the same seed and same prompt, the model will output the same image every time, no matter what.
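That pairing can be sketched in plain Python. This is just an illustration of the determinism, not how Stable Diffusion actually works internally; the `generate` function and its fake "pixels" are made up here, but the role of the seed is the same: it fixes the PRNG state, so the same (prompt, seed) pair always yields the same output.

```python
import hashlib
import random

def generate(prompt: str, seed: int, n_pixels: int = 8) -> list:
    # Combine prompt and seed into one deterministic PRNG state,
    # loosely like a sampler pairing text conditioning with a noise seed.
    state = int(hashlib.sha256(f"{prompt}:{seed}".encode()).hexdigest(), 16)
    rng = random.Random(state)
    # Stand-in for the generated image: a list of fake pixel values.
    return [rng.randrange(256) for _ in range(n_pixels)]

a = generate("a banana", seed=42)
b = generate("a banana", seed=42)
c = generate("a banana", seed=43)
print(a == b)  # True: same prompt + same seed reproduces the output exactly
print(a == c)  # False: a different seed gives different "noise"
```

The seed is tiny compared to the image itself, which is why it works like an address rather than the data: it only tells a fixed model where to start, and the model deterministically does the rest.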