Your thoughts on A.I. art creation

Started by Racoon, Sun 07/08/2022 21:08:14


Danvzare

#200
Quote from: cat on Tue 13/05/2025 10:44:43What I don't get: don't all traditional human artists train on existing art? I imagine that art teachers will show a bunch of Picasso paintings to their students and tell them "Now do something similar" and people will look at the pictures and copy parts of it or only concepts into new paintings. Heck, even the old masters learned by just copying other paintings.
Why is it different here?
Good question. And the answer will depend on who you ask.

Some people will say there isn't a difference.
Others will say the difference is that a human can never remember anything perfectly, let alone recreate it perfectly, so when a human trains from something, it introduces biological imperfections.

For me though, I say the difference is awareness. If you understand how GenAI works, you realize it isn't learning anything. Now don't get me wrong, there's a chance that what they've developed could be used as a small part of a proper AI that is capable of learning from other artwork. But as of right now, it's just a glorified filter that takes a lot of input data. Just get it to generate "trailer screenshot", and look at the perfect "recreation" of actual screenshots from popular movies that came out at the time of the original training data.
I'm not kidding about it being a glorified filter either. If GenAI is learning, and is comparable to the way humans learn to make art, then what is the difference between GenAI and the nearest-neighbour scaling algorithm, other than the quantity of data that's being input?
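To make the comparison concrete: nearest-neighbour is the simplest scaling algorithm there is, where each output pixel is just a copy of whichever input pixel is closest. A minimal sketch in Python (purely illustrative, and obviously no claim about how any particular GenAI model is implemented):

```python
def nearest_neighbour_scale(image, new_w, new_h):
    """Scale a 2D grid of pixels by copying, for each output pixel,
    the closest pixel from the input. No blending, no interpolation."""
    old_h = len(image)
    old_w = len(image[0])
    return [
        [image[y * old_h // new_h][x * old_w // new_w] for x in range(new_w)]
        for y in range(new_h)
    ]

# Doubling a 2x2 image just duplicates each pixel:
src = [[1, 2],
       [3, 4]]
print(nearest_neighbour_scale(src, 4, 4))
# → [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

The "pixels placed where no human would place them" look comes from exactly this kind of mechanical copying with no awareness of what the pixels depict.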

It's hard to explain, because as humans, we have a tendency to see something that's imitating life, and believe that life is imitating it. As an example, now that we've invented computers, there's a surprising number of people who believe we live in a simulation, simply because they don't fully comprehend that we made computers to simulate life, not the other way around.



Quote from: LimpingFish on Mon 12/05/2025 23:20:42As a non-pixel artist (or a least a very rudimentary one), does supposedly "good" AI pixel art still look weird to pro pixel artists? I mean, apart from the usual extra fingers and melting hair, AI pixel art looks somewhat...off to me. Is it because of AI's weakness with shadows and contrast (that flat 50-50 light/dark style inherent to AI), or is it something else? Palette choices? Pixel placement?
I'm far from a pro, but it looks off to me too. For me it's usually the pixel placement. They always place pixels in spots that no one ever would, unless they just scaled down a picture using nearest-neighbour.

Misj'

Quote from: cat on Tue 13/05/2025 10:44:43What I don't get: don't all traditional human artists train on existing art? I imagine that art teachers will show a bunch of Picasso paintings to their students and tell them "Now do something similar" and people will look at the pictures and copy parts of it or only concepts into new paintings. Heck, even the old masters learned by just copying other paintings.
Why is it different here?
I actually think there is a fundamental flaw in this statement, because - at least in my opinion - the old masters did not learn by just copying other paintings. They learned by trying to understand the decisions made by other people in those paintings.

I like exploring other people's styles (as can be seen in many of my Blitz entries). Yet I never copy anything; rather, I try to make it my own and adapt what I see and understand into my own signature. In a way this is also the reason why I tend to be quite slow with my drawings...because every line is intentional. Everything is a decision. Everything has purpose. Even though some things are drawn from muscle memory (and yes, happy accidents do exist in what might appear as random lines to others).

As a result, my work is based on understanding, purpose, and story (my skills are just the 'interface' to put these on (digital) paper). The output of genAI (and people who only copy/trace the work of others) lacks each of these. And without understanding, purpose, and story, whatever you create is - in my opinion - mediocrity. It might be mediocrity wrapped up in style over substance. And people might love it. But for me...when I draw something, the end-point is secondary to the road to get there (which is among the many reasons why I tend not to use the word 'art' for my drawings). This is also the reason why genAI won't ever stop me from picking up that pencil (but I can understand why it would be disheartening to professionals who try to make a living; and I hate the fact that genAI is based (almost) completely on the stolen works of others).

So my question would not be "Why is it different here?", because to me that is obvious. A much more interesting question would be "How is this similar?"

ps. I don't think this is the thread for this discussion. So I've said my piece and will now stick to my drawings again. ;)

LimpingFish

Quote from: Misj' on Tue 13/05/2025 12:46:44ps. I don't think this is the thread for this discussion.

Threads merged!  :)

Anyhoo...

Y'know, we also have a term for people who steal art and pass it off as their own. We call them plagiarists. We don't excuse their behavior, because we fundamentally understand the purpose of stealing artwork: to profit, or gain kudos, from someone else's work. Even if the plagiarist isn't sued, we acknowledge that an artistic violation has occurred and that any art presented by the plagiarist going forward might be tainted, regardless of its validity. As such, no artist wants to be called a plagiarist.

Prompt writing is not art. There may be a talent to effective prompt writing, but that in no way validates the resulting work as art. And if someone has only started claiming they're an artist since they discovered AI allowed them to generate content without any discernible talent, they are not an artist. This is not gatekeeping; they are just delusional.

Even if an "ethical" AI could be trained on, say, a single consenting artist's work, it would be essentially worthless without access to a large dataset. And as we've seen, all large datasets are inherently tainted, not only from the standpoint of copyright but morally, as they exploit the work of hundreds/thousands/millions of actual artists.

Generative AI is a grift, which is why its most vocal proponents are usually grifters. :-\
Steam: LimpingFish
PSN: LFishRoller
XB: TheActualLimpingFish
Spotify: LimpingFish

cat

Yes, AI is trained with large datasets. But this is just how human learning also works. I've never been to Egypt but if you asked me to draw a picture of the pyramids and the sphinx, I could probably make a mediocre drawing of it. Why? Because I've seen tons of photos of it throughout my life and I made a model of it in my brain. Would this count as plagiarism? Hardly, I'd say.
Now, if you asked me to make a more realistic drawing, I'd probably do a Google search for pyramids and use the pictures I find there as reference. Is this plagiarism? Most likely, but I dare say that most people who do graphics have looked up reference pictures before without giving credit. So is this better than AI?

Another example: Imagine an app to look up birds. You take a photo of a bird, upload it, and the app will tell you that the bird is most likely a European robin. This also has to be trained with lots of data from questionable sources. Would you claim here as well that this is all plagiarism, and how can people use such a thing? The data is the same, just the output is different.

LimpingFish

But you are not a robot, and comparing how humans learn to how an AI "learns" is, as @Misj' pointed out, not the answer.

An AI is presented with an image of an object along with a caption telling the AI what the object in the image is. Let's say the object is a tree. Noise is gradually introduced to the image until the original image is completely replaced by noise. The adding of noise introduces variability into the information the AI is receiving. Next the AI is presented with a new image, except this time the image begins as noise, and the AI is told to "draw" a tree. The initial process is reversed, the AI gradually rebuilding a facsimile of what it "thinks" best represents the instruction it was given, using the information it received during the initial process. It works its way back from noise to a tree and we are left with a "new" image that looks a lot like the first image, but, thanks to the variables introduced, won't be an exact copy of the original image.
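The forward "noising" half of that description can be sketched in a few lines of Python. This is a toy illustration of the idea only: real diffusion models use carefully tuned variance schedules, a trained neural network to predict the noise, and usually a compressed latent space, none of which appear here.

```python
import random

def add_noise(pixels, t, steps):
    """Toy forward-diffusion step: linearly blend an image toward
    pure Gaussian noise. At t = 0 the image is untouched; at
    t = steps nothing of the original signal remains."""
    alpha = 1.0 - t / steps  # fraction of the original signal kept
    return [alpha * p + (1.0 - alpha) * random.gauss(0.0, 1.0)
            for p in pixels]

tree = [0.2, 0.8, 0.5, 0.9]                    # a pretend 4-pixel "tree" image
slightly_noisy = add_noise(tree, t=1, steps=4)  # mostly tree, some noise
pure_noise     = add_noise(tree, t=4, steps=4)  # no trace of the tree left
```

Training then teaches a network to run this process in reverse: given a noisy image and the caption "tree", predict the noise so it can be subtracted step by step until a tree-like image emerges.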

Now imagine this process repeated with images of every type of tree known to humankind, and the AI has a dataset of latent images linked to the word "tree" which can be combined for a near-infinite number of possible variations. But this also leads to an inherent problem with AI images, and highlights one of the major problems with machine image learning.

People often complain it's difficult to get AI to recreate an image with additional subtle edits, because that's not how AI "creates". It always starts the process with noise, and goes from there. And because of so many variables, it will rarely, if ever, arrive at the exact same image twice in a row. This is why, in AI generated video, faces will change from one scene to the next (or even mid-scene!), because the AI only "knows" that each frame should contain, say, a woman of a certain age, with a certain hair style and color, wearing a certain outfit. It doesn't have the ability to maintain consistency across each frame, never mind each scene, because, to it, each frame is a brand new process that starts, as always, with random noise.

So the answer to the question "How is this different from how a human learns?", or how a human processes information, is, to me, fairly obvious. It's very different.

In fact, it's so different, that to even attempt to compare the two, we have to reduce the argument to such a degree that we actively ignore everything that makes us human in the first place: Human sees thing=Learning accomplished. AI sees thing=Learning accomplished. Difference=None.

Quote from: cat on Yesterday at 08:08:31Now, if you asked me to make a more realistic drawing, I'd probably do a Google search for pyramids and use the pictures I find there as reference. Is this plagiarism? Most likely, but I dare say that most people who do graphics have looked up reference pictures before without giving credit. So is this better than AI?

You're being very general with the definition of plagiarism. Plagiarism is not "If I attempt to recreate anything I, as a human being, see, I am a plagiarist", and to suggest otherwise is ridiculous. Plagiarism in art is the taking of an existing piece of art and presenting it, without transformative intent, as your own; either by copying the art to an exact degree, or simply stealing the original art.

In looking up reference photos, and then drawing your interpretation of those photos, you are creating a transformative work, regardless of how closely your drawing resembles those original photos. It's why we have things like reference books, libraries of images for artists to use. You are not taking those images, re-uploading them, and saying "Look at these great photos I took!": that's plagiarism. Some libraries of photo reference materials will not only let you use their images to create artworks based on them, but will also let you use the photos themselves (in, for instance, works of photo collage; a transformative work), providing you are not taking those images and presenting them, unchanged, as a competing product of reference. Artistic intent is key.

Quote from: cat on Yesterday at 08:08:31Another example: Imagine an app to look up birds. You take a photo of a bird, upload it, and the app will tell you that the bird is most likely a European robin. This also has to be trained with lots of data from questionable sources. Would you claim here as well that this is all plagiarism, and how can people use such a thing? The data is the same, just the output is different.

Would I claim that an app that took copyrighted information, in the form of photos or text, to train an AI to help users determine what kind of bird is in their photo, is stealing? Yes. If the app designers could prove that every single piece of training data was ethically sourced, would that change my opinion? Yes. But...

...we inexorably arrive back at the point I made in my earlier post: "Ethical" generative AI is a redundant concept, because to attempt to create one without access to a LAION-type dataset would cripple the product to such an extent as to render it useless. AI only works if it has all the data.

Which is why the AI industry has now changed its tune; it somewhat admits that it stole billions of pieces of art, but now claims that latent images do not fall under copyright, seeing as no part of the original image is actually contained in their datasets, and that AI art is in itself a transformative work and doesn't fall under copyright either. How very convenient...

Also, if you made it this far in a very long post, thank you for your time and attention. :)

And, as always...
Spoiler
Fuck AI
[close]
