Your thoughts on A.I. art creation

Started by Racoon, Sun 07/08/2022 21:08:14


Danvzare

#200
Quote from: cat on Yesterday at 10:44:43
What I don't get: don't all traditional human artists train on existing art? I imagine that art teachers will show a bunch of Picasso paintings to their students and tell them "Now do something similar" and people will look at the pictures and copy parts of it or only concepts into new paintings. Heck, even the old masters learned by just copying other paintings.
Why is it different here?
Good question. And the answer will depend on who you ask.

Some people will say there isn't a difference.
Others will say the difference is that a human can never remember anything perfectly, let alone recreate it perfectly, so when a human trains from something, it introduces biological imperfections.

For me though, I say the difference is awareness. If you understand how GenAI works, you realize it isn't learning anything. Now don't get me wrong, there's a chance that what they've developed could be used as a small part of a proper AI that is capable of learning from other artwork. But as of right now, it's just a glorified filter that takes a lot of input data. Just get it to generate "trailer screenshot", and look at the perfect "recreation" of actual screenshots from popular movies that came out at the time of the original training data.
I'm not kidding about it being a glorified filter either. If GenAI is learning, and is comparable to the way humans learn to make art, then what is the difference between GenAI and the nearest-neighbour scaling algorithm, other than the quantity of data being fed in?
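
To make that comparison concrete, here's roughly what nearest-neighbour scaling does (a minimal sketch in Python with NumPy; the function and array names are just for illustration): every output pixel is a straight copy of whichever input pixel is closest. Nothing is blended, nothing is understood.

```python
import numpy as np

def nearest_neighbour_resize(img, new_h, new_w):
    """Resize an image by copying the nearest source pixel (no blending, no 'learning')."""
    h, w = img.shape[:2]
    rows = np.arange(new_h) * h // new_h   # source row for each output row
    cols = np.arange(new_w) * w // new_w   # source column for each output column
    return img[rows[:, None], cols]        # pick pixels, don't interpolate

# e.g. squash a 512x512 picture down to a 32x32 "pixel art" thumbnail
art = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)  # stand-in for a real picture
thumb = nearest_neighbour_resize(art, 32, 32)
```

The scaling rule is fixed and the output is entirely determined by its input; my point is that a generative model strikes me as the same mechanical category of thing, just with vastly more input data baked in.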

It's hard to explain, because as humans, we have a tendency to see something that's imitating life and believe that life is imitating it. As an example, now that we've invented computers, a surprising number of people believe we live in a simulation, simply because they don't fully comprehend that we made computers to simulate life, not the other way around.



Quote from: LimpingFish on Mon 12/05/2025 23:20:42
As a non-pixel artist (or at least a very rudimentary one), does supposedly "good" AI pixel art still look weird to pro pixel artists? I mean, apart from the usual extra fingers and melting hair, AI pixel art looks somewhat...off to me. Is it because of AI's weakness with shadows and contrast (that flat 50-50 light/dark style inherent to AI), or is it something else? Palette choices? Pixel placement?
I'm far from a pro, but it looks off to me too. For me it's usually the pixel placement. They always place pixels in spots that no one ever would, unless they just scaled down a picture using nearest-neighbour.

Misj'

Quote from: cat on Yesterday at 10:44:43
What I don't get: don't all traditional human artists train on existing art? I imagine that art teachers will show a bunch of Picasso paintings to their students and tell them "Now do something similar" and people will look at the pictures and copy parts of it or only concepts into new paintings. Heck, even the old masters learned by just copying other paintings.
Why is it different here?
I actually think there is a fundamental flaw in this statement, because - at least in my opinion - the old masters did not learn by just copying other paintings. They learned by trying to understand the decisions made by other people in those paintings.

I like exploring other people's styles (as can be seen in many of my Blitz entries), yet I never copy anything. Rather, I try to make it my own and adapt what I see and understand into my own signature. In a way this is also the reason why I tend to be quite slow with my drawings...because every line is intentional. Everything is a decision. Everything has purpose. Even though some things are drawn from muscle memory (and yes, happy accidents do exist in what might appear as random lines to others).

As a result, my work is based on understanding, purpose, and story (my skills are just the 'interface' to put these on (digital) paper). The output of genAI (and of people who only copy/trace the work of others) lacks each of these. And without understanding, purpose, and story, whatever you create is - in my opinion - mediocrity. It might be mediocrity wrapped up in style over substance. And people might love it. But for me...when I draw something, the end-point is secondary to the road to get there (which is among the many reasons why I tend not to apply the word 'art' to my drawings). This is also the reason why genAI won't ever stop me from picking up that pencil (but I can understand why it would be disheartening to professionals who try to make a living; and I hate the fact that genAI is based (almost) completely on the stolen works of others).

So my question would not be "Why is it different here?", because to me that is obvious. A much more interesting question would be: "How is this similar?"

ps. I don't think this is the thread for this discussion. So I've said my piece and will now stick to my drawings again. ;)

LimpingFish

Quote from: Misj' on Yesterday at 12:46:44
ps. I don't think this is the thread for this discussion.

Threads merged!  :)

Anyhoo...

Y'know, we also have a term for people who steal art and pass it off as their own. We call them plagiarists. We don't excuse their behavior, because we fundamentally understand the purpose of stealing artwork: to profit from, or gain kudos for, someone else's work. Even if the plagiarist isn't sued, we acknowledge that an artistic violation has occurred and that any art presented by the plagiarist going forward might be tainted, regardless of its validity. As such, no artist wants to be called a plagiarist.

Prompt writing is not art. There may be a talent to effective prompt writing, but that in no way validates the resulting work as art. And if someone has only started claiming they're an artist since they discovered AI allowed them to generate content without any discernible talent, they are not an artist. This is not gatekeeping; they are just delusional.

Even if an "ethical" AI could be trained on, say, a single consenting artist's work, it would be essentially worthless without access to a large dataset. And as we've seen, all large datasets are inherently tainted, not only from the standpoint of copyright but morally, as they exploit the work of hundreds/thousands/millions of actual artists.

Generative AI is a grift, which is why its most vocal proponents are usually grifters. :-\
Steam: LimpingFish
PSN: LFishRoller
XB: TheActualLimpingFish
Spotify: LimpingFish

cat

Yes, AI is trained with large datasets. But this is just how human learning also works. I've never been to Egypt but if you asked me to draw a picture of the pyramids and the sphinx, I could probably make a mediocre drawing of it. Why? Because I've seen tons of photos of it throughout my life and I made a model of it in my brain. Would this count as plagiarism? Hardly, I'd say.
Now, if you asked me to make a more realistic drawing, I'd probably do a Google search for pyramids and use the pictures I find there as reference. Is this plagiarism? Most likely, but I dare say that most people who do graphics have looked up reference pictures before without giving credit. So is this better than AI?

Another example: imagine an app to look up birds. You take a photo of a bird, upload it, and the app tells you that the bird is most likely a European robin. This also has to be trained with lots of data from questionable sources. Would you claim here as well that this is all plagiarism, and ask how people can use such a thing? The data is the same, just the output is different.
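
For what it's worth, under the hood such an app is just a classifier trained on a big pile of photos. Here's a rough sketch of the inference step (assuming torchvision and its bundled ImageNet weights; "robin.jpg" is just a placeholder, and a real bird app would use its own fine-tuned model):

```python
import torch
from PIL import Image
from torchvision import models

# Pretrained weights: learned from roughly a million scraped photos (ImageNet).
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # resize / crop / normalize as the model expects

img = Image.open("robin.jpg")      # placeholder path to a bird photo
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))

# The output is just a label, not a new image.
label = weights.meta["categories"][logits.argmax().item()]
print(label)  # hopefully something like "robin"
```

The training-data question is identical; the only difference is that the output is a label instead of a picture.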
