r/comics 7d ago

Comics Community (OC) AI 'art' and the future

Could be controversial but I'm just gonna say it... I don't like AI... and for me it was never about it not looking good. There are obviously more factors to this whole thing, like about people losing jobs, about how the whole thing is just stealing, and everything like that but I'm just focusing on one fundamental aspect that I think about a lot... I just wanted to draw what I feel...! 🥲🥲 Sorry about the cringe but I actually live for cringe 💖

49.1k Upvotes

1.2k comments

2.8k

u/illogicalhawk 7d ago edited 7d ago

One fundamental issue with the algorithm showing and reinforcing things you already know and like is that it's limiting. How small would your world and tastes be if you never tried something new, something outside your comfort zone, something that you didn't already know you'd like?

We're all much more diverse and interesting people because we've taken "risks" and experienced new things. Not all of them work for us, but that at least shows you're trying and open to growth.

35

u/itsmemarcot 7d ago edited 7d ago

Counter argument:

"Future AI, make a cartoon series [or whatever] with something I will probably like, but I don't know I like yet. Focus on making me a more mature cartoon enjoyer. Make it challenge my current boundaries."

I side more with the argument that, when you really really love a piece of media, it's like a connection with another mind (or set of minds): the makers of that media. As the comic says: you feel what they felt. You partake in their struggles, their hopes, their joys or sorrows.

Consuming AI media, then, it's like making love with a sex doll. It doesn't matter how realistic it is: it would get old real quick because "there's nobody there".

20

u/Special-Garlic1203 7d ago

Humans, as of right now, connect things oddly. We legitimately don't really get it. It's the irrationality and randomness of creative insight. As of right now, an algorithm likely can never replicate the way a human being will smash together 2 seemingly (but not really) random things in a way that tickles their fancy, and the way another person exposed to it will go "huh, yeah, I dig it"

Could AI? Theoretically, but where it's at right now it's monkeys on typewriters. You would have to spend years fine-tuning it before it became worth your time; most will quit. Algorithms are noticeably bad at it.

I just recently found a random band from a YouTuber I like. It doesn't really sound quite like anything I listen to. It makes me nostalgic for a time I wasn't alive. I love it so much. How would an algorithm ever guess that when I had literally no idea? It's through a shared human connection -- someone I'm connected with in some way through shared ideology/culture liked it, and that drastically increased the chances I would.

Part of why older TikTok was so popular was that it used that type of networking system. It recognized when you behaved similarly to others, and then predicted that would continue. So truly customized AI would be moving away from where algorithms are strong to where they're really noticeably weak.

Maybe someday. But certainly no time remotely soon. 

-3

u/fuckthesysten 7d ago

Back in 2013, computers were already impressing the pros with their mixing and matching: https://www.youtube.com/watch?v=92s_loXgaCM -- spoiler alert, professional art critics haven't been able to tell computer-made art apart for more than a decade.

7

u/TwilightVulpine 7d ago

Unless we just make this AI magic mind reading, it will only have the things you already liked to base itself on. It won't know the things you don't yet know you might like. At best it will recommend things other people with similar profiles liked, but that's not the same as really challenging your tastes.

Algorithms and ads today can guess things that you might be interested in, but they fail at it pretty often.

3

u/illogicalhawk 7d ago

I'd argue that they succeed at it surprisingly often due to the countless and often subtle ways our actions and interests are tracked and shared by various applications and technologies. There's sometimes the thought that we can curate what we feed the algorithm, but with devices potentially listening to our conversations, apps tracking our location and the places we visit, and much of this data being shared between companies, that's often out of our hands.

2

u/TwilightVulpine 7d ago

I wouldn't. As an example, I tried using TikTok, which is also mostly algorithmically driven. I had to hammer at it for a while, explicitly searching and following stuff I wanted and telling it to stop recommending stuff I didn't like, for it to be even mildly interesting to browse. At best it's on the level of keeping the TV on a channel you like in the background, including how often you just don't care about what's on the schedule. Using it means skipping a lot of what it tries to recommend you. Staying on it was more about staving off boredom than a testament to its ability to entertain. It would show some new stuff, but I'd have done better simply by following subs of my interests over here. It's telling that Instagram, which has an ungodly amount of data on everyone, is just as mediocre at it.

But worse, whenever it "challenged" me, it was through blatant clickbait and pushing for hatewatching. Because they seek engagement at any cost. Not satisfaction, not enrichment, simply engagement. If it were even moderately effective, I dread to think what sort of infuriating bullshit such a tailored AI would resort to, to keep you stuck to it.

2

u/itsmemarcot 7d ago

But the assumption of the comic is that generative AI will become a lot better in the future, including being able to do exactly that. Which is not certain (like any prediction of the future), but not implausible at all either.

OP, OC and myself are just discussing the implications of this what-if future scenario.

2

u/TwilightVulpine 7d ago

Well if it can be used in such a way, it will still more likely be used to manipulate you rather than give you everything you want and that which you don't know that you want yet. Even now this tech is costly, and must be bankrolled for a reason. They will want returns in some form or another.

But if anything we've seen a lot lately showing that the capabilities of tech are often overstated, and that too is done to manipulate us, to make it seem inevitable and all encompassing when the reality is far more limited and promises fail to materialize. Plenty has been said for and against fantastical AI futures, but not enough about more realistic ones and the veneer of marketing disguising them.

44

u/protestor 7d ago edited 7d ago

Even in your best scenario you are railroaded into liking whatever the AI spits out at you. All the "new" things you stumble upon are just things the AI decided you will like.. it's not up to you to decide, your fate has been preordained.

Which is awesome! .. for whoever controls the AI, which will be people like Elon Musk, Mark Zuckerberg, Sam Altman and the like.

There's already talk about how there should be laws to limit AI.. which in practice means laws that ensure AI will always be controlled by companies like OpenAI rather than by the common people.

Also: I'm not sure people will actually back away from AI just because it's artificial. People are hooked on their phones right now, consuming streams of meaningless media nonstop, and a lot of this media is already AI generated to some extent. People can't stop looking down at their phones because it's addictive, and AI will only make things more addictive.

The end game is connecting this stuff directly to your mind, with things like Neuralink.. like in Ghost in the Shell

3

u/donaldhobson 7d ago

> for whoever controls the AI, which will be people like Elon Musk, Mark Zuckerberg, Sam Altman and the likes.

Not really. I mean, they own the companies that make the AI. But they don't make the thousands of small day-to-day decisions. An army of minimum wage workers rates content as good or bad for the AI to learn from.

The end user gets a bit of control.

And to some extent, the AI does its own thing, not controlled by any human.

3

u/donaldhobson 7d ago

> There's already talks about how there should be laws to limit AI.

> which in practice means, laws that ensures AI will be always controlled by companies like OpenAI rather than by the common people.

The proponents of these laws are trying to avoid powerful AI that aren't controlled by any humans at all. Human control over AI is not automatic, and gets harder as the AI gets smarter.

3

u/protestor 7d ago

> The proponents of these laws are trying to avoid powerful AI that aren't controlled by any humans at all.

That's what they say. What they are building, in reality, is regulatory capture

1

u/donaldhobson 6d ago

Perhaps a bit of regulatory capture is the lesser of two evils when the alternative might be all humans dying to a rogue superintelligence?

1

u/protestor 6d ago

I understand you are personally worried about this stuff, but OpenAI isn't. Sam Altman is personally worried only about making his own net worth grow faster. He is willing to use the rhetoric of AI safety if it means he can slow down at least some of his competition, but he is not interested in actually hearing AI safety experts and taking their input into account. And that's why Ilya left.

https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence

https://www.centeraipolicy.org/work/openai-safety-teams-departure-is-a-fire-alarm

https://old.reddit.com/r/OpenAI/comments/1cu7lna/openais_longterm_ai_risk_team_has_disbanded/

1

u/donaldhobson 6d ago

> I understand you are personally worried about this stuff but OpenAI isn't. Sam Altman is personally worried only about making his own net worth grow faster. And he is willing to use the rhetoric of AI safety if it means he can slow down at least some of his competition

True. He is a ***

Consider 2 worlds.

World 1) Strict AI regulations. OpenAI gets some exceptions that let them make somewhat bigger AIs. Blatant regulatory capture. A lot of a**holes make a lot of money.

World 2) Every idiot in the world can mess around with AI freely. Someone makes a rogue ASI. Everyone dies.

3

u/EsperGri 7d ago

Except you could just choose to get new media that isn't tailored, and there's no guarantee the AI will stay under a company's control if people push back against such companies (people might not, though...).

One big issue aside from that though is content overload, but it's one that already exists.

4

u/Germane_Corsair 7d ago

How do you decide which new thing to try now? There's no reason to assume you can't just keep using the same method.

3

u/rjrgjj 7d ago

Don't assume the AI will be given the freedom to show you whatever you want.

Which is actually the real serious danger of AI. People are always going to be attracted to slop. And yes, being able to overly curate your experience is what’s directly causing the rising tides of bigotry by creating smaller minds.

But someone is on the other side of the AI subtly manipulating what you’re allowed to see. If they know you like hamburgers and hate hotdogs and they want you to hate Black people, expect a lot of innocent looking AI pictures of white people eating hamburgers and Black people eating hot dogs.

2

u/worotan 7d ago

Sounds like you believe the hype about AI, not what it can actually do.

2

u/itsmemarcot 7d ago edited 7d ago

No, I'm just following the assumption of the comic: "in the future, generative AI is a whole lot better". Which is not certain, but not implausible either.

What OP, OC, and I are discussing is the implications of this "what if" future scenario (which, again, is plausible).

-6

u/ifandbut 7d ago

But there is someone there...someone who made the AI content in the first place.

AI is a tool; it doesn't self-start. The only artists are the ones using the tools, not the tools themselves.