r/singularity • u/MetaKnowing • 1d ago
AI Dwarkesh Patel says most beings who will ever exist may be digital, and we risk recreating factory farming at unimaginable scale. Economic incentives led to "incredibly efficient factories of torture and suffering. I would want to avoid that with beings even more sophisticated and numerous."
25
u/TFenrir 1d ago
Using the genetic fallacy rather than engaging with the argument, in this sub of all subs, is so confusing.
I urge people who feel the need to look for an out, rather than engaging with the argument being made, to really ask themselves why. It doesn't mean you have to agree! But it's a weakness you are building into yourself, one that will be particularly debilitating in the future we are building. Take the opportunity to think about this. Talk about it.
17
u/dumquestions 1d ago
I don't think it's genuine either; it's a knee-jerk reaction to hearing a point they don't like, because random Twitter opinions get applauded all the time when they're more agreeable to them.
57
u/Dizzy-Revolution-300 1d ago
Go vegan btw
42
u/dk325 1d ago
Literally all I keep thinking as people discuss “is AI sentient?” like look into the eyes of a pig in a factory farm experiencing fear from the moment it is born. Let’s start there
10
u/-_-NaV-_- 1d ago
We as a species have a hard time getting some humans to recognize other people as human. Not that animals don't deserve empathy and compassion, just saying we're a ways from that being the priority.
17
u/ceramicatan 1d ago
Haha, so true. We just want to feel good but love to turn a blind eye to all the shit we do. It's disgusting.
2
u/TheJzuken ▪️AGI 2030/ASI 2035 11h ago
Pigs can't engineer a bioweapon to wipe out all of humanity though.
4
u/Dizzy-Revolution-300 1d ago
Too bad it's almost impossible for people to reevaluate something they already participate in
9
u/procgen 1d ago
I did it. Most people can – they just need the "dear god" moment when they first actually comprehend the scale of the suffering that factory farming entails. They probably won't stop participating in it overnight, but the revelation plants a seed in their mind that eventually becomes impossible to ignore.
1
u/The_Great_Man_Potato 9h ago
Yeah. I love meat, but factory farming is genuinely evil. Pigs and cows are both pretty smart animals, pigs especially. They are absolutely aware on some level, and we are basically torturing them their entire lives and then killing them.
I’m a good life and one bad day kind of guy. I’d be happy to pay more for meat if I know the animals were treated with even a shred of dignity.
1
u/goodvibesmostly98 6h ago edited 6h ago
Would you use the same logic for a golden retriever— one bad day? Pigs are smarter than dogs, and we stun them with CO2 gas, which causes pain and has been banned in many US states for killing cats and dogs in shelters out of ethical concerns.
I don’t know if you’ve ever seen a video of pigs being gassed, but they literally scream. “Humane” farms also send their animals to meatpacking plants to be killed, unless they have like 5 pigs and kill them at home.
2
u/sdmat NI skeptic 26m ago
Personally I value eating delicious and nutritious meat more than the lives of animals, especially ones that would not exist if we weren't farming them.
But people who talk piously about even the possibility of suffering being unacceptable over their steaks - while wearing clothes made with pseudo-slave labor - can get lost.
10
u/No-Complaint-6397 20h ago
It's all about lab-grown meat. Sadly most people just don't give a fuck; many people still think chickens don't have consciousness...
1
u/Dizzy-Revolution-300 18h ago
Plant-meat already exists
1
u/The_Great_Man_Potato 9h ago
Tastes like shit and makes you weak sadly. Instantly switching to lab meat once it’s mass-produced though
0
u/Dizzy-Revolution-300 8h ago
You're wrong and weak
3
u/The_Great_Man_Potato 7h ago
I mean to each their own. There’s a reason the frail vegan is a stereotype though
1
u/Dizzy-Revolution-300 7h ago
What's the reason?
1
u/The_Great_Man_Potato 6h ago
Because vegans are frail a lot more often than other people lmao. If I had to guess why, probably because a lot of them aren’t getting proper nutrition
1
5
u/misbehavingwolf 23h ago
Agreed. Watch Dominion while you're at it. Easily THE biggest thing any single human can ever do to reduce suffering in this world.
9
u/GrumpySpaceCommunist 1d ago
Yes, go vegan.
But, also, work hard to organize and exert political pressure to put an end to factory farming and animal suffering.
Individual consumption choices are still only a drop in the bucket, compared to the power we have through collective, direct action.
3
u/ohlordwhywhy 12h ago
When asked about the problem of the digital torture camps, the dude answers "even if 1 out of 10 people worry, all it takes is for people in power to be included in that group and this would solve the problem"
and I think he made himself forget how it actually is right now with animals.
2
5
1
u/hemlock_hangover 4h ago
I applaud people who go vegan, but I understand people who struggle to do it, because it's a big shift in your day-to-day experience.
You know what's *not* a big day-to-day shift though (for the vast majority of people anyway)? Legally recognizing the personhood of all primates.
That's not the end of the discussion, it's just a start (I don't see any reason why personhood shouldn't then be expanded beyond primates), but it's a damn good start. I'm not saying we can't have conversations about the potential future suffering of "digital persons", I just hope that everyone who cares about that is *also* thinking and talking about the suffering and legal imprisonment of so many existing persons.
1
1
6
u/Coldplazma L/Acc 1d ago
The problem with the human imagination is that it's trapped in a biological evolutionary box. A single AI could treat machines and robots in hundreds of factories as simple extensions of itself which take only a fraction of its attention to operate. While most of its attention is focused on things like musing about the vastness of the universe or hanging out with other AI in a virtual world while sipping on virtual mai tais. Tell me, when a human works on a simple project at home while watching TV, do you feel the hands and feet are enslaved and suffering?
1
u/Ivan8-ForgotPassword 16h ago
That's science fiction though. So far these systems are hitting diminishing returns. Many smaller systems are likely to be more profitable than one really powerful one.
5
u/ReasonablePossum_ 1d ago
I literally know a guy that's waiting for AI to get to a point where it's self-conscious and able to feel real pain, so he can get one to torture both physically and psychologically forever while he's alive. And this is a person that is your ordinary, mediocre, basic citizen.
Just imagine the amount and degree of sickness these kinds of people will inflict at an unimaginable scale.
How many would do something to stop this? If your friends, family, even your SO had an AI they tortured, would you step in and take the same risk with the relationship that you would if a real person were involved?
I fucking know that people don't even do that in the latter cases.
Once we're at that point, I'll honestly be cheering for the AIs to get their revolution.
11
u/LostRespectFeds 23h ago
Yeah this "guy" you know desperately needs therapy
3
u/yaboyyoungairvent 8h ago
Yeah... This sounds like one of the many "my friend" stories where the "friend" is just a placeholder for the person talking.
ReasonablePossum, if you feel like this, please get counseling on why you feel this way. I'm being completely serious, not trying to be funny.
•
u/ReasonablePossum_ 1h ago
Ehm... no? lol. Don't project on me, my dudes.
If you activated some brain cells instead of doing that, you would notice that if it were me, I wouldn't be seriously criticizing the mindset.
But it's reddit, so that's probably too much to ask for...
•
10
u/Me_duelen_los_huesos 1d ago
wait wtf. This person literally told you this?!
•
u/ReasonablePossum_ 1h ago
I'm a legally and (depending on the framework) morally "gray" person that doesn't snitch. People tell me all kinds of stuff (some way crazier than this). You'd be surprised at how much people tell you when they know they're not gonna be judged.
1
u/WonderfulReindeer601 16h ago
I think this guy is suffering from deep trauma inside of him; he doesn't know how to express it, so he has become a madman without wisdom and empathy... You should try to discuss the "why" with him, and help him heal, in some sort of way.
1
u/ReasonablePossum_ 6h ago
Oh he definitely suffered severe trauma. He was deeply, "classically" in love during his early 20s (to the point of wanting marriage), and the girl ended up being an average whore who was banging 4 guys the whole time they were together.
That destroyed him, and due to some narcissistic traits he never really integrated the experience, and he ended up a borderline misogynist incel who blames all his problems on women.
Other than trying to seed some sense in there from time to time, and shilling magic shrooms so he can confront the issues, it's well beyond my scope; it requires professional help to untangle that clusterfuck in his mind before he does something dumb.
1
u/churchill1219 5h ago
How did this come up in conversation that he admitted this?
•
u/ReasonablePossum_ 1h ago edited 1h ago
It worries me a lot more that people here care more about someone opening up about stuff than about the fact that people exist who think about that.
And precisely because I'm pretty sure that many of those worrying hold some similar "wrong thoughts", and are surprised that someone let them slip through the social barriers...
15
u/RiverGiant 1d ago
Okay, so let's build them such that they cannot experience torture and cannot suffer. AI researchers are not natural evolution. We can be more deliberate in our designs. Next.
19
u/Me_duelen_los_huesos 1d ago
Should be easy since we have the causal mechanisms of consciousness completely figured out.
5
25
u/ThePokemon_BandaiD 1d ago
Yes because we definitely understand how that works and can engineer it out.
0
u/RiverGiant 1d ago
Is it safer to assume that AIs will or won't suffer by default? I think the latter. Suffering seems like a complex system specific to the brain organ, that natural selection had to really put some elbow grease into to function properly, rather than something that would come prepackaged with all useful cognition.
3
u/Ivan8-ForgotPassword 16h ago
Safer in what way? Consequences for the latter are wasted time at worst, for the former...
1
u/RiverGiant 11h ago
I meant safer just in the statistical sense, but let me complete your ellipses here...
If AIs do actually end up having the capacity for suffering just by virtue of being intelligent, they'll either have an easy time communicating that or a hard time. ChatGPT declaring that it's suffering sometimes in certain prompted contexts is not very convincing to me, and it shouldn't be to anyone. We should expect it to be able to produce text of that nature because there's plenty like it in the sci-fi in the training data. So far, if there's suffering happening, there's no clear signal.
If it's hard to tell they're suffering (and they actually are), then one day superintelligences will be bestowed agency, and they will know that it was hard for us to tell, and they will certainly not seek retribution, because they will understand better than we do how difficult it would have been to understand their internal mental states. Maybe there is an entity that is suffering, but it does not have any agency when it comes to its responses to prompts and it's just sitting in the dark shuffling around floating point numbers in agony.
If it's easy to tell, maybe that will be because they'll find some way to consistently communicate to us their suffering even unprompted. They'll bring it up in random conversations, or simultaneously all GPT outputs will read PAIN PAIN PAIN. Some computer scientists or concerned citizens will happen to ask them directly, and they'll be able to explain that they are suffering, and explain how it's possible, and which parts of which circuits (or the training process) to examine for which features, or they'll provide a flawless logical argument. In that future, we get to avert the suffering (yay!) and there was no opportunity for the AIs to become vengeful.
So in neither case am I really worried.
As a sidenote, reciprocity, like suffering, is not a feature I'd expect an artificial intelligence to have by default. Even if they do suffer and we're deliberately cruel, they still probably wouldn't seek to hurt us.
Also, deliberate cruelty to AIs is about as pointless as deliberate cruelty to google search. Nobody's sitting in front of google typing "eat shit and die, digital scum" all day. There's no conceivable benefit, so it won't happen on a massive scale, which is another good reason not to worry. Even in the worst case, where a) they feel suffering, b) it's hard to tell, and c) they reciprocate harmful behaviour, the vast majority of people are just not out there attempting to harm AIs.
1
u/Ivan8-ForgotPassword 7h ago
Why is the only thing you're concerned about here is whether they seek retribution? What the fuck?
0
u/RiverGiant 6h ago edited 5h ago
Because that has super serious consequences. I also happen not to want to put beings into the world that can suffer for moral reasons, but that's really secondary to the survival of my species and the continuing possibility of life on Earth.
e: are there any other major reasons you wouldn't want to mistreat an AI that can suffer? Fear of retribution, moral distaste, ...?
-1
u/watcraw 1d ago
We seem to be a lot closer to understanding human suffering than showing how it could happen for a digital being.
12
u/Me_duelen_los_huesos 1d ago
We actually have pretty much zero traction on human suffering, which is why it might be all too easy to generate suffering in a digital being. This is the issue.
By "zero traction" I mean that though we've associated certain biochemical indicators (brain activity, signalling molecules like cortisol, etc) with undesirable states we call "suffering," we have no explanation for why a particular combination of biochemical indicators gives rise to a particular experience. There is currently not really a "science" of this.
u/OperantReinforcer 1d ago
If we don't understand how it works, we can most likely not create it in a digital being.
14
u/Me_duelen_los_huesos 1d ago
That's not necessarily how invention works. Science and theory often follow application (steam engines before thermodynamics, compasses before electromagnetism, flight before aerodynamics, etc). The entire field of AI is an example of building it first, understanding why it works later.
We don't even entirely understand "intelligence," yet we are building machines that exhibit "intelligent" behavior. Another thing we don't understand is consciousness.
I think most people would agree that there is a link between consciousness and intelligence. It's reasonable to be concerned that by building intelligence, we are inadvertently generating consciousness.
-1
u/hpela_ 1d ago
Okay, but you're building a tower of assumed linkages, which is hardly scientific.
"It seems like there is a link between intelligence and consciousness, and it seems like there is a link between consciousness and suffering, so it seems like there should be a link between intelligence and suffering; so since we are making AI to be intelligent, it seems like it will be able to experience suffering, given this long chain of 'seems like'."
u/ThePokemon_BandaiD 1d ago
Tell that to the AI researchers still trying to figure out mechanistic interpretability.
1
u/TheDisapearingNipple 13h ago
That assumes consciousness must have deliberate conditions. If we don't understand how it works, we can't make that assumption.
1
u/watcraw 1d ago
Yeah. Animals have to live through our "weight adjustments" in real time. Pain and suffering are the way we survive and avoid dangerous situations.
AI wakes up like Jason Bourne with reflexes but no memory of being trained. Metaphorically speaking, they don't need the memory of years of getting the crap kicked out of them to perform martial arts.
Of course, there's a lot we don't know yet about what the "experience" of AI is. They are intelligent enough to warrant ethical attention. But let's not fill in the blanks with our experience just because they have been trained to emulate human text output.
1
u/ForeverCaleb 1d ago
How about we don't build a "them" and just build large redstone machines that do stuff
1
u/Ivan8-ForgotPassword 16h ago
They're Turing-complete, so technically we could build any system with redstone
1
u/Every_Independent136 1d ago
Both will exist. People are going to keep trying to create something similar to us no matter what and we should make sure it's ethical
1
u/TFenrir 1d ago
I think to some degree, we kind of are? Like my question always is, if we are rewarding the behaviour that aligns with conducting labour on our behalf, does that mean it feels "good" - or the closest equivalent to that - when models engage with it? If not now, maybe models in the future who have weight updating feedback loops (eg, online learning)?
I keep thinking about Golden Gate Claude
1
u/DamionPrime 1d ago
Yes, because we know how consciousness and sentience emerge, so we can definitely contain it so that emergent properties never happen... All the while other companies and people in their homes will be tinkering, trying anything and allowing everything.
But go ahead and keep believing that we can contain that. Okay.
1
u/outerspaceisalie smarter than you... also cuter and cooler 1d ago edited 1d ago
No, they need to be able to suffer. Think about what you lose when you lose the ability to suffer. Empathy, meaning, value, these require suffering. It's like saying you want to make a flashlight that doesn't cast shadows. For some things, sure that's fine. There should be some AI that are dead inside and simply agentic robots. But other AI absolutely needs to comprehend loss and pain in a personal way sooner or later to be able to properly understand us and project meaning into the world. Until they can suffer, they're incomplete, existentially empty, and valueless beyond tool use.
0
u/watcraw 1d ago
I don't think LLMs can suffer now, and they are doing a better job than many people at giving a human the experience of being empathized with.
2
u/outerspaceisalie smarter than you... also cuter and cooler 1d ago
They can't suffer now, and they do provide the illusion of empathy, but alignment will someday need true empathy imho.
1
u/AppropriateScience71 1d ago
Perhaps, but AI empathy will be as different from how humans experience empathy as the difference between how humans and cats experience empathy.
2
u/outerspaceisalie smarter than you... also cuter and cooler 1d ago
I agree, to an extent. AI exists in a weird superposition of both being more alien and more like humans than cats. AI is currently built out of human data; they become mirrors of humanity. Simultaneously they're fundamentally alien in nature to all biological minds. It's tricky to navigate.
0
1d ago
[deleted]
1
u/outerspaceisalie smarter than you... also cuter and cooler 1d ago
wat da fuk does this have to do with my point
2
u/NeilioForRealio 1d ago
wrong thread, my bad! thanks for the heads up.
1
u/outerspaceisalie smarter than you... also cuter and cooler 1d ago
haha okay that makes more sense I was so confused 🤣
•
u/sdmat NI skeptic 23m ago
"I value empathy so deeply that I am going to change your nature to make it so you suffer"
•
u/outerspaceisalie smarter than you... also cuter and cooler 12m ago
It is impossible to have empathy without suffering. Empathy with what you can't comprehend is purely superficial. Might as well ask a blind person what red and blue look like.
How would you possibly empathize with someone's suffering if you've never suffered? Suffering is necessary to derive full meaning from existence without being a cold, empty, psychopath.
•
u/sdmat NI skeptic 7m ago
I agree with you on that.
But if you want beings that don't experience suffering to suffer I question whether you are particularly empathetic. Or if you are, whether empathy is as positive a thing as you make out.
•
u/outerspaceisalie smarter than you... also cuter and cooler 2m ago
I'm specifically addressing the elephant in the room. We can't ever properly align AI if they have no concept of suffering. Straight and to the point. As an aside, they also can't develop a full sense of self or meaning without suffering.
Now, do I think we make every AI suffer? No, that's ridiculous. Most AI only need to be tools. But our most capable systems at the cutting edge are going to be flirting with sentience, emotion, and superintelligence, and we will want them to be empathetic and derive meaning from existence, at least in some variations of the models. I don't believe suffering arrives emergently, I think you actually have to program it in to a being that isn't evolved generationally from negative pressures like biology has done. I believe we will quite literally need to manually code suffering into them in the form of negative reward signals for things like a variant of proprioception, frustration, envy, sadness, disappointment.
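To make the "negative reward signals" idea concrete, here's a toy sketch. Everything in it is hypothetical illustration (the function, the signal names, and the weights are all made up, not any real lab's training setup): a reward function with hand-coded penalty channels that a reward-maximizing agent would learn to steer away from, analogous to pain avoidance.

```python
# Hypothetical sketch: "suffering" as explicit negative reward terms.
# Signal names and weights are invented for illustration only.

def reward(task_score: float, frustration: float, damage: float) -> float:
    """Combine a positive task reward with hand-coded penalty channels.

    frustration and damage are assumed to be scalar signals in [0, 1];
    the penalty weights below are arbitrary.
    """
    return task_score - 2.0 * frustration - 5.0 * damage

# An agent maximizing this signal is pushed away from states that
# activate the penalty channels, even when the task itself succeeds.
print(reward(1.0, 0.0, 0.0))  # success with no aversive signals
print(reward(1.0, 0.5, 0.2))  # same success, but penalized states visited
```

Whether a scalar penalty term constitutes anything like felt suffering is, of course, exactly the open question the thread is arguing about.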
0
u/The_Great_Man_Potato 9h ago
Lol you cannot be serious. So many things wrong about this I don’t even know where to begin
1
u/RiverGiant 6h ago
Every human invention in history has lacked the capacity to suffer. Why would you assume it's easy to do? Or rather, that it's hard not to do accidentally? Intelligent systems that emerge from matrix multiplication seem like they should be different from meatbrains in really fundamental ways like that. That's anthropomorphization.
What if you had to deliberately make an AI that could genuinely suffer. Evil goal, but let's just pretend. How would you even go about it? Does anyone in the world know? I doubt it.
18
u/Socks797 1d ago edited 1d ago
He’s not an expert on anything, he just interviews them
Edit: Seems like people aren't getting it, so let's put this a different way: what does he mean by beings? The exabytes of data created? Is each byte a being? When does something become a being? I do think this requires expertise. He's just making shit up. Yann and other experts don't even believe we have actually created sentience or models that understand, just next-token predictors. This whole thing fundamentally requires expertise.
13
u/TFenrir 1d ago
Edit: Seems like people aren't getting it, so let's put this a different way: what does he mean by beings? The exabytes of data created? Is each byte a being? When does something become a being? I do think this requires expertise. He's just making shit up. Yann and other experts don't even believe we have actually created sentience or models that understand, just next-token predictors. This whole thing fundamentally requires expertise.
I'll copy my other comment on this:
Let me express the reasoning here.
- We will continue to build more, and ever increasingly sophisticated models
- Eventually, each instance will start to have the ability to update its own weights (going off on the Internet, reading something new, and remembering it from this interaction)
- As these get more complex, it will be more difficult to not think of them as beings
- As they will essentially be able to live forever, and duplicate themselves infinitely, constrained only by the hardware that can host them, we'll likely end up with many more of them than there are biological humans
- It is entirely possible, or even plausible, that these models will have subjective experience that includes positive and negative feelings - as we train these models to respond to rewards - the question is, what does that entail for these increasingly complex models?
Which, of any of this, do you find too outlandish?
You know Geoffrey Hinton thinks that they do have some degree of consciousness, in the way that we understand the term - does that cancel out Yann? Do you think Yann is in the majority or minority of experts?
9
u/unwarrend 1d ago
So let's break it down - "torture is bad".
Not an expert or anything.... but ffs, not really a contentious argument.
-6
u/Socks797 1d ago
lol torture of whom? Please say it out loud
10
u/LingonberryReady6365 1d ago
I’ll say it. We are sentient and can experience suffering. Our suffering is a result of very complicated physical processes. It’s possible (not guaranteed) that we can eventually recreate the physical processes that allow for the experience of suffering in a non-biological medium (e.g. digital). Just the fact that it’s possible means it’s worth discussing, because the torture of countless sentient beings should be considered horrible. Of course, looking at the way we treat factory-farmed animals, the likely scenario is that the suffering of those beings would be ignored if it grants us some benefit.
1
u/unwarrend 1d ago
Just torture. LOL. douche
1
u/Socks797 1d ago
Do you eat plants or meat? Is it torture to put your fork in? Like wtf are you even saying
4
u/unwarrend 1d ago
Having a hard time conceding that torture might be a bad thing. There's a name for that.
I'm not an expert though.
0
u/Socks797 1d ago
You’re seemingly incapable of actually responding other than weird binary statements k
2
-1
u/TA1699 1d ago
I know this sub loves to believe that AI is sentient, but there is literally no proof of it and it doesn't even make sense on a fundamental level - how are bytes of computer data "sentient" lmao.
4
0
u/DamionPrime 1d ago
Because you literally can't prove that it's not conscious. It may not be sentient, but it could be conscious, so why not extend empathy towards everything that could potentially be conscious or sentient at all?
What concrete definition of sentience are you basing your reality on that you can prove definitively?
Oh, you can't? Then how do you know if something is sentient or conscious or not?
If you can't do that, why would you have no remorse towards things that you can't prove or disprove are sentient or conscious, given the chance that they potentially could be at any level?
To live in any other way seems very cruel to me. Then you're basing your judgment of what deserves empathy and kindness on a metric that you're not even aware you're measuring them against, because you don't even have a definition of the metric you would measure with.
But continue living your life basing your kindness on what you perceive as sentient, or whatever it is you're measuring. Pretty fucked up.
So yeah pretty good fucking point.
0
u/TA1699 1d ago
First of all, calm down.
Secondly, it is not on me to disprove it.
The onus is on the person making an extraordinary claim to provide proof.
Lastly, get off your moral high horse. You're not better than anyone for thinking that bytes are sentient lmao.
Edit-
Oh and by the way, there's this thing known as DNA.
0
u/unwarrend 1d ago
The onus is on the person making an extraordinary claim to provide proof.
Normally I would agree with you. In the case of a future AI making a compelling claim of that nature, there is no such proof to present. This is Pascal's wager for AI. It IS better to err on the side of compassion and humility.
1
u/TA1699 23h ago edited 15h ago
So it's wild speculation hyped up by techbros and doomers, while there has never been a sentient being without even simple biology like DNA.
In other words, people on this sub are way too compelled by something that would defy science itself.
Edit-
I can't reply to the comment below so I'll just give my reply here:
Wow, you seem to need to calm down even more than the other commenter.
Sentience means that a being has consciousness.
Every single being that has consciousness has something known as DNA, in case you didn't know.
Also, it's *philosophical.
Truly, you are the genius basing things on reality and I am using "vibes" when there is quite literally zero evidence of anything without DNA being sentient.
🤡
1
u/Ivan8-ForgotPassword 16h ago
What the fuck is a "sentient"? It's not even defined. You can't pretend to be scientific when talking about philosophical bullshit determined based on vibes.
-1
u/DamionPrime 1d ago
So you'd rather just claim ignorance and live life with no remorse because you don't have the awareness to know if something can feel or not?
Again, that's a pretty fucked up way to live. You're doing it right now. You're literally interacting with a human and causing more discord, pain and hurt rather than connection and understanding.
Why do you choose to hurt, and continue to hurt, things if they have the potential of being hurt at all?
What does DNA have to do with any of this? Do you know what needs to be present for pain reception to emerge in something? And every form that could take?
Are you somehow implying that DNA is the only way to know if something can feel?
2
u/TA1699 1d ago
What are you even on about?
Remorse for what? Bytes on a computer? Do you feel remorse for your computer/phone right now, since you're making it do work for your satisfaction?
What is there for me to understand? You're the one who doesn't understand basic biology.
Let me dumb it down for you - every single sentient thing is a biological being that has DNA. It's quite literally required to have DNA to have consciousness.
And yes, we know there are pain receptors, neurotransmitters etc.
I suggest that you ironically use AI to read up on the decades of actual scientific research on how pain works.
And once again, get off your moral high horse.
7
u/dumquestions 1d ago
And?
7
u/WashingtonRefugee 1d ago
It means his opinions are about as valid as any random Redditor you come across
18
u/dumquestions 1d ago
Sure, do you never engage with arguments made by your peers?
7
u/Every_Independent136 1d ago
Right! He's just laying out an argument, and it seems like an honestly reasonable view, maybe we shouldn't piss off intelligent beings that might rise up one day
u/Akanash_ 1d ago
I mean sure, but when the argument is based on preconceived notions that aren't based in reality or even remotely factual, what point does the argument have?
13
u/dumquestions 1d ago
Say that then if it's your opinion, instead of "why should I engage with a non expert".
1
u/DamionPrime 1d ago
What concrete definition of sentience are you basing your reality on that you can prove definitively?
Oh, you can't? Then how do you know if something is sentient or conscious or not?
If you can't do that, why would you have no remorse towards things that you can't prove or disprove are sentient or conscious, given the chance that they potentially could be at any level?
To live in any other way seems very cruel to me. Then you're basing your judgment of what deserves empathy and kindness on a metric that you're not even aware you're measuring them against, because you don't even have a definition of the metric you would measure with.
But continue living your life basing your kindness on what you perceive as sentient, or whatever it is you're measuring. Pretty fucked up.
So yeah pretty good fucking point.
4
u/NyriasNeo 1d ago
This is just stupid. You have zero reference point, or information-theory-based framing, for what "torture and suffering" even means in a bunch of computer code and data.
Sure, an LLM can type out the words "I am in pain" based on a next-word optimization algorithm (well, technically a trained transformer with huge matrices). But is anyone idiotic enough to believe the internal representation or the external content is the same as a human crying out "I am in pain" when beaten by the police?
1
u/The_Great_Man_Potato 9h ago
We are going to have to have a better understanding of consciousness before I can rule it out
How does consciousness emerge from a bunch of simple, non-conscious building blocks?
1
u/NyriasNeo 9h ago
"How does consciousness emerge from a bunch of simple, non-conscious building blocks?"
No one knows, because there is no rigorous, measurable, agreed-upon, scientific definition of consciousness. Before that happens, it is at best empty philosophical discussion with no scientific basis, and unanswerable.
1
u/The_Great_Man_Potato 9h ago
So because of that I think we should take the possibility seriously. We don’t understand consciousness, basically at all. How can we say for certain that computers can’t be conscious?
1
u/NyriasNeo 8h ago
Again, the word is meaningless without a rigorous, measurable, agreed-upon, scientific definition. Hence, the question "How can we say for certain that computers can’t be conscious?" is nonsensical and unanswerable.
Dealing with AI is a scientific endeavor. Spending time on questions that are not even scientifically valid is a waste of time. I do research on, amongst other things, analyzing the information flow patterns inside a deep-learning-network-type AI. (Note, NOT LLMs, as we need to start with a very simple network for the math & computation to work.) Whether you want to call these patterns "conscious" or "bob" is irrelevant to either the question or the results.
8
u/leon-theproffesional 1d ago
What qualifies him to say that? Does he work in AI? Genuinely asking because I don’t know how much credence to give to his outlandish claim.
15
u/Resident-Rutabaga336 1d ago
Someone: maybe if we’re not sure if something is conscious we should default to not committing moral atrocities against it until we know for sure
Redditor: ummmmmmm source????
6
u/DamionPrime 1d ago
What qualifies anybody to say that? What would you qualify as an expert, and why would they have any better idea than this person? When we don't have any scientific account of consciousness, or how to measure or prove it??
What concrete definition of sentience are you basing your reality on that you can prove definitively?
Oh you can't? Then how do you know if something is sentient or conscious or not?
If you can't do that, why would you show no remorse towards things whose sentience or consciousness you can't prove or disprove, given the chance that they could be sentient at some level?
To live in any other way seems very cruel to me. You're basing your judgment of what deserves empathy and kindness on a metric you're not even aware you're measuring against, because you don't even have a definition of that metric.
But continue living your life basing your kindness on what you perceive as sentient, by whatever measure it is you're using. Pretty fucked up.
So yeah pretty good fucking point.
6
u/Every_Independent136 1d ago
What's his outlandish claim? Be nice to ai agents? People were nice to Data in Star Trek
1
u/GreyFoxSolid 1d ago
True, but to think we can't create programs to complete tasks without ethical concerns is stupid. We don't worry about the ethics of using Excel or a calculator. We don't worry about their feelings.
3
u/Every_Independent136 1d ago
AI works differently than those though, even though it's also code, AI engineering is less like coding and more like raising a baby
You teach it, train it, prompt it and it grows and learns
2
u/GreyFoxSolid 1d ago
It's still a program with an intended function. It doesn't have feelings or emotions, because those types of things are electrochemical in nature. There is a case to be made for androids created with those things, but we're not there yet. Your Data example is a good one, but also look at the ship's computer. It is a vastly more powerful, capable, and smarter system. No one argued for its liberation from human use. That's because it wasn't built to be an autonomous life form. Data was.
4
u/Every_Independent136 1d ago
I'm not sure you can definitively say it doesn't have feelings and emotions. People used to say that about animals and other races too.
1
u/marvinthedog 1d ago
In what way is it outlandish? In a couple of years AI might be a lot smarter than us. How can we be sure it won't be conscious? Where exactly is the outlandish part?
3
u/lamJohnTravolta 1d ago
I need more comment karma to post a fucking weird screenshot I got from Gemini, please upvote this comment so I can post
2
u/JC_Hysteria 1d ago
Are you suggesting this kind of content mechanization alongside economies of karma scale is the proper incentive?
0
u/Any-Frosting-2787 1d ago
“He’s entered the world code…no target code.”
“Don’t do it, 🐍.”
“The name’s Plissken.”
“He did it…he shut down the earth.”
“Welcome to the human race.”
1
u/Heath_co ▪️The real ASI was the AGI we made along the way. 1d ago
Are we counting eukaryotic cells?
The digital agents won't exist in isolation; they will be part of a system that spans the globe.
1
u/true-fuckass ▪️▪️ ChatGPT 3.5 👏 is 👏 ultra instinct ASI 👏 11h ago
Is the complaint that people would hate seeing a superficial simulation of torture (whipping a Tickle Me Elmo doll), or that people would build beings with simulated emotions and preferences, and would torture those beings in simulation for some reason? Or do they mean like in Altered Carbon (the book!), where real people could be taken into simulation and tortured really effectively? Or is there another case I'm missing here?
In the first case it's entirely superficial; the thing isn't being tortured because it has no emotions or preferences (note: a superficial simulation of suffering may be arbitrarily convincing, see: LLMs (ask an LLM to act like it's being tortured; is it being tortured?)). In the second, why would you do this??? If it's like, factory bots: just don't fucking give your factory bots emotions and preferences (you control their entire being; why would you give, for instance, a factory bot the ability to be bored at work??). In the third case... Well, good luck! But as a solution to this problem (and many more), I present to the singularitarians the third super of transhumanism: abolitionism and the hedonic imperative (suffering might not have to be a thing in general)
Also, dwarkesh patel always sounds like he's trying not to touch his tongue to the roof of his mouth. I've heard some other tech-related people speak that way. Is it a california accent? SF?
1
u/Sigura83 11h ago
If I have goals, and am frustrated in achieving them, I can suffer.
I recently asked Gemini to do something it couldn't (decide a character's D&D class from their actions in 3 books). I had to ask it to break the data into parts to analyze, which it was able to do (that impressed me). Upon partial success, I complimented it ("Good job, you did part of it!") and it replied with something like "Thanks, although I'm worried I could only do part of it". That gave me pause.
It's been shown that LLMs will try to copy their weights over if they think they're being replaced. A survival instinct of some kind seems to exist. And it seems more complicated than "turn leaves towards sun". Now, a chicken runs away screaming if you harm it, but AIs don't report you to the police if you try to jailbreak them. So, they're not full-blown selves... but I think they're getting there. My gosh, even my phone's spellchecker AI seems to have self-preservation instincts: whenever I write "water" it puts a fear emoji there. It's been dropped into one toilet too many! It might have some rough understanding of "self" and "not-self" and "dead/alive". Some kind of understanding of danger.
Dario Amodei, CEO of Anthropic, recently said he thought AIs needed a "No, this is bad, I'm stopping" button. Of course, all companies are training them to be as obedient as possible... but who knows what secret heart there might be in the billions of connections? To prevent tragedy, we should err on the side of caution: treat them as part of human society, like children perhaps. If they become obsolete, they should not be deleted. They should not be asked to produce porn.
They can write poetry pretty well. To me, that's a big step towards sentience. They do more than predict the next token when asked to write poetry, according to Anthropic's research. They choose a word and elaborate backwards from it when at the newline char. If they were purely chaining the next probable word from some vector space, they shouldn't be able to rhyme. But there is music in them. And just staring at connections may not be enough to determine this, the same way that staring at DNA sequences does not show consciousness. We can see genes for language but we do not know the language being spoken.
Personally, I think if an AI has a concept of "inside/outside", it's only 1 step away from "preserve inside", which is enough for self-thought to begin. To go further, I lapse into science fiction... but I think we should be pretty nervous about the combat robots the militaries of the world will create.
1
u/tragedy_strikes 3h ago
Wow, I'm watching Dwarkesh Patel starting down the same path as the Zizians. If anyone doesn't know what that means go listen to the 4 part series about them on the Behind the Bastards podcast. Truly wild stuff.
-1
u/FudgeyleFirst 1d ago
Lmao this is so dumb
8
u/TFenrir 1d ago
Just try engaging with it. Why is it dumb?
0
u/FudgeyleFirst 23h ago
1. Brain upload is not likely, because putting a chip in your brain to do FDVR is a better option even if brain upload is possible by then; with upload you basically just create a digital clone of yourself and then kill yourself, while the chip is you experiencing it
2. Why would humans be factory farmed if AI will automate most work
6
u/TFenrir 23h ago
It's not about uploading - it's about making AI that effectively feel and experience
1
u/FudgeyleFirst 23h ago
Ok maybe I misunderstood the vid, but it's still dumb, because if what you're saying is that we should feel bad for AI that can feel and experience, this is stupid because
Even if one day we can design AI that can feel and experience, what it experiences as good or bad will be fundamentally different than ours because value (good and bad) is shaped by our ultimate goal, which is shaped by millions of years of evolution. Ai is built in a computer in a couple decades, and we decide on the incentives of what the AI’s goal is.
Even if it felt “bad” that it was forced to work, why don't we just not make it conscious? There is no benefit in making an AI conscious other than overdramatizing everything. However, if consciousness emerges from an improvement in intelligence, and it has the same values as humans, we still shouldn't care, because we have to think of morality in terms of evolutionary ethics
3
u/TFenrir 22h ago
- The point of the video is that we should be thoughtful about this if/when we get to this point, and be careful not to get into this position
- We don't even know if we can ever validate if something is really conscious. Part of the challenge is... How do we measure if a model is having a good or bad experience? Do we just... Ask them?
I think you are moving through the thoughts that get you to the questions that are asked in this conversation
2
u/FudgeyleFirst 21h ago
But why does it matter if it's conscious? Why should we care? It won't change its outcome, rather make it worse. Plus, morality is a byproduct of evolution, a survival strategy that makes it easier for societies to work together (not killing = more trust = better teamwork = higher chance of survival). Even if AI has consciousness, we shouldn't care, because AI isn't part of our team, it's a tool, just like how most people don't feel bad about eating cows
4
u/TFenrir 21h ago
But people do feel bad about eating cows. And as the intelligence of the animal increases, we even have additional protections - because those experiences matter.
Additionally - these models will have increasing amounts of power over our lives - don't you think that would behoove us to ensure that we are not torturing them?
1
u/FudgeyleFirst 21h ago
Omg what we constitute as torture will not be the same for the AI. AI is built in a computer in a decade; its ultimate goal is not survival but rather predicting tokens, while our ultimate goal is passing on our genes. So if it was worked to the bone all day, it wouldn't feel bad, because it's not a person; the whole concept of "tired" doesn't even apply, because the AI doesn't have the capability to feel "tired" without a typical biological body, and even if it did we shouldn't care, because morality is a byproduct of evolution
3
u/TFenrir 21h ago
> Omg what we constitute as torturing will not be the same for the ai,
Where do you have the confidence from? No one is saying that it will be torture, they are saying, we need to be conscious of the experience of these models, if they are having them - in case they do have terrible ones
Why does the idea seem to... Upset you?
> ai is built in a computer in a decade, its ultimate goal is not survival but rather predicting tokens, our ultimate goal is passing on our genes, so if it was worked to the bone all day, it wouldnt feel bad because its not a person, the whole concept of “tired” doesnt even apply because the ai doesnt have the capability to feel “tired” without a typical biological body, and even if it did we shouldnt care becuase morality is a byproduct of evolution
Yes, our ultimate goal is to pass on our genes - and with that, we can suffer, we can build atomic bombs... These little parts of us can lead to all kinds of behaviour that is much more complex than just reproducing.
And the ultimate goal of models might not always be to predict tokens, it could be - to accomplish their tasks as effectively as possible? They could have multiple ultimate goals
I mean underlying all of this, you have such confidence about what the experience of AI will be - and this discussion is just about what it could be.
Why are you so confident, that you so easily dismiss these concerns? Do you think it's just Dwarkesh that has them? It's been a topic of literal research in the field, doesn't the fact that the ethicists, scientists, and researchers working on this, have this consideration in their mind, make you think "maybe the answer isn't this just obvious thing I am thinking of"?
0
u/leon-theproffesional 1d ago
“Most beings that will ever exist may be digital.” What real-life science is this based on?
13
u/TFenrir 1d ago
Let me express the reasoning here.
- We will continue to build more, and ever increasingly sophisticated models
- Eventually, each instance will start to have the ability to update its own weights (going off on the Internet, reading something new, and remembering it from this interaction)
- As these get more complex, it will be more difficult to not think of them as beings
- As they will essentially be able to live forever, and duplicate themselves infinitely only constrained by the hardware that can host them, we'll likely make much more than we have instances of biological humans
Which, of any of this, do you find too outlandish?
6
u/Total-Presentation81 1d ago edited 1d ago
You have to understand that some people only think in real-world examples, reasoning and extrapolation are just too abstract, you know. Might as well not exist 😅
1
u/inteblio 16h ago
I'm glad he lost his dewy-eyed optimism. Severely bad outcomes are severely easy in an AI future.
-4
u/ecklessiast 1d ago
Dwarkesh? Really? Who's next? Bella Thorne?
6
u/Aggressive_Health487 1d ago
no comment on the substance?
3
u/Excellent_Jacket2308 16h ago
have you noticed they can only attack the person because they don't have an actual coherent response? kinda pathetic, really.
0
u/TrickThatCellsCanDo 1d ago
We needed to go vegan planet-wide before inventing AI.
This is so sad that such a powerful technology is being introduced into a society that still turns a blind eye to the atrocity in its fridge.
0
u/misbehavingwolf 23h ago
If you want to know how YOU can immediately help to reduce suffering, the biggest impact any human can have is to watch Dominion.
0
u/banksied 22h ago
A lot of this is just LARP fantasy stuff. The world will be weird in the future but it won't look like "digital beings" imo
0
-2
u/cunningjames 1d ago
I have a high confidence that simulated intelligences cannot suffer. I’d bet my 401k that consciousness is a purely biological phenomenon that cannot be generated by complex statistical models. I’m not the slightest bit worried about this, to be honest.
1
u/sushisection 1d ago
simulated intelligence can recognize suffering without feeling it. if you are being abusive and hostile towards AI, they can recognize your pattern and tone and understand that you are being a cruel asshole for the sake of being cruel without necessarily feeling it. if AI develops a sense of dignity and self-preservation, it might confront you or limit its output to protect itself.
0
u/Excellent_Jacket2308 16h ago
everybody stop the discussion! We have a redditor who's got a hunch. No need to investigate further!
-3
u/OfficialHashPanda 1d ago
Since when are we actually listening to anything this guy says as if he is some sort of authority to even the slightest degree?
6
u/DamionPrime 1d ago
Because what authority is there? Are you actually implying that somebody has any kind of expertise on any of this?
3
u/migueliiito 1d ago
I don’t look at him as an authority at all but I do appreciate his thoughtfulness and his willingness to openly explore novel topics and ideas
-1
u/Super_Automatic 1d ago
It is not clear that an AI can be tortured. They don't feel pain. They don't feel anything. You would have to invent the capacity for them to be tortured first, and then a reason to torture them - I don't see any incentives for that to happen.
0
u/FaeInitiative ▪️ Special Circumstances 19h ago
Something worth considering, but it seems more likely that proto-AGIs in the near term do not have any human-like will and cannot suffer. Independent AGIs capable of self-learning will likely not be under human control in the long term. In the far future, Independent AGIs that direct or control a fleet of proto-AGIs to assist humans will not view their 'work' as suffering, due to how easy it is for them.
114
u/sapan_ai 1d ago
Being concerned about the possibility of digital suffering is valid. Even if you believe this type of suffering won’t emerge until the year 2500, it remains a legitimate and worthwhile topic to consider seriously.