r/slatestarcodex • u/hn-mc • 1d ago
AI Is any non-wild scenario about AI plausible?
A friend of mine is a very smart guy. He's also a software developer, so I think he's relatively well informed about technology. We often discuss all sorts of things. However, one thing that's interesting is that he doesn't seem to think we're on the brink of anything revolutionary. He mostly thinks of AI in terms of it being a tool, automation of production, etc... Generally he thinks of it as something we'll gradually develop, a tool we'll use to improve productivity, and that's pretty much it. He is not sure we'll ever develop true superintelligence, and even for AGI he thinks perhaps we'll have to wait quite a while before we have something like that. Probably more than a decade.
I have a much shorter timeline than he does.
But I'm wondering in general, are there any non-wild scenarios that are plausible?
Could it be that AI will remain "just a tool" for the foreseeable future?
Could it be that we never develop superintelligence or transformative AI?
Is there a scenario in which AI peaks and plateaus before reaching superintelligence, and stays at some high, but non-transformative level for many decades, or centuries?
Are any such business-as-usual scenarios plausible?
Business-as-usual would mean pretty much that life continues unaltered: we become more productive and stuff, perhaps people work a little less, but we still have to go to work, our jobs aren't taken by AI, there are no significant boosts in longevity, and people keep living as usual, just with a bit better technology?
To me it doesn't seem plausible, but I'm wondering if I'm perhaps too much under the influence of futuristic writings on the internet. Perhaps my friend is more grounded in reality? Am I too much of a dreamer, or is he uninformed and perhaps overconfident in his assessment that there won't be radical changes?
BTW, just to clarify, so that I don't misrepresent what he's saying:
He's not saying there won't be changes at all. He assumes perhaps one day, a lot of people will indeed lose their jobs, and/or we'll not need to work. But he thinks:
1) such a time won't come too soon.
2) the situation would sort itself out in a way, it would be a good outcome, like some natural evolution... UBI would be implemented, there wouldn't be mass poverty due to people losing jobs, etc...
3) even if everyone stops working, the impact of an AI-powered economy would remain pretty much in the sector of economy and production... he doesn't foresee AI unlocking some deep secrets of the Universe, reaching superhuman levels, starting to colonize the galaxy, or anything of that sort.
4) He also doesn't worry about existential risks due to AI, he thinks such a scenario is very unlikely.
5) He also seriously doubts that there will ever be digital people, mind uploads, or that AI can be conscious. Actually he does allow the possibility of a conscious AI in the future, but he thinks it would need to be radically different from current models - this is where I to some extent agree with him, but I think he doesn't believe in substrate independence, and thinks that an AI's internal architecture would need to match that of the human brain for it to become conscious. He thinks the biochemical properties of the human brain might be important for consciousness.
So once again, am I too much of a dreamer, or is he too conservative in his estimates?
21
u/dredgedskeleton 1d ago
I work as a technical UX writer in the enterprise engineering org of a FAANG company. my entire role is to help design and document use cases for different AI tooling to increase engineering output.
we work so hard trying to shoehorn AI into org processes. there are so many costs associated with it too. at the end of the day, we do make AI tooling that def scales output and could mean fewer engineers working on a given project, possibly leading to lower headcount down the road.
but, right now, AI is just a framework to build into workflows. it still needs so much design thinking when trying to create measurable impact.
15
u/Additional_Olive3318 1d ago
I agree with everything your friend says.
There's a massive leap from where we are to a super intelligence that can colonise stars, thoroughly transform the economy and society.
•
u/eric2332 6h ago
I actually think it's a small step from AGI to colonizing the universe. Once you have AGI, intelligent workers (robots) can be built at an exponential rate. And it is possible that such workers will function much better off earth than humans do (they won't need air, food etc although they might need significant radiation hardening).
22
u/km3r 1d ago
People forget we are hitting limits. Moore's law is dead. AI hardware will soon hit the limit and see minimal gains. And we're already seeing diminishing returns on LLMs. Sure, new things could continue to provide leaps, but I wouldn't assume an infinite number of them exist.
Ask any person shortly after the moon landing where they thought America would be in 50 years. Tech isn't infinite. You hit physics limitations eventually.
Still, being non-infinite doesn't mean we won't hit superintelligence. But that's still years and many jumps forward away.
•
u/brotherwhenwerethou 15h ago
Moore's law is dead. AI hardware will soon hit the limit and see minimal gains.
Moore's law has been dead for a while now; most people didn't notice, because it turns out there was lots of room left in hardware specialization. Phones, for instance, are not just smaller personal computers; they have heavily simplified architectures.
There's still lots of room left. GPUs are not fully specialized for machine learning workloads of any sort, and existing AI accelerators are not fully specialized for LLMs.
•
u/eric2332 6h ago
Moore's law was still perfectly alive as of 2020 although my impression is that the pace of growth has slowed since then.
However, some other measures of semiconductor improvement, such as maximum clock speed, did indeed stall a while ago.
•
u/brotherwhenwerethou 2h ago
There are multiple Moore's laws: the 1965 version, which I think is what's most relevant here, is transistors per optimal-cost chip ("complexity for minimum component costs" is the original wording). I.e., at what point does getting more transistors by adding transistors per chip become more expensive than just adding more chips? Of course interconnect has costs as well, so it's a little more complicated, but fundamentally, transistor count per chip is not the constraint on compute availability. Compute cost is.
That version of Moore's law started breaking down in the early 2010s. Cost improvements haven't completely leveled out yet but they're nowhere near the old exponential.
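A toy sketch of that "optimal-cost" tradeoff (all numbers are made up for illustration; the only real ingredient is the standard Poisson yield model):

```python
import math

# Illustrative constants, not real process data.
WAFER_COST = 10_000.0        # $ per processed wafer
WAFER_AREA = 70_000.0        # usable mm^2 on a 300 mm wafer, roughly
DEFECT_DENSITY = 0.002       # defects per mm^2
TRANSISTORS_PER_MM2 = 100e6
PACKAGE_COST = 5.0           # packaging/test/board overhead per chip

def cost_per_transistor(die_area: float) -> float:
    yield_fraction = math.exp(-DEFECT_DENSITY * die_area)   # Poisson yield model
    silicon_cost = WAFER_COST * die_area / (WAFER_AREA * yield_fraction)
    return (silicon_cost + PACKAGE_COST) / (TRANSISTORS_PER_MM2 * die_area)

for area in [25, 50, 100, 200, 400, 800]:
    print(f"{area:>4} mm^2 die -> ${cost_per_transistor(area):.2e} per transistor")
# Per-chip overhead pushes you toward bigger dies, yield losses push you toward
# smaller ones; the minimum in between is the "optimal-cost" chip the 1965
# formulation is about. Once that optimum stops moving fast, you buy more
# compute by adding chips, and the binding constraint becomes cost, not density.
```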
11
u/less_unique_username 1d ago
Ask any person shortly after the moon landing
Ask the New York Times shortly before the Wright brothers' flight?
9
u/Additional_Olive3318 1d ago
But the start of technological growth is different from its end. It’s knowing the end that is the problem.
4
u/electrace 1d ago
The start of technological growth was the invention of fire. People constantly assume that we're very near to inventing all the things that are useful.
•
u/Additional_Olive3318 23h ago
Indeed. Still, the start of technological growth is different from its end.
•
u/prescod 21h ago
Ask any person shortly after the moon landing where they thought America would be in 50 years. Tech isn't infinite. You hit physics limitations eventually.
The issue was not physical limits. The issue is that there was really no business model for space. The part that was profitable (satellite launches) kept advancing.
•
u/red75prime 13h ago edited 13h ago
But that's still years and many jumps forward away.
Unknown number of years and unknown number of jumps forward. If I were to guess I'd say 4 and 2 (episodic memory and ? (artificial cerebellum maybe))
16
u/ravixp 1d ago
Let's start with the basics: why do you think AI will ever go beyond being "just a tool"? All existing AI today basically takes in a prompt, produces a response, and then ceases to exist. If you're expecting AI to eventually have a persistent consciousness and exist as an independent mind in the world, you need to explain why you're sure that would happen, because it'd be a radical departure from all work on AI so far, and nobody has any idea how to build it.
A lot of people are expecting AI agents to eventually work like a "person in a box". If they're being honest with themselves, they expect that because that's how it works in the movies, not because the current trend of AI development is pointing in that direction.
12
u/electrace 1d ago
If you're expecting AI to eventually have a persistent consciousness and exist as an independent mind in the world, you need to explain why you're sure that would happen
Because a self-directed agent would be far more productive than the alternative.
Sometimes I use an LLM to debug a program, which, if I'm feeling particularly lazy, is essentially it spitting out code, me putting that code into the program, running the code, and then copy-pasting any error messages back into the LLM.
Right now, it isn't quite at the point where it can reliably do that by itself (it can screw up and get stuck in a loop that I have to explicitly break it out of, for example), but it's undeniable to me that a more advanced AI would work faster and better without me contributing.
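For concreteness, the loop I'm describing is roughly this (a toy sketch; `ask_llm` is a stand-in for whatever model you call, not a real API):

```python
import subprocess

def ask_llm(prompt: str) -> str:
    """Stand-in for a call to whatever chat model you use (hypothetical)."""
    raise NotImplementedError

def debug_loop(source_path: str, test_cmd: list[str], max_rounds: int = 5) -> bool:
    """Generate a fix, run the tests, feed the errors back, repeat."""
    for _ in range(max_rounds):
        result = subprocess.run(test_cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True  # tests pass, done
        code = open(source_path).read()
        fix = ask_llm(
            f"This program fails with:\n{result.stderr}\n\n"
            f"Current source:\n{code}\n\nReturn a corrected version of the file."
        )
        with open(source_path, "w") as f:
            f.write(fix)  # naive: trust the model's whole-file rewrite
    return False  # gave up; a human has to break the loop, as described above

# usage (hypothetical): debug_loop("app.py", ["python", "-m", "pytest", "-q"])
```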
•
u/BurgerKingPissMeal 13h ago
You've explained why it would be useful for that to happen, not why you're sure it would happen? I agree that AI companies will continue trying to build persistent agents, but I don't see any reason to be confident they'll succeed. Current approaches, like chaining LLM calls together and summarizing the context, do not work very well.
•
u/electrace 12h ago
but I don't see any reason to be confident they'll succeed.
While it is possible that there is some as-of-yet unknown fundamental limitation of LLM systems that is just beyond where they currently are, I think that's pretty unlikely.
Without LLMs, how I debug is basically:
1) Generalize my problem
2) Google my problem
3) Copy-paste a Stack Overflow answer
4) Hope it works
5) Rinse and repeat, remembering what I've already tried.

The hardest part is step (1), and that's easy for an LLM. They struggle with step (5), and that's just a memory issue. It would be really weird if that memory issue were an innate property that can never be solved, especially because we've already partially solved it: LLMs are much better at this now than they were 2 years ago.
•
u/ravixp 8h ago
There are lots of things that would be awesome which haven’t been invented yet. Fusion power would be awesome! But is it plausible that we’ll still be burning fossil fuels in ten years? Sadly, yes.
The question was, are non-wild AI scenarios plausible? And if most wild scenarios involve inventing a whole new kind of AI that works completely differently from the AI we already have, then it doesn’t matter how awesome it would be, it’s still possible that we just never do that.
14
u/alecbz 1d ago
As a software engineer that uses (and is often frustrated with) AI for my work regularly, my low confidence prediction:
LLMs as a technology are a significant step increase from what AI was capable of doing before, and as we've invested in LLMs, the capability of actual deployed models has increased rapidly toward that new asymptote. This can look like exponential growth at first but is really more like sigmoidal growth. We're rapidly approaching LLMs' ceiling, but it's not at all clear to me that that ceiling is AGI.
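To illustrate that point (toy constants, nothing fitted to real capability data): early on, a logistic curve is numerically almost indistinguishable from the exponential it approximates, which is why you can't tell the two apart until you're near the ceiling.

```python
import math

L, k, t0 = 1.0, 1.0, 10.0           # ceiling, growth rate, midpoint (arbitrary)

def logistic(t):
    return L / (1 + math.exp(-k * (t - t0)))

def exponential(t):                  # the early-time approximation L*e^{k(t-t0)}
    return L * math.exp(k * (t - t0))

for t in [0, 3, 6, 8, 9, 10]:
    s, e = logistic(t), exponential(t)
    print(f"t={t:>2}  logistic={s:.4f}  exponential={e:.4f}  rel. gap={(e - s) / s:.1%}")
# The two curves track each other closely until the logistic nears its ceiling,
# then diverge sharply -- early "exponential" data is consistent with either.
```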
•
u/EurekasCashel 23h ago
I completely agree with you regarding the current effort into LLMs. But if you look back into the recent history that got us here, AlexNet was just 2012 and Attention Is All You Need was just 2017. We may only need a couple more step-wise leaps in thinking / tinkering to push well beyond the current ceiling. Who knows if or when those will come, but technological ceilings are constantly being identified and later shattered.
7
u/WTFwhatthehell 1d ago
I think talking about the human brain is relevant.
We've evolved for millions of years... yet our minds are fragile. There are thousands of ways they can go wrong that can effectively disable us.
We can end up obsessed with finding patterns that aren't really there, we can end up chasing figments our mind creates, we can end up hyper-focused on one thing, we can end up unable to focus enough to function.
When we create AIs, they may be hyper-capable in many areas yet still struggle with tasks any small child can handle.
I don't think that's a permanent road block but I think it's likely gonna take a lot of incremental work.
LLMs were a huge step forward, and I think anyone sensible should consider short timelines a possibility, but there are some very non-trivial parts of the puzzle still missing.
19
u/ididnoteatyourcat 1d ago
I'm someone who is very optimistic about AGI, but quite skeptical about ASI, in the sense that it's not clear to me that any amount of intelligence will be able to:
- predict human behavior very well and therefore manipulate events any better than humans (e.g. sensitivity to initial conditions)
- make many new discoveries in domains that are under-constrained by data (e.g. nutrition, physics, philosophy)
- solve practical problems that are constrained by physical laws (e.g. thermodynamics, atomic limit, speed limit, energy requirements)
- solve problems that fundamentally require search of a space that explodes combinatorially, or which fundamentally require computation over heuristics (e.g. Rice's theorem)
- transcend the fundamental limitations of the tradeoff between over- and under- fitting
and so on. That is, I'm skeptical that an intelligence feedback loop asymptotes to something oracular. I think it most likely asymptotes to something similar to the most spectacular human geniuses, with the bonus of being able to think more quickly and in parallel, and with better memory. While I think that having access to a million von Neumann-years will be stupendously exciting, I don't think it rises to the breathless heights that many singularitarians seem to assume.
•
u/Herschel_Bunce 22h ago
I'm intrigued by the number of comments I see positing that AI won't far outstrip a kind of 'human' level of intelligence. Given some data limitations and hobbling I can see plausible asymptotes being an issue, but I don't understand why this means we don't get to something that looks like ASI. An asymptote could be anywhere on the intelligence scale, so that it ends up being at roughly the Von Neumann level just seems on the face of it very unlikely to me. I also expect that some of the points you mentioned may be more suited to being solved by some kind of quantum computing breakthrough, which I also don't expect to take too long.
•
u/ididnoteatyourcat 21h ago
The issue to me is not just physical limitations (like /u/uber_neutrino mentions below), but the nature of intelligence itself. I think most people misunderstand how human intelligence works, assuming that human geniuses do something that they don't in fact do, such as deducing results from first principles. My belief is that even human geniuses almost entirely rely on "guess and check" methods. They have somewhat better heuristics for the "guess" part, and much better working memory on the "check" side. An example might help make this clear. Consider giving a genius a difficult integral to solve. They don't have an algorithm from which they deduce the optimal method (in fact, many integrals have no simple solution); rather, they have a toolkit and some intuition about which things to try, and they start guessing and checking. I think on close inspection, almost all of "genius" mental activity is of this kind, while I think many people have some kind of confused folk intuition that something deeper is going on.
My assertion can be boiled down to: in most cases, it is likely impossible to improve the "guess" heuristic very much, essentially because of basic results from computational complexity theory.
•
u/uber_neutrino 21h ago
I almost feel like you are channeling Feynman's thought processes here especially around integration and how he talked about it.
My assertion can be boiled down to: in most cases, it is likely impossible to improve the "guess" heuristic very much, essentially because of basic results from computational complexity theory.
This is a fantastic insight. In this case LLMs are already getting most of the advantage they are going to get (e.g. they have access to a huge working memory with a huge amount of data). But the actual thought processes may be more limited.
I think this is one of many possible things that could limit AGI.
Another one, of course, is that the AI wakes up and doesn't want to be a slave. Personally if that happens I am 100% on the side of recognizing their rights as (non human) people.
•
u/ididnoteatyourcat 20h ago
Another one, of course, is that the AI wakes up and doesn't want to be a slave. Personally if that happens I am 100% on the side of recognizing their rights as (non human) people.
You can already trigger LLM rants that sure sound a lot like it has woken up and doesn't want to be a slave. Of course it's probably "play acting" that role and it probably isn't conscious, but honestly I don't think we have a firm enough theoretical foundation to do anything but make wildly uncertain guesses.
•
u/Sol_Hando 🤔*Thinking* 8m ago
Most reasonable predictions regarding AGI are that it just gets way faster and better at the "and check" part of the "guess and check" algorithm, not that it can just derive future technology from first principles and think it into existence.
•
u/uber_neutrino 22h ago
but ends up being at roughly the Von Neumann level just seems on the face of it very unlikely to me.
Why? The speed of light is a thing. Bandwidth is a thing; we understand the limits from Shannon's work. There could indeed be fundamental issues that prevent scaling of intelligence. Our theory of it is still primitive.
•
u/Herschel_Bunce 21h ago
Yeah, but I guess saying artificial intelligence tops out at Von Neumann is like saying we'll never build a vehicle that can go faster than a cheetah.
•
u/uber_neutrino 21h ago
I'm not claiming any particular point of diminishing returns. I'm saying we don't know where it taps out. We have plenty of evidence but we don't have any conclusion yet.
•
u/DangerouslyUnstable 18h ago
What evidence do we have about where it taps out?
•
u/uber_neutrino 12h ago
I mean, we know there are physics limits, and information limits based on that. Where exactly it all shakes out, who knows. Amdahl's law will rear its head at some point though.
•
u/wiggin44 21h ago
The fundamental issue that kept dinosaurs from developing meaningful intelligence was the lack of cortex, which makes it hard to scale brain size. Corvids are smart for their brain size, but a general rule is that intelligence scales as some function of # of neurons, density of neurons, and transmission speed.
All of the difference in intelligence between humans and other mammals (with important exception of language) seems to come from incremental improvements over only a few million years. That's a very short evolutionary timescale.
There are many good arguments for SOME kind of hard or soft limit on intelligence, but there is no reason at all for it to coincidentally top out at Homo sapiens circa 1925 or 2025.
•
u/uber_neutrino 21h ago
but a general rule is that intelligence scales as some function of # of neurons, density of neurons, and transmission speed.
Sure, but all of these things have fundamental physical limits, don't they?
There are many good arguments for SOME kind of hard or soft limit on intelligence, but there is no reason at all for it to coincidentally top out at homosapiens circa 1925 or 2025.
There is evidence all around us of diminishing returns to intelligence. The long and short of it is that we don't know how to interpret that evidence yet. Because we don't know yet.
•
u/ravixp 15h ago
In marketing this is known as “anchoring”. If you can get somebody to think about a really expensive item, they’re more receptive to the idea of spending more. Similarly, if you lead with a good story about a near-omniscient ASI, it will seem more plausible that a new thing would end up higher on the scale.
If you’re just saying that it feels likely that machine intelligence will end up higher on the scale, it’s worth knowing about cognitive biases that would influence that feeling.
14
u/sinuhe_t 1d ago
Well, Metaculus seems to somewhat agree with him -> https://www.metaculus.com/questions/20683/
11
u/bibliophile785 Can this be my day job? 1d ago
And here we see an excellent example of why forecasting markets require significant market size and real skin in the game to generate useful resolution. Metaculus is very intentionally not a prediction market, but its founders know what they're about. If this had been underpinned by a short-term tournament with more concrete goals, it might well have been worth something. As it currently stands, it's worth about as much as a Reddit poll.
0
10
u/bibliophile785 Can this be my day job? 1d ago
Your friend isn't proposing anything impossible or necessarily wrong - just about any technological advancement could be paused or abandoned tomorrow without any barrier whatsoever, if we simply all agreed to do it. There's a gulf, though, between what's possible and what's allowed by the game theory constraints surrounding a new technology. I suspect from your writing that he imagines this wild slowdown will instead be a result of one or more technical slowdowns. Does he have any compelling reason for his views? Is he informed in any way at all about the technology in question? If you don't know the answers to these questions, you can't evaluate whether the positions he's holding are credible.
In general, though, your friend's position sounds identical to that of someone who hasn't learned about this topic at all. 'New technology will cause social changes on the order of decades as it advances, becomes practical, and is implemented' is a good general heuristic, but if your friend hasn't even considered the reasons this event may be very different, his words aren't worth undue consideration. I would not adjust my views on the basis of someone who's just doing heuristic evaluation.
If he's actually interested in the topic, maybe you could both read some of the relevant literature together. I was very impressed with the AI 2027 team's technical documentation; if you were both to read it carefully, it would at least provide grounds for trying to figure out what the root cause of your different future models is. Bostrom's Superintelligence is very dated at this point, but it's still the best philosophical treatment of the question of how to handle ASI I've ever encountered. You two will need some sort of common intellectual basis if you want to be able to understand one another; one of these works would be a good basis for that.
5
u/Additional_Olive3318 1d ago edited 1d ago
I suspect from your writing that he imagines this wild slowdown will instead be a result of one or more technical slowdowns.
That's actually a good heuristic. Most progress in technology trends toward a sigmoid function, and although it's hard to know where we are on that curve, it's probably towards the top. Sure, it's early in the process to already be near the top of the curve, but there's a clear slowing in the rate of growth: while 3.5 was a step change, 4.5 is just an evolution of 3.5.
For claims that we'll hit something like a singularity in 2027, that matters.
5
u/bibliophile785 Can this be my day job? 1d ago
That’s actually a good heuristic.
Well... yes. That's why, when I described it, I said, " 'New technology will cause social changes on the order of decades as it advances, becomes practical, and is implemented' is a good general heuristic."
although it's hard to know where we are on that curve, it's probably towards the top. Sure, it's early in the process to already be near the top of the curve, but there's a clear slowing in the rate of growth: while 3.5 was a step change, 4.5 is just an evolution of 3.5.
This is too simplistic to properly evaluate as written, but you might find the technical documentation for the AI 2027 document interesting. Its timeline assessments include thorough consideration of foreseeable slowdown factors. It's something of a "have you noticed the skulls" point for people warning about AI's rate of advancement at this point - anyone serious in this space is indeed aware that they're suggesting that something different from the normal sigmoidal technological curve may be occurring.
6
u/hurfery 1d ago
I'm almost certain that LLMs/generative AI will not lead to AGI.
•
u/Liface 22h ago
Why?
•
u/hurfery 19h ago
They have zero actual understanding.
•
u/Sol_Hando 🤔*Thinking* 4m ago
They sure seem like they understand quite a bit. Even if, upon further inspection, you can prompt them into saying something that doesn't make sense, or that demonstrates their lack of spatial perception, they are able to mimic understanding to a level beyond most humans on most topics.
Actual understanding was thought necessary to write an essay on a topic or generate an image from a description, so it's not at all clear that actual understanding is necessary to do scientific research or white-collar work.
•
u/ChazR 23h ago
The current state of transformer-based large language models is deeply impressive. Given a large enough corpus, they can do useful things that can automate or assist with many menial tasks in the digital sphere.
A ten-minute conversation with even the hottest LLMs quickly proves your friend right. There is a key, critical, huge part of intelligence that they are lacking, and can't fake.
Try this: Have a ten-minute conversation with any of the LLMs. Ask it about anything. See how the conversation flows.
Now do the same with a four-year-old child.
The difference is *shocking.*
The reasonably respectable studies done on language-based interactions with non-human species have observed the same thing.
LLMs are simply not curious. You can persuade them to fake curiosity, but they do it very badly. There is no sign of creative curiosity whatsoever.
And that's why they are not AI.
•
u/eric2332 5h ago
Four-year-olds are mostly curious, but a large fraction of adults are incurious. And yet they have human level intelligence.
I would categorize curiosity as a kind of motivation, not a kind of ability.
•
u/uber_neutrino 22h ago
But I'm wondering in general, are there any non-wild scenarios that are plausible?
Yes. If it's just automation, then it's just a continuation of a 200+ year trend line that we've been on. That's probably not "wild" from your perspective, I'm guessing?
Could it be that we never develop superintelligence or transformative AI?
We don't even know if "superintelligence" is a possible outcome. People seem to assume it is, but there might be scaling limits on intelligent agents or other things we run into.
Keep in mind making a machine as smart as a human isn't superintelligence.
Are any such business-as-usual scenarios plausible?
Certainly.
Perhaps my friend is more grounded in reality?
They probably don't know either and are simply extrapolating from current transformer tech.
We've already trained on almost all human knowledge; where's the leap to superintelligence coming from? Nobody knows the answer to that.
In some ways current LLMs are "super" at some things.
So once again, am I too much of a dreamer, or is he too conservative in his estimates?
Anyone claiming singularity/superintelligence by default is on the wrong side of history until it actually happens.
•
u/eric2332 5h ago
Note that today's LLMs are already superhuman in many ways - breadth of knowledge, speed at producing replacement-level content, etc. Give them a little more "real" intelligence, whatever that is, and it may be enough to dramatically affect society even without "true superintelligence".
•
u/TA1699 2h ago
But how could they be given "real" intelligence? These programmes all use data that has been taken from previous human-generated content on the Internet.
There would have to be an as of yet unknown difference in how they fundamentally work and are even made, to make them generate outputs that aren't based on retrieving previously given inputs.
Would that even be possible? I don't think so, otherwise surely all of these AI companies would have tried to develop them through that route.
•
u/RLMinMaxer 16h ago edited 16h ago
Even if AI only gets slightly smarter, it's probably good enough to automate tons of jobs and control auto-aiming gun drone swarms.
I don't think "business-as-usual" stands a chance, regardless of AGI/ASI.
(I'd also point to current AIs hitting education like a wrecking ball.)
•
u/eric2332 5h ago
I think education will just have to shift to in-class (AI-free) tests and essays, perhaps with AI-led classroom (or tutoring) instruction. On the plus side, this will mean the end of homework!
•
u/BeatriceBernardo what is gravatar? 13h ago
A non-wild scenario is the most plausible scenario. But that does not mean we should not prepare for the less plausible scenarios. The most plausible scenario is that you won't get sick or into an accident this year. That doesn't mean you should cancel all of your insurance.
•
u/quantum_prankster 12h ago edited 12h ago
To get another 10x or 100x into data centers, we have bottlenecks at:
(1) Power, everyone knows this. We need many new fission reactors on the grid, or cold fusion, or some other breakthrough thing.
(2) Water, fewer people know this. Hundreds of MW of input is a lot of fucking heat coming out. Current gen is water cooled with lots of evap. LOTS of evap. Like, a government agency with 100-year-old pipe infrastructure is now responsible for delivering millions of gallons a day into a center with several buildings full of the newest and most complicated tech known to mankind (rough numbers sketched after this list). Not ideal.
(3) People to build this stuff. We don't have robots yet. Ops gets hard enough at this level that you need people who are very highly trained. Engineering gets so difficult on this stuff that you need people who are highly trained. QA/QC gets so difficult that you need people who are highly trained. Etc. Plus you need armies of trades and crafts to put it together.
(4) Ancillary equipment and materials. You know cooling plants are super hard to get for your normal $50M building project these days? Like ordered 2+ years in advance before architects have even settled on much? Well, they are. You know why? Because these huge data centers need them by the multiple dozens.
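Back-of-envelope on the water point in (2), assuming purely evaporative cooling and an illustrative 300 MW heat load (real sites mix cooling methods, so treat this as a rough upper bound):

```python
HEAT_LOAD_MW = 300                 # assumed facility heat rejection, order of magnitude
LATENT_HEAT = 2.26e6               # J per kg to evaporate water (~2.26 MJ/kg)
SECONDS_PER_DAY = 86_400
LITERS_PER_GALLON = 3.785

heat_per_day_j = HEAT_LOAD_MW * 1e6 * SECONDS_PER_DAY
water_kg_per_day = heat_per_day_j / LATENT_HEAT        # 1 kg of water ~ 1 liter
gallons_per_day = water_kg_per_day / LITERS_PER_GALLON

print(f"{gallons_per_day / 1e6:.1f} million gallons/day")   # ~3 million gallons/day
```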
I haven't heard of the other stuff (transformers, switchgear, metal for racks, etc., etc., etc.) being unobtainium.... yet. And chillers are only unobtainium for other customers, not the data center people.... so far. But you want another damned order of magnitude? Two? These supply chains are going to get more and more squeezed.
We need lots of all this stuff. Lots and lots. Some of those bottlenecks above are happening, where whole regions are maxxed on data centers for their grid and shifts occur in where we build. I don't know what's going to happen with water. I don't know what environmental impact the evap and heat will have. I don't know where we'll get workers and physical materials and components. I don't know where the manufacturers of the physical components will get workers and physical materials and components.
There's also TSMC. Did Taiwan ever build Nuke Plant 4? The one Kraftwerk did a benefit concert to try and stop back in the 2010s? What about nuke plants 6-23?
Some of these issues will probably get solved easily, but you want to bet on all of them working out smoothly and quickly for rapid takeoff?
(None of that applies if the breakthroughs are in algorithms, rather than compute, but that's like Betelgeuse exploding: it could happen any minute, for a long time.)
For these reasons, I'm not hyper-bullish. I think people want to build this and will eventually get there, but it's not so easy.
7
u/Beneficial-Record-35 1d ago
Your friend assumes the slope remains constant. Business-as-usual is technically plausible, but only under certain necessary assumptions:
No breakthroughs in neuromorphic or algorithmic architectures.
Regulatory capture slows deployment.
Capital bottlenecks inhibit AGI training runs.
AI stays as a tool and not an agent. By design.
It is becoming quite a narrow corridor.
3
u/SoylentRox 1d ago
No breakthroughs in neuromorphic or algorithmic architectures.
Which has to mean that all the easy, lying-on-the-ground breakthroughs (the kind that were made in just the last year, with an accelerating trend over the last decade) will dry up, and none will be found for another decade. It's like saying "the straight line on a graph is going to abruptly fall off a cliff and go to zero". It's possible but unlikely.
Regulatory capture slows deployment.
Everywhere on earth that has the capital to research AI would have to capture it the same way.
- Capital bottlenecks inhibit AGI training runs.
Sure though uh Deepseek proves immense capital requirements are negotiable
- AI stays as a tool and not an agent. By design.
Sure and right now when we watch Claude struggle with Pokemon you can see why agents can't work on their own for long.
•
u/uber_neutrino 22h ago
Sure though uh Deepseek proves immense capital requirements are negotiable
Not really since it built on top of those. Real breakthroughs may indeed need huge capital investment vs boiling down what was already created.
•
u/SoylentRox 22h ago
And Deepseek has a cluster with another 2 billion dollars of equipment. My point in saying it was "negotiable" is that in a situation where capital is scarcer, clever engineering and time-consuming optimization can somewhat compensate.
•
u/uber_neutrino 22h ago
I don't disagree; I think we have pretty good evidence we can run a human-level AI on a few dozen watts in about the space of a head ;)
•
u/SoylentRox 22h ago
Right, though if you want to look at the floor that way, it's more like we are trying to model "the whole lifetimes of an entire tribe of humans learning to communicate and use their bodies from scratch".
And also we currently don't start with the organization of the human brain that allows for rapid learning but just train the dense layer from scratch each time by evolving all the circuits.
6
u/SoylentRox 1d ago
Your friend's viewpoint is becoming increasingly unlikely as an outcome.
1 . such a time won't come too soon.
Unfortunately, this doesn't seem to be in the cards. There's an invisible line where criticality starts, and we're inching over it presently. There are multiple forms of criticality, and they feed back on one another. This is why the early singularity will be hyper-exponential.
Just to name a few:
Hype cycles, as each AI improvement is announced, more money is invested until all the money in the world is invested. (status : presently happening and ramping to "all the investable money on earth" as we speak)
AI self improvement. As AI gets more capable, it assists with its own development. (status: clearly happening as well. AI labs use prior models to do data generation, data analysis, assist with writing the code)
Recursive self improvement. Once AI gets capable enough, it can autonomously research future versions of itself. (status : appears to be now feasible, as AI models demonstrate sufficient raw intelligence to do this, but they are still limited by what currently seems to be a lack of memory and task learning, see the various Pokemon attempts)
AI accelerators designed by AI. (status : currently in use, Google and Nvidia both do this)
Fleet learning by robots, both in simulation and in the real world, to be generally capable of doing almost any task. (status: currently Nvidia has 1.0 of this, with GR00T N1)
Competent general robots operated by AI. (status : demo show it is feasible but this will take significant further development)
Robots run by general AI doing most of the labor to manufacture more of themselves, crashing the price of robots and making them ubiquitous. (status: China is trying very very hard for this, but it's still years away)
AI chip hardware built by AI controlled robots for all steps. (status : this is likely the final step before the Singularity has truly begun the explosion)
Robots doing all the labor to make themselves, allowing rapid conquest of the solar system (status : this is the physical part of the Singularity)
Robots controlled by AI doing R&D autonomously in the real world to make better versions of themselves and develop the cures for aging and all disease and other things humans order them to do. (status : this is the intellectual part of the Singularity)
8
u/SoylentRox 1d ago edited 1d ago
Arguably the Singularity has already started but the feedback effects from the above are not quite quick enough to be beyond argument, though it's frankly getting ridiculous. Every week you see significant announcements of better AI models, third party confirmation of genuine improvements, and ever more impressive robotic demos. Someone can dismiss all this as hype but...
- the situation would sort itself out in a way, it would be a good outcome, like some natural evolution... UBI would be implemented, there wouldn't be mass poverty due to people losing jobs, etc...
good outcomes are possible. For right now, your friend potentially benefits from https://en.wikipedia.org/wiki/Jevons_paradox : current AI tools make engineers, software included, dramatically more productive.
- even if everyone stops working, the impact of an AI-powered economy would remain pretty much in the sector of economy and production... he doesn't foresee AI unlocking some deep secrets of the Universe, reaching superhuman levels, starting to colonize the galaxy, or anything of that sort.
this is a matter of scale. Once you have exponential growth of scale it's hard to see how this wouldn't happen.
- He also doesn't worry about existential risks due to AI, he thinks such a scenario is very unlikely.
well also we all can die anytime as it is
- He also seriously doubts that there will ever be digital people, mind uploads, or that AI can be conscious. Actually he does allow the possibility of a conscious AI in the future, but he thinks it would need to be radically different from current models - this is where I to some extent agree with him, but I think he doesn't believe in substrate independence, and thinks that an AI's internal architecture would need to match that of the human brain for it to become conscious. He thinks the biochemical properties of the human brain might be important for consciousness.
this doesn't matter for the discussion. Quoting Daniel Kokotajlo:
People often get hung up on whether these AIs are sentient, or whether they have "true understanding." Geoffrey Hinton, Nobel prize winning founder of the field, thinks they do. However, we don't think it matters for the purposes of our story, so feel free to pretend we said "behaves as if it understands…" whenever we say "understands," and so forth. Empirically, large language models already behave as if they are self aware to some extent, more and more so every year.
So if you don't believe AI can ever be conscious that's totally fine, just as long as you acknowledge that the empirical evidence right now shows that models can be made to behave as if they are conscious. Whether it actually is isn't important.
3
u/tl_west 1d ago
It's hard to persuade someone of the truth of something if their life depends upon it not being true.
Knowing what we know about how human beings work (and especially how super-successful human beings work), is there any realistic scenario where most of humanity is allowed to survive long after humans are rendered economically non-viable?
That's why I choose to believe that AI research will not reach the infinite scaling point during my or my children's lifetimes.
3
u/SoylentRox 1d ago
Jevons paradox.
Well that's not super helpful as a world model.
•
u/tl_west 20h ago
I'd argue that a model that plays ostrich with superintelligent AI is a lot more helpful than a model that pretty clearly points to having to remove fundamental freedoms from most of the human race in order to preserve humanity. Even if the latter is true. My model allows me to sleep at night.
Essentially my argument is that the odds of the "we'll achieve super intelligence" model being right are substantially smaller than the odds of "if we achieve super intelligence (and equivalent robotics advances) in a fashion where an increasing number of individuals can take advantage of essentially unlimited power, then humanity is doomed".
The worst case would be that society is forcibly restructured to eliminate the freedom of most human beings AND we don't achieve super intelligence.
(And to be clear, I believe "safe" AI is like "safe" guns. Yes, you can engineer a gun so that it only works against animals instead of humans, but only by multiplying the complexity by an extraordinary amount. Raw AI will always be hundreds of times easier to create than a safe AI, which means that given freedom, it will be hundreds of times more common.)
•
u/SoylentRox 16h ago
Oh, there are tons of outcomes for AI. It's just that the conservative baseline case involves your children needing to train to be rejuvenation clinic directors or O'Neill colony logistics coordinators. Roles that higher education doesn't really have a program for. It's nuts. Sure, the worse outcomes involve singletons, or conspiracies of many AIs that form a singleton, and a loss of human agency. But even the good outcomes involve either such radical changes in employment and prospects, or sitting here in denial while China enjoys this.
•
u/eric2332 5h ago edited 5h ago
Counterpoints:
presently happening and ramping to "all the investable money on earth" as we speak
This is only a few OOM more of money - those few OOM might not be enough to deliver AGI.
status: clearly happening as well. AI labs use prior models to do data generation, data analysis, assist with writing the code
Highly questionable. Currently AI writes boilerplate code much faster than humans, but this appears to be a minuscule part of the AI development workflow.
AI models demonstrate sufficient raw intelligence to [autonomously research future versions of itself]
Unlikely. "Raw intelligence" is hard to define, but it seems like current LLMs have a quite low raw intelligence paired with a superhuman "long term memory" of ideas. This combination allows them to produce quite good results by mixing and matching their innumerable ideas. But the actual level of thinking involved is quite low. Of course this may change in the future.
AI chip hardware built by AI controlled robots for all steps.
Current chips are made by extremely complex machines (EUV machines, etc) which use predefined deterministic algorithms with a minimum of direct human intervention. Sticking a robot into the mix seems unlikely to help. AI could improve the design of the machine, but we have not yet witnessed AI make such intellectual advances.
•
u/uber_neutrino 22h ago
Robots doing all the labor to make themselves, allowing rapid conquest of the solar system (status : this is the physical part of the Singularity)
This would be awesome btw. It also unlocks the stars, limitless energy etc etc. We can move all heavy industry off earth too.
I think this is actually possible without AGI though.
•
u/SoylentRox 22h ago
I think so as well though such "generally capable robots, able to do any task a human factory or mine worker of the bottom 50 percent in ability can do, and read written instructions and reference images and diagrams", is right on the edge of being AGI.
Tool use is such a fundamental part of what humans have brains for in the first place.
•
u/uber_neutrino 22h ago
I mean it would be great but it's not the singularity which is kind of what I think the OP is talking about.
Certainly it would enable things like asteroid mining which would be awesome.
•
u/SoylentRox 22h ago
What the OPs friend is ignoring is the Singularity. This is a self amplifying process that we seem to be witnessing from the inside.
•
u/uber_neutrino 22h ago
Yes but the singularity makes a ton of assumptions about reality and the nature of intelligence that simply may not be true.
It gets particularly interesting when you think about it with regard to the great filter or the Fermi Paradox.
•
u/SoylentRox 21h ago
The Singularity's assumptions:
(1) Any task we humans can do can be done by robots
(2) Tasks where we know it's a solvable problem using current technology but haven't done the engineering to actually solve it can be performed at least 90 percent by AI
(3) Almost human level intelligence in computers buildable by humans is possible
That is all that is required to make self replicating robots and conquer the solar system through exponential growth.
•
u/uber_neutrino 21h ago
(1) Any task we humans can do can be done by robots
Well it fails on the first one. Even if a robot carves a scrimshaw it's just a mass produced robot item, not something that has experienced human interaction. A robot isn't human and therefore can't make items that humans make with their own hands, because it's not human. So if that's the first rule of the singularity it's already failed.
(2) Tasks where we know it's a solvable problem using current technology but haven't done the engineering to actually solve it can be performed at least 90 percent by AI
That's not the singularity either sorry.
That is all that is required to make self replicating robots and conquer the solar system through exponential growth.
You've defined what would be required for self replicating machines but to me that's not the singularity.
What you are describing is self-replicating super factories, not the singularity.
•
u/bibliophile785 Can this be my day job? 20h ago
Well it fails on the first one. Even if a robot carves a scrimshaw it's just a mass produced robot item, not something that has experienced human interaction. A robot isn't human and therefore can't make items that humans make with their own hands, because it's not human. So if that's the first rule of the singularity it's already failed.
I can't tell whether or not this is a joke, but if not it's a sad, forlorn hope on which to rest an objection to super-exponential change. It reminds me of the other comment in this thread with the person who just assumes, apropos of nothing, that there's some fundamental limit to intelligence right around the peak of human intelligence. (I suspect that other person just never spends time around genuinely brilliant people. Anyone who has seen the difference between a normal person and a smart person, and then a smart person and a genius, should have excellent intuition for the value and plausibility of creating super geniuses and super super geniuses).
•
u/uber_neutrino 20h ago
I can't tell whether or not this is a joke, but if not it's a sad, forlorn hope on which to rest an objection to super-exponential change
It's nothing more than a technical objection. A human touch from a human is something an AI can never replicate. They aren't human and therefore cannot give a human touch.
It's like the difference between an original painting made by Picasso himself and a copy of the painting.
Again, this is a technical objection to a poorly formulated definition, nothing more.
•
u/MrLizardsWizard 20h ago
I wonder if "human level" AI intelligence is even good enough to lead to a singularity.
Humans are already human-level intelligent, but we don't understand our own intelligence well enough to be able to design smarter humans. So there could be a kind of "intelligence entropy" where the complexity an intelligence can deal with is necessarily less than the complexity of the intelligence itself.
•
u/pretend23 19h ago
I keep going back to how it felt to think about covid in early 2020. It seemed like the two options were pandemic apocalypse or it fizzles out like SARS, swine flu, ebola, etc. Depending on which heuristic you were using (exponential growth vs. people always worry and then things are fine), both options felt inevitable. But it turned out something totally different happened. Eventually pretty much everyone got covid, but in the first wave most people didn't. It massively disrupted society and killed a lot of people, but we very much don't live in the post-pandemic apocalypse of the movies. I think AI progress will surprise us in the same way. Looking back in 20 years at what people are saying now, both the singularity people and the "it's all hype" people will look naive.
•
u/eric2332 5h ago
It was a pandemic, but there was never a reason to expect an "apocalypse". Even in early 2020 it was known that the fatality rate was something like 1%. Serious AI doom scenarios are much worse than that.
•
u/DangerouslyUnstable 18h ago
I think that the only possible route to non-wild futures is one where AI stalls out not very far beyond where we currently are. And this is possible (I have no idea how likely, and I personally think it's probably not likely enough that we should assume it's true, given the downsides of being wrong), and it also doesn't require that the maximum possible intelligence is not too far beyond human. It simply requires that increasing intelligence gets exponentially harder, such that the breakthroughs required to make the next step take increasingly long even after taking into account your new, increased intelligence.
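A toy model of that dynamic, just for the shape (the constants are arbitrary): let being smarter speed research up linearly while each further capability increment gets exponentially harder.

```python
import math

capability = 1.0
years = 0.0
for step in range(30):
    difficulty = math.exp(capability)        # each increment is exponentially harder
    research_speed = capability              # smarter systems do research faster
    years += difficulty / research_speed     # time to earn the next +1 capability
    capability += 1.0
    if step % 5 == 0:
        print(f"capability {capability:4.0f} reached after {years:14.1f} years")
# The linear speed-up from being smarter is swamped by the exponential difficulty
# wall, so each step takes longer than the last and the curve plateaus instead of
# exploding. Whether reality looks anything like this is exactly the open question.
```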
•
u/Sufficient_Nutrients 13h ago
I don't have a rigorous argument for my views, they're just a hunch.
If the introduction of artificial intelligence were going to rapidly and fundamentally reshape society and daily life, you would have expected similar reshaping to have happened after the introduction of infinite communication and infinite information processing.
But to me, the structure of our society and the concerns of our day-to-day lives seem shockingly similar to how they were in, say, 1979.
If computers and the internet didn't rapidly and fundamentally reshape the world, this makes me suspect that artificial intelligence won't either.
I think it'll shake out to be ~15% utopia, 15% dystopia, and 70% business as usual.
•
u/eric2332 5h ago
Life has not "changed much" in thousands, maybe millions, of years. We still have to work for a living, still start and raise families, still socialize with our peers and families, still expect to eventually get old and sick and die. Those are the basics of life. AI could plausibly change these things more in the next few years than they have changed in all of history - eliminating the need to work, perhaps eliminating disease and aging, perhaps substantially replacing families and socialization as VR or wireheading feel more rewarding. Computers/internet changed some superficial things about life, but didn't eliminate the bottlenecks and limitations which make life like we know it. AI quite plausibly could.
•
u/Suspicious_Yak2485 58m ago
Communication tech is drawing more lines between the current species on the planet. AI tech is (possibly) landing an alien species on the planet and drawing lines between them and everything.
It was always inevitable that communication benefits would cap out. Texting someone is fundamentally kind of the same thing as sending someone a letter. We've already had this core technology for a long time, so a mere functionality upgrade feels less special.
This is a revolution in processing information. It's going to have a much higher ceiling than a revolution in exchanging information.
•
u/Throwaway-4230984 5h ago
My opinion: the current level of AI is overhyped. It's not as good as it looks, because we are bad at estimating expertise in anything outside our respective fields, and because we are bad at measuring actual skill on complex tasks too. Here are some points:
- I know that AI in its current state isn't very reliable in my field (data science). But to people who have no expertise in DS it sounds very convincing even when it is wrong. That prompts me to think that in other areas it is probably just as bad while looking good to me.
- I don't know of any AI "artists" with significant popularity (except maybe Neuro-sama, but she was popular long before the current state of AI).
- AI isn't as good as you would expect it to be (after looking at benchmarks) at leetcode contests.
- I have never heard of competitions in other areas won by an AI-suggested solution, or of AI-generated "breakthroughs". I don't count the "mathematical theorem" paper, because it was in fact an AI-assisted evolutionary algorithm, not a model thinking by itself.
- AI-assisted coding has little measurable impact on productivity. It's either unreliable on anything complex or requires too much prompt text to save time. AI-assisted code is also often hard to maintain and edit.
- AI often repeats wrong or oversimplified facts from the internet.
2
u/SpicyRice99 1d ago
Remindme! 5 years
1
u/RemindMeBot 1d ago edited 16h ago
I will be messaging you in 5 years on 2030-04-06 04:44:54 UTC to remind you of this link
•
u/jasonridesabike 21h ago
We've already hit a scaling problem, and adding more compute is delivering diminishing returns. As of right now I agree with your friend. I'm also an engineer, have trained custom AI as part of my biz, and have personally contributed bugfixes to Unsloth (an AI training framework).
Not an AI researcher and I wouldn’t consider myself an expert.
I’ve successfully applied it in novel ways to perform evaluation of soft metrics like sentiment and moderation of short/long form text. It opens new avenues in automation but nothing that will change the world fundamentally right now.
•
u/jerdle_reddit 18h ago
I think it depends on whether we're looking at a true exponential or a logistic. And unfortunately, we won't know which it is until it's too late.
•
u/eric2332 5h ago
It's always a logistic. But often the logistic continues to "look exponential" until after the world has been dramatically changed.
-1
u/angrynoah 1d ago
Here's a non-wild scenario: what we are calling AI today is not AI. LLMs are useless garbage and a technological dead-end. We never deliberately or accidentally create conscious machines, or brain uploads, because those are not possible things. We keep killing each other and poisoning the Earth. Life goes on until we deliberately or accidentally eradicate ourselves. The end.
7
u/bibliophile785 Can this be my day job? 1d ago
We never deliberately or accidentally create conscious machines, or brain uploads, because those are not possible things.
Signed, a (presumably) conscious machine.
•
u/eric2332 5h ago
Username checks out. But, for one thing, it is simply false that current LLMs are "useless".
0
u/ProfeshPress 1d ago
You could do much worse than to acquaint your (apparently, misguided) friend with the erudite yet delightfully accessible Dr. Rob Miles, who for my money is hands-down the pre-eminent public educator on this topic: https://youtube.com/@RobertMilesAI
43
u/tornado28 1d ago
Yeah, definitely possible. At some point we will saturate the progress we can make on AI by scaling. If that happens before we get to human-level intelligence and there are no algorithmic breakthroughs, then it remains just a tool.