r/slatestarcodex Jan 04 '25

[AI] The Intelligence Curse

https://lukedrago.substack.com/p/the-intelligence-curse
48 Upvotes

94 comments

18

u/Paraprosdokian7 Jan 04 '25

The article argues by analogy to the resource curse, which it describes as "countries that are rich in resources don't need to rely on human capital as much so they don't care about humans".

This is not the conventional explanation for the resource curse. Typically, it is explained by the resource sector hollowing out the rest of the economy: it drives up the local currency, making other exports less competitive and imports cheaper. This means other local industries struggle to sell both to foreigners (their exports cost more) and to locals (cheap imports undercut them). That is why the rest of the economy suffers (aside from those directly in the resource industry).

Similar curses exist for tourism-heavy and financial-services-heavy economies, for the same reason. Both tourism and financial services are human-capital-heavy industries.

And the degree to which the curse even exists is controversial. Australia, Canada and Norway are resource-rich countries with some of the highest median incomes in the world. A better formulation is "if you lack strong democratic institutions, the resource curse will hurt your economy more".

The problem then is that the US, which is the centre of AI development, has weak democratic institutions. Again and again, industry has the ability to block reforms that the public overwhelmingly want.

This doesn't happen in, say, Australia. Industry has a lot of power when things aren't in the public eye/newspapers. But when something is in the spotlight and it directly affects people's interests, few governments are dumb enough to ignore the democratic will. Those that do are almost always kicked out.

I would also add that if OpenAI became our corporate overlords and we retained a capitalist society, they would still have an economic interest in humans. Who is going to buy their services if every human is devastatingly poor?

(Fwiw, I don't think we'll remain a capitalist society once AGI takes over. Once you corner the market, you don't need a market any more. You just become a dictatorship.)

5

u/yldedly Jan 05 '25

Who is going to buy their services if every human is devastatingly poor?  

If we get a few companies that are extremely productive, couldn't those companies basically buy from and sell to each other? The rest of the population would split off and form its own, much poorer economy. 

Even today, many goods and services exist only for the rich, but there are gradations. If capital owners don't need to buy labor, aren't they essentially self-sufficient?

2

u/EnderAtreides Jan 06 '25

Those few companies would acquire all capital (as that increases profit, which they are competing to maximize), including all property. Some people would be able to maintain enough income to pay for rent and food to survive; others would be homeless and/or die.

Yes, there would be an underground economy. But to the extent that the companies' property rights are enforced, everyone else would be squeezed out of existence.

4

u/tup99 Jan 05 '25

The problem then is that the US, which is the centre of AI development, has weak democratic institutions

That's an interesting way of looking at things. It's all relative. I suspect that most countries in Africa or Latin America would love to have our "weak" democratic institutions. (And 50 years ago it would have been well over half the world.)

2

u/Paraprosdokian7 Jan 05 '25

Yeah, it is relative.

My point was that Congress would be unlikely to protect the interests of the general public in the face of opposition by powerful vested interests. And that's a perception that has widespread, bipartisan support.

3

u/JaziTricks Jan 06 '25

"The problem then is that the US, which is the centre of AI development, has weak democratic institutions. Again and again, industry has the ability to block reforms that the public overwhelmingly want. "

The US legislature defaults to inaction, so legislation is hard in all directions; one needs a trifecta plus 60 senators.

Why do you call this "weak democratic institutions"?

And the public is generally too dumb for its own good, which is why we have representative democracy and not direct democracy, which would lead to lots of impossible policies.

3

u/Paraprosdokian7 Jan 06 '25

I think you just reaffirmed my point. The default in the US is heavily biased against reform. So if AGI takes over the economy and humans are all thrown out of their jobs, can Congress act to fix the problem? It's difficult to see Congress agreeing to implement a UBI funded by a tax on OpenAI (or whoever has AGI).

As to whether the US has weak institutions: it's hard to see Congress tackling the big issues. As you say, this is almost by design.

Look at major problems like healthcare reform or mass shootings. There's a bipartisan consensus among ordinary people that there's a big problem. In a proper representative democracy, the representatives would acknowledge there's a problem, realise the popular solution is dumb, then figure out a way to solve it. That doesn't happen in the US any more.

And when it does happen, the other party comes in and repeals it without providing their own solution or the Court strikes it down (e.g. Obamacare, gun control).

SCOTUS is the only apex court whose judges regularly split based on the party that appointed them. They don't just disagree on constitutional issues, they disagree on boring questions of statutory interpretation too. The judiciary is uniquely politicised compared to that of almost any other democratic country.

2

u/JaziTricks Jan 06 '25

The constitution is something I am not totally happy with, because basically whenever the court says "constitution" you can't legislate to fix it, as you can in most other countries.

E.g. Miranda rights, guns, Roe and abortion, Citizens United, etc.

But take healthcare: it's not obvious a politically palatable solution exists.

Should we legislate federally to reduce doctors' monopoly power, i.e. allow nurses to do many more things and reduce barriers to doctor training?

Should we limit healthcare spending on wasteful or overly expensive treatment (probably half of all spending)?

I'm not sure solving US healthcare is merely a question of political will. It might simply be a very challenging policy problem regardless of legislative will.

2

u/Paraprosdokian7 Jan 06 '25

Every other developed country has a cheaper, more efficient healthcare system than the US. Just pick one of their solutions. They haven't completely solved the problem, but their solutions are much better than what happens in the US.

Luigi drew attention to the problem of "delay, deny, defend". There's an easy solution to that. Australia makes it a condition of an insurer's licence that it assess claims fairly, effectively and honestly. If the insurer denies your claim, you can take it to an arbitration service (no cost, no lawyers needed).

It does raise the cost a little, but if your insurance doesn't cover your risk, you don't actually have insurance. And Australia's insurance is cheaper than the US's.

4

u/eric2332 Jan 05 '25

Who is going to buy their services if every human is devastatingly poor?

Why should anyone buy their services? They can use their own services to accomplish anything they want. They can vertically integrate chip fabs and so on, so that the entire process of producing and using AI can be performed internally, without purchasing from outsiders, so no need to sell to outsiders either.

2

u/icedrift Jan 05 '25

They gave a poor example by picking out OpenAI. Just looking at the biggest companies, how do Apple, Walmart, CVS, UHG and J&J maintain market position with a weakening proletariat? Companies that focus on scaling up data centers and scaling down B2C strategies will have an efficiency advantage, but I find it hard to believe that will outweigh the political incentive to prop up consumer demand.

2

u/Paraprosdokian7 Jan 05 '25

Fwiw, I don't think we'll remain a capitalist society once AGI takes over. Once you corner the market, you don't need a market any more. You just become a dictatorship.

1

u/Feynmanprinciple Jan 07 '25

This is not the conventional explanation for the resource curse. Typically, it is explained by the resource sector hollowing out the rest of the economy: it drives up the local currency, making other exports less competitive and imports cheaper. This means other local industries struggle to sell both to foreigners (their exports cost more) and to locals (cheap imports undercut them). That is why the rest of the economy suffers (aside from those directly in the resource industry)

This is also called Dutch Disease, isn't it?

23

u/jawfish2 Jan 04 '25

Beware of reasoning from over-simplified assumptions and observations.

Beware of trusting anything that the big AI companies say; they are playing a money game: 'oh no, AGI could kill us all, wanna bet on AGI?'

Tech people are fixated on the idea that intelligence can be defined and measured, and that it equals power.

As a retired software engineer (not working on software), I think the conjunction of LLM AI and robotics has great immediate promise, after decades of hope and puny results. But agency is a huge, ill-defined problem. Embodiment in hardware is just starting. Safety is not at all well-defined: do autonomous vehicles have to be 99.999% safe to enter the marketplace, or just much better than humans?

Then too, many economists still do not account for energy in their equations. The era of vast computation at the giant AI companies must soon end, because using a substantial fraction of electrical output and making vast greenhouse gases is short-term unsustainable, financially and environmentally. There have been some reports that many people are working on much more efficient models.

Source of truth: just as we used to defer to encyclopedias and textbooks in the pre-digital days, there needs to be a source of truth for facts that outvotes all other results.

2

u/Thorusss Jan 07 '25

because using a substantial fraction of electrical output and making vast greenhouse gases is short-term unsustainable, financially and environmentally.

I argue it is all sustainable, even in the midterm.

Climate change is measured in decades, and even a certain level of warming will not by itself stop AI progress. Finance still has big room for growth for a few years: people are gambling on companies becoming unicorns in their field, and actual paid use cases are growing too.

Long term, some people (e.g. former Google CEO Eric Schmidt) have expressed the hope that rushing to superintelligence will solve global warming. It is plausible that it could work (many approaches are plausible in principle), or that we make everything worse (only the climate, through wasted energy, in the best worst case; extinction in the worst worst case). The biggest gamble in history.

Exciting times. I am pretty optimistic.

1

u/jawfish2 Jan 07 '25

There are many arguments I could make for a more pessimistic view, but here we are two people on Reddit trying to predict the future....

So, one general area of argument that explains my pessimism: the electric grid, worldwide, needs upgrading to support a transition away from fossil fuels. This is a financial/political problem, not a technical one. The cheapest and fastest way to this goal is for all of us to reduce our wasteful usage and improve our distributed equipment (heat pumps, EVs, batteries, industrial processes), even as we move gas, gasoline, and diesel burners to electric.

Rapid development of LLMs goes in the opposite direction; the engineer in me balks at the waste. But then, I think the current AI development is a tulip bubble built on top of some really interesting tech.

14

u/Spentworth Jan 04 '25 edited Jan 04 '25

Once AI systems that are better, cheaper, faster, and more reliable than humans at most economic activity are widely available, the intelligence curse should begin to take effect. We should expect to be locked into the outcome 1-5 years after this moment.

Given that this is listed under assumptions, isn't that begging the question?

40

u/unabashed_observer Jan 04 '25

A few points:

  1. There's a natural counterbalance to this in the form of politics. AI can't vote and humans can. If AI is truly destroying that many jobs and dispossessing more and more people of their livelihoods and prosperity, then this will create a reactionary force within politics that seeks either to cap the use of AI or to redistribute its benefits, and that is the worst-case scenario.

  2. The jobs that AI is taking over are white-collar jobs that require high cognitive output. Despite our best efforts in robotics, our ability to automate manual labor isn't great outside of incredibly specialized tasks. So if we increasingly automate back-office functions, we'll have increased demand for blue-collar work. I'm actually fine with this. I don't think middling lawyers should cost $500 an hour to create boilerplate legal documents, and I think a lot of doctors aren't very good at their job. If we have a truly useful WebMD AI that can replace 80% of doctoring, and the rest is just phlebotomists, surgeons, anesthesiologists, MRI techs, and physical therapists, that's a W in my book.

  3. What's stopping you from investing in these AI companies?

18

u/gwern Jan 04 '25 edited Jan 04 '25

What's stopping you from investing in these AI companies?

How do you invest in OpenAI, SSI, xAI, DeepSeek, Mistral, Sakana, Midjourney, Keen etc or all the AI startups which do not exist yet but may be the final winners (and if they are growing fast, have strong reasons to never sell equity, much less go public)?

6

u/MindingMyMindfulness Jan 04 '25 edited Jan 04 '25

The best would be a listed product that held diversified interests in as many AI companies as possible. But that doesn't exist.

Personally, I invest in public, mega cap US tech. Firstly, because I believe they're broadly the best and most innovative companies in the world (so will constantly pursue any emerging AI tech). Secondly because they already have exposure to some of the private businesses you mention (e.g., Microsoft's minority stake in OpenAI) and thirdly, because they're so well-resourced that they can attract the best talent.

The advantage that the US, and US big tech, have in the AI race is monumental. I'm happy to bet on it.

9

u/gwern Jan 04 '25

because they already have exposure to some of the private businesses you mention (e.g., Microsoft's minority stake in OpenAI)

I didn't mention MS as a way to bet on OA because their 'stake' in OA is a strange thing which is not equity and bounded at a certain amount of royalty repayment, so it is effectively irrelevant to a discussion of long-term equilibria - at least at present, if OA LLC wins all the AGI marbles, MS will be paid back a one-off repayment of $100b or whatever, and its "stake" then disappears forever, reverting back to the nonprofit, and all future profits indefinitely retained by OA or its actual equity owners. (The public benefit corporation transition may give MS an actual true equity stake, which would get a cut of all OA future profits indefinitely, but who knows what will wind up happening with that?)

2

u/[deleted] Jan 04 '25 edited Jan 20 '25

[deleted]

3

u/gwern Jan 04 '25

AI-focused ETFs may actually underweight the AI stonks you assume they have! ('Investing is hard, let's go shopping instead.')

0

u/[deleted] Jan 04 '25 edited Jan 20 '25

[deleted]

0

u/[deleted] Jan 04 '25

[deleted]

3

u/[deleted] Jan 04 '25 edited Jan 04 '25

[deleted]

2

u/gwern Jan 05 '25

Archive.is still works for WSJ.

32

u/JibberJim Jan 04 '25

What's stopping you from investing in these AI companies?

Remember, you don't know which AI company will be successful, and you're also possibly too late. Remember other technological booms: Canal Mania and Railway Mania led to most of the late investors losing lots of money in the speculation. Why would AI mania be any different? Indeed I expect it to be worse, as the gap between winners and losers would be even more pronounced.

6

u/unabashed_observer Jan 04 '25

Sure, but unlike in all those centuries-old speculative bubbles, it's never been easier or cheaper to diversify into a sector or the entire market. Even if you invest in 20 failed startups, one super successful one is enough to reap handsome profits.

8

u/aeschenkarnos Jan 04 '25

You’re assuming that people have money to invest, probably because you and all your friends do. Maybe 25% of humanity does; even in first-world nations, the rest are three missed paychecks from homelessness, or homeless already.

2

u/BurdensomeCountV3 Jan 04 '25

You can invest with as little as $1 these days because of fractional shares.

5

u/aeschenkarnos Jan 04 '25

And what kind of return can be expected on that, in what time period?

1

u/BurdensomeCountV3 Jan 04 '25

In percentage terms, effectively the exact same returns as you'd get on $1 million. Now, you can say that with small dollar amounts the return in absolute terms isn't worth the hassle of setting up the account, but these days the hassle of investing is a one-time 30 minutes of your life to set up the account. If you can spare even $1 a month (taken automatically from your account through a direct debit), then in 2 years you'll have enough invested that, over time, the returns are more than worth the 30 minutes you spent setting the account up (remember, if all you can spare is $1 a month, your time likely isn't worth very much anyway).

5

u/aeschenkarnos Jan 04 '25

So, let's say maybe a 10% return. So the $12 you invested turns into $13.20, or a bit more actually, because the first dollar gets 12 months of growth, the second gets 11 months, the third gets 10, and so on. I'm going to say that your financial genius has turned your $12 into $14.50. Wow.

3

u/BurdensomeCountV3 Jan 04 '25 edited Jan 04 '25

That's in 2 years. Another year later that $14.50 is $15.95. Give it another 10 years and it's $41.37. Another 10 years and now it's $107.40. And don't forget that each year you're putting in an extra $12. When you compound for 50 years (roughly the average working lifetime for a person) you end up with about $17k at the end. And if you put in $2 or $3 per month ($3 is still only 10 cents a day, 5% of the price of a single bus ride in the US) you multiply that by 2 or 3 respectively. Not bad for 30 minutes of one-time work, eh?
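A quick sanity check of that arithmetic, as a minimal sketch (added here, not part of the original comment), assuming the flat 10% annual return used above and monthly compounding for the $1-a-month case:

    # Minimal sketch: check the compounding figures quoted above.
    # Assumes a flat 10% annual return, as in the earlier replies.

    def grow(lump_sum: float, annual_rate: float, years: int) -> float:
        """Compound a lump sum annually for the given number of years."""
        return lump_sum * (1 + annual_rate) ** years

    def monthly_deposit_fv(deposit: float, annual_rate: float, years: int) -> float:
        """Future value of a fixed monthly deposit, compounded monthly."""
        r = annual_rate / 12
        return deposit * ((1 + r) ** (years * 12) - 1) / r

    print(round(grow(14.50, 0.10, 1), 2))          # ~15.95 after one more year
    print(round(grow(14.50, 0.10, 11), 2))         # ~41.37 after another ten years
    print(round(grow(14.50, 0.10, 21), 2))         # ~107.30, close to the $107.40 above
    print(round(monthly_deposit_fv(1, 0.10, 50)))  # about 17,300: $1/month for 50 years

The last figure lands around $17k, in line with the 50-year claim; the exact number shifts a little depending on annual versus monthly compounding.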

11

u/aeschenkarnos Jan 04 '25

You're not seeing how utterly useless this is to people who are already poor or broke now. Investment is pointless below about three months' expenses worth of savings. You're far better off spending that time somehow getting yourself those savings. You, meaning a specific individual with the capacity to enact some kind of plan to save their specific selves from their circumstances.

Which is a theme I keep coming back to. None of this means jack shit on the systemic level, which is the level on which policy, be it governmental or otherwise, must operate.

The bottom 30% don't need smarmy, patronising advice that they might or might not individually "choose" to take, and if not, "fuck 'em", as one of my other correspondents put it. They need provision of housing and healthcare and education at a cost they can afford, which is zero. Put a floor on poverty, and the reward is a massive reduction in disease and crime and other social evils.

I don't care if Joe Blow was in prison and on the street and turned his life around and is now a millionaire. I will never care. Good for Joe! I care that 15% of a population are now 5% better off. Joe is a statistical outlier. One thing that especially annoys me about here, this subreddit, is the amount of effort put into valorizing statistical outliers as if they were salient examples of what could easily apply to the entire cohort. It's bullshit. It's specifically self-soothing bullshit, made up after the decision to turn away, to justify the decision to turn away.


4

u/Additional_Olive3318 Jan 04 '25

Not everybody can make money from this. Clearly.

1

u/BurdensomeCountV3 Jan 04 '25

Lateness is a decent point, but not knowing which company is going to be successful is easily fixed by buying a portion of all of them rather than putting all your eggs in one basket.

16

u/slug233 Jan 04 '25

3: Mostly you can't; it's closed to anyone without venture-capital-type money. Millions and millions to start.

7

u/unabashed_observer Jan 04 '25

Alphabet, Meta, Microsoft, Amazon and NVIDIA are all plowing money into AI capabilities and are publicly traded. OpenAI will likely go public in the next few years. Even if the next huge winner in the AI race doesn't currently exist, there will be some point in the future where you can still buy in and make a reasonable profit, even if you can't personally reap the fantastic ROI that the founders and early investors will get.

6

u/slug233 Jan 04 '25

That is already priced into those companies at this point; don't forget Tesla, which is worth more than all other carmakers put together... Not a lot of value left to extract. They are already the majority of the market, so you may as well just say "put your money in an index fund" to reap the rewards of AI.

4

u/k3v1n Jan 04 '25

If Musk weren't "president" I'd consider shorting the stock. Just consider it, though. There's a really good chance it's worth shorting in about 3.5 years.

33

u/ansible Jan 04 '25

There's a natural counterbalance to this in the form of politics. AI can't vote and humans can. If AI is truly destroying that many jobs and dispossessing more and more people of their livelihoods and prosperity, ...

Replace "AI" with "corporations" (which are the first non-human intelligent organisms we have created) and you'll see that what you are talking about has already been occurring for decades.

We live in an era of historically unprecedented prosperity (by some measures), yet we still have hunger, we still have people going bankrupt from medical debt, and so on. Corporations and the wealthy have been concentrating the economic gains to benefit themselves, to the detriment of the rest of society.

While corporations can't technically vote, they have unlimited political speech (Citizens United) in the USA and wield enormous influence over our country and governance. What makes you believe that we will have any better results controlling AI behavior than we have had with corporations?

16

u/AMagicalKittyCat Jan 04 '25

We live in an era of historically unprecedented prosperity (by some measures), yet we still have hunger,

Severe hunger in Western nations is just not a common issue anymore. In the US, deaths from malnutrition were below 10k a year before Covid, and those deaths are mostly of older people, often in care facilities. There is an issue, but it's not from a lack of food in society.

Even globally deaths from famine are down drastically.

Now of course "not starving to death" doesn't mean not hungry; there are still kids around the world who don't get enough food to grow properly, and there are still people who get sick or suffer. But it's a good indicator that food is far more plentiful than before, and we can see this in other metrics as well, like the rise in obesity.

0

u/slug233 Jan 04 '25

I've always scoffed at "food insecure" in America. The only people who are "food insecure" are children being actively abused by their parents. That isn't a food issue.

24

u/CronoDAS Jan 04 '25

"Food insecure" and "at significant risk of literally dying from lack of access to calories" are two different things...

3

u/aeschenkarnos Jan 04 '25

Having sufficient cheap crap available to stuff your face to make the hunger pains stop is not “food security”. Humans are not biologically equipped to survive a diet of HFCS based sodas, glyphosate-sprayed grain breads and processed offal “meat”. It shows up as obesity and cancer and diabetes and so on, not as stick-thin limbs and swollen bellies.

3

u/slug233 Jan 04 '25 edited Jan 04 '25

Yeah, guess what... You can live a healthy life on just rice and beans, a multivitamin, and some cheap or free vegetables. No one in the USA needs to eat poorly.

About 15 years ago I lived on a food budget of a dollar a day for a month just to see what it was like, mostly to silence some of my friends' bitching about hunger in America. (I spent 90% of it on the first day on rice, dry beans, salt, hot sauce, eggs, frozen broccoli, a multivitamin, and oil.) It was super easy and I had money left over, and this was without using free food pantries/SNAP/food assistance/WIC/soup kitchens, etc... You have to actively try to starve here, or have a sub-80 IQ.

3

u/aeschenkarnos Jan 04 '25

In principle "you" can fix any problem "you" face. Great! What about the fifty million other people who have that problem?

Systemic problems require systemic solutions. There is no point in making policy for individual exceptions. If your solution begins with "everyone should just ..." then it won't work. Never in history has "everyone just ..." and they never will.

-2

u/slug233 Jan 04 '25

Well, fuck 'em then. If you can't survive in literally the easiest time to do so in the history of humanity... that is on you.

6

u/aeschenkarnos Jan 04 '25

Cool! "Fuck them." What an original and valid "solution". Have you given a moment's thought to how that's going to play out? Do you think they'll put up with being fucked? What do you think the consequences for everyone are going to be? Can you catch diseases from these people? Do you share motorways with them? If they (for example) accidentally set fire to something, will you be protected from that fire spreading, because you sneered "fuck them"? Do they maybe have jobs that can affect your life and health and livelihood?

I really don't follow the obstinate, aggressive way you people push back against the idea that we're all in this together. Are you just too dumb to understand what the consequences are?

1

u/slug233 Jan 04 '25 edited Jan 04 '25

I'm also a strict determinist. It isn't their fault or anything, it was always going to happen this way.

But anyone who is starving or pseudo-starving in the USA right this second is either mentally disabled, mentally ill, or too young or too old to care for themselves and being actively abused by their caretakers. Unless I have a time machine I can use to unfuck their genome and early life experiences, those people are toast.

I fail to see what I can do about that. I help those in my circle as I can, caring for the whole world is a recipe for despair, evil "greater good" solutions, and genocide.


7

u/unabashed_observer Jan 04 '25

Because corporations aren't dispossessing the majority of people of their wealth and opportunity; they're providing it. That is the main divergence between corporations and AI that the OP's article posits. By any objective measure, the average person today is healthier and wealthier than they have ever been, and corporations have been the reason behind all of that prosperity.

There's a certain strand of college-educated liberal who rolls their eyes when sentiments like "What's good for General Motors is good for America and vice versa" are expressed, but most people actually believe that. Our corporations are the envy of the world and generate millions upon millions of highly paid jobs. They are able to do this because they provide services of great value at a reasonable cost. Countries love successful corporations, so yeah, corporate input is generally important at the policy level of government.

1

u/BassoeG Jan 06 '25

sentiments like "What's good for General Motors is good for America and vice versa"

That stopped being true the millisecond cheaper alternatives to hiring American citizens became available, whether outsourcing to foreign slave labor or robots.

4

u/throwaway_boulder Jan 04 '25

What’s stopping me? I’m not an accredited investor so I can’t invest in VC.

Though I think that OpenAI and the others are overvalued. My hunch is that Meta and Google will win consumer AI, and open-source models will be good enough for most business applications (as with relational databases).

6

u/GerryAdamsSFOfficial Jan 04 '25 edited Jan 04 '25

There's a natural counterbalance to this in the form of politics. AI can't vote and humans can.

Politics in the real world doesn't work this way.

Votes rarely, if ever, meaningfully influence government policy. Marijuana legalization, exiting Iraq/the Middle East, etc. have always been broadly popular and yet remain total fantasies. For example, near-zero migration has been the key issue for UK voters for 20+ years, but immigration has risen. In almost all democracies, when asked if parties will let you vote for what you want, the answer is usually "no", but you get to choose whether the answer comes in blue or red. If you want to take this a step further, electoralism has beaten capital exactly zero times since before Lenin.

1

u/eric2332 Jan 05 '25
  1. The day may be coming when AI companies are stronger than governments and can overthrow them. Even today, one of the AI oligarchs is informally known as "President Musk". Things will get much worse if the top few AI companies both purchase and AI-flood the remaining media (besides Twitter, Facebook, the Washington Post, etc., which are already owned by tech executives).

  2. Manual tasks generally rely on a small set of skills, like visual comprehension and manipulating objects. Current AI can't perform these skills well enough to replace workers, but that may change in the next few years.

  3. I do own stock in Microsoft, Facebook, Google, etc., but only a tiny amount, not enough to influence decision making.

1

u/Porkinson Jan 05 '25
  1. This is true; the author even points this out as a possible way to avoid this outcome in healthy democracies. However, what happens when other state actors choose not to do this, and now perhaps have the ability to outcompete you because they don't need to create a welfare state for millions of economically useless humans?
  2. The economic pressure to solve these issues means manual labor will fall almost immediately as well; we are already seeing a lot of investment in humanoid robotics, and it simply seems unrealistic to me that we won't have competent humanoid robots in production less than 5 years after a proper agentic AGI exists.
  3. Nothing, but you don't know who the winners will be, and even then, what is stopping a (more) evil version of Sam Altman from assuming full control of it and disregarding any social or economic norm? And if he doesn't do it, what if someone else does? As mocked as Yudkowsky is for it, at least he offers a (crazy) solution: just bombing data centers.

1

u/BassoeG Jan 06 '25

However, what happens when other state actors choose not to do this, and now perhaps have the ability to outcompete you because they don't need to create a welfare state for millions of economically useless humans?

From an economic standpoint, isolationism: I've got an unlimited AGI labor force, so why would I have to import anything from foreigners? From a military defensive standpoint, MAD deterrence: their robot army might be more numerous than mine because they're spending all their resources on building additional killbots rather than sustaining their population, but nuclear ICBMs make that kinda redundant.

7

u/parkway_parkway Jan 04 '25

Consider a thought experiment, you are one of these AI gazillionaires:

You can press button A, which hoards all AI wealth for yourself. You are then utterly hated by everyone else in society, who does everything they can to stop you: they vote en masse against you, tax you, try to nationalise your company, try to assassinate you, riot; you are forced to live in a compound protected by robocops and you have to have everyone who comes in screened for weapons.

You can press button B and everyone in society gets everything they need and most of what they want. You completely pacify the population with massive production and lives of ease and relaxation; it's like a UBI of $200k per person at today's prices. Everyone loves you and is really thankful for being freed from their jobs; everywhere you go you're heralded as a hero, and you can give away as much as you like.

In both scenarios there is zero difference to your personal lifestyle or projects; you have an unlimited capacity to accomplish anything you can imagine. If you want a personal moon base, pressing B rather than A doesn't slow it down at all.

Which button do you press?

12

u/Raileyx Jan 04 '25

Gazillionaires tend to be addicted to money and power (hence why they are gazillionaires in the first place; the status selects for these traits), so I wouldn't trust them not to press button A. They'd do it just because it means they're richer in comparison.

Sure, I would press button B. But unlike the people who hoard obscene wealth, I am also not deranged, and deranged is absolutely what it is, given how much damage their selfishness is doing to society.

Most people don't understand how truly unhinged it is to own a billion dollars. To have that much and still want more, while the people who earn this money for you often have so little, you have to be a little sick in the head. The real question isn't which button I would press. It's which button that person would press.

I call even odds. And perhaps that's already too optimistic.

16

u/charizard_monster Jan 04 '25

Zuckerberg’s experience of providing free software with amazing utility hasn’t exactly resulted in universal affection for the guy.

14

u/VovaViliReddit Jan 04 '25

Social networks are designed to be extremely addictive and probably provide net negative utility to humanity.

3

u/AuspiciousNotes Jan 04 '25

n=1, but I like him for it!

4

u/wavedash Jan 04 '25

I think the problem with this thought experiment is that job losses are not going to just happen overnight (with the press of a button). They will be gradual, whether over years or decades or maybe even centuries. It might be less of a button and more of a slowly turning crank.

At first, there will ABSOLUTELY be many laypeople defending crank A, and they will support politicians who do the same. Maybe not a majority of the population, but plausibly enough to matter. Maybe UBI gets delayed, or they campaign to reduce how big it is, or they manage to get strict requirements added to it such that it's not really universal anymore (akin to drug testing for welfare). In the long term, this party will almost certainly shrink, but that could take a while.

6

u/aeschenkarnos Jan 04 '25 edited Jan 04 '25

They just had a massive opportunity to choose, and they all chose button A all day long. (Except Mark Cuban, Taylor Swift, and a few others.)

1

u/eric2332 Jan 05 '25

A lot of them joined the Giving Pledge.

3

u/GerryAdamsSFOfficial Jan 04 '25 edited Jan 04 '25

Which button do you press?

Reality is not quite this clear and cooperation often is not rewarded.

In the overwhelming majority of the real world people immediately choose option A without hesitation. Your thought experiment already exists in Rio de Janeiro and South Africa.

2

u/BassoeG Jan 06 '25

You are then utterly hated by everyone else in society, who does everything they can to stop you: they vote en masse against you, tax you, try to nationalise your company, try to assassinate you, riot; you are forced to live in a compound protected by robocops and you have to have everyone who comes in screened for weapons.

Two factors;

4

u/jb_in_jpn Jan 04 '25

You're assuming a great deal of compassion for someone in that position were they to press B.

We've got real-world examples now (Musk, Bezos, etc.) of how deranging and intoxicating power and never-ending wealth are. They're definitely pressing A.

7

u/parkway_parkway Jan 04 '25

There are also Buffett and Gates and the others who have signed the Giving Pledge.

-10

u/[deleted] Jan 04 '25

[removed]

8

u/parkway_parkway Jan 04 '25

Bill Gates and Melinda French Gates' total giving to the foundation from inception through 2023: $59.5 billion

Warren Buffett's total giving to the foundation from 2006 through 2023: $39.3 billion

-4

u/[deleted] Jan 04 '25

[removed]

2

u/dsbtc Jan 05 '25

This is why many of them will never press "B". Gates has spent years and billions of dollars on eradicating disease, yet millions of idiots blame him for Covid. What motivation do they have for charity if people will hate them for it?

1

u/slatestarcodex-ModTeam Jan 05 '25

Removed low effort comment.

2

u/NotUnusualYet Jan 04 '25

Hypothetically, you only need one multi-trillionaire to press B.

1

u/Porkinson Jan 05 '25

It literally doesn't matter, because there are multiple gazillionaires who have this power. Think of the many states that have the chance to do this: it only takes one of them to do it and consolidate power for the rest to be under threat of annihilation or control.

2

u/ravixp Jan 04 '25

For some of you, the idea of runaway AGI has permanently damaged your ability to reason about it.

Imagine a general-purpose robot which can build anything. It can even build more copies of itself. Wow, that robot would revolutionize every industry! When somebody invents that robot, it’ll completely upturn the economy! Prepare for a shock!

But, if you’re waiting around for that magical universal robot, you’ll miss the fact that the robotic manufacturing revolution is already happening. It turns out that you can capture nearly all of the value it was going to produce with much simpler robots that are easier to build, and that’s going to happen long before the Universal Robot is invented.

AGI is a thought experiment, a spherical cow, a stand-in for automation in an economist’s calculations. It’s not, like, a real thing that will actually be invented or achieved. It’s a placeholder representing the idea that AI could eventually do any specific task.

If you’ve been steeped in theories that AGI can accelerate AI research itself and start an intelligence explosion, then it makes sense to think of AGI as some kind of inflection point. That’s just a thought experiment, and there’s no particular reason to believe that it’s going to happen. But that kind of thinking primes your mind to see AGI as a significant milestone, with gradual progress pre-AGI and sudden progress after. 

Meanwhile, if you haven’t been poisoned by theories about an intelligence explosion, you’ll notice that we don’t actually need AGI to get most of the benefits of AGI. Weaker systems will have already captured most of the value by the time “AGI” is achieved.

4

u/red75prime Jan 04 '25 edited Jan 04 '25

It turns out that you can capture nearly all of the value it was going to produce with much simpler robots that are easier to build, and that’s going to happen long before the Universal Robot is invented.

I don't get your argument. Those specialized robots are designed for specific purposes by engineers, who comprise around 0.1% of the working population (the number might be a bit off). The "universal robot" removes this bottleneck. The same goes for AGI: even if recursive self-improvement is bottlenecked by physical/algorithmic barriers, horizontal scaling of AGIs will at some point dominate the growth of civilizational intellectual capacity (in the medium term, the human brain cannot be scaled beyond 20 watts and 20 years to mature).

1

u/ravixp Jan 05 '25

Yeah, but it’s a power law thing: you capture most of the value by automating the most valuable tasks, which are the ones you’d automate first. We can already see this trend playing out with existing AI systems, where they’re useful for marginal tasks that weren’t valuable enough to hire a human for, which has limited the overall economic impact of AI so far.

2

u/Annapurna__ Jan 04 '25

I agree with you, which is why AGI is such a problematic term.

To me the significant milestone will be ASI, at which point everything gets weird, assuming it somehow permeates society.

Everything else is just gradual advancement. That being said, I found this post to be a good thought exercise about the near future.

1

u/ScaryMage Jan 05 '25

What happens when governance at various levels is delegated to agentic superintelligence, though?

This is what separates AGI/ASI from oil: it seems unlikely that we invent ASI and then don't deploy it for governance.

1

u/BassoeG Jan 06 '25

Aligned AGI and superintelligence do not equal utopia. You are merely ensuring that the most powerful technology in human history is reliably controllable by the actors who will most inflict the intelligence curse.

Ironically, misaligned AGI might fix this. Assuming the anti-open-source crowd actually believe their own claims (and aren't just the hired propagandists of their corporate masters), and assuming those claims are right, that's a MAD deterrent. So long as paying a BGI is cheaper than dealing with the consequences of terrorism (open-sourced protein-folding models for building bioweapons and nanotechnology in the hands of desperate people who know their Destruction is Assured anyway if they're just left to starve as economically redundant, and who intend to make it Mutual)...

-7

u/angrynoah Jan 04 '25

Once AI systems that are better, cheaper, faster, and more reliable than humans at most economic activity are widely available,

The sun will incinerate the earth before that happens.