33
u/AdAnnual5736 1d ago
I always think about how quickly AlphaGo went from “weak professional” and beating the European champion to beating Lee Sedol. It’s what I think of any time someone says the last 10% of the way to human-level AGI will be the hardest.
30
u/IronPheasant 1d ago
There's so much coping going around, yeah. The 'last 10%' will be the easiest, since by then the network will have enough domain optimizers to finish creating itself. It's the tedious human feedback during the bootstrapping that's the hard part.
Well, that and the hardware to run the thing on. I'm pretty sure the '100,000 GB200' datacenters coming online this year will be comparable to the human brain in network size, and millions of times faster in raw speed.
Things are gonna snowball hella fast. Maybe not 'fast' to those who want everything to happen tomorrow, but it's insane to those of us who were amazed when StackGAN was released ten years ago. I knew it meant large gains were coming, but even I had vastly underestimated how large and how fast they would be. I've endeavored to be less wrong since then, and these days I pretty much only pay attention to scale.
3
u/Separate_Lock_9005 8h ago
We need a few more OOMs before we get to human brain size. Believe it or not, we have a lot of neurons and connections.
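Rough back-of-envelope version of that, with both figures as loose assumptions (the ~1e14 synapse count is the usual textbook ballpark; the ~1e12 parameter count is a public guess about today's largest models, not a confirmed number):
```python
import math

# Back-of-envelope OOM comparison; both numbers are rough assumptions.
brain_synapses = 1e14          # commonly cited ballpark for the human brain
frontier_model_params = 1e12   # loose public estimate, not a confirmed figure

ooms_to_go = math.log10(brain_synapses / frontier_model_params)
print(f"Roughly {ooms_to_go:.0f} orders of magnitude apart")  # ~2
```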
-4
u/dynamite-ready 22h ago
On the other hand, self-driving cars seem to be 5+ years overdue at this point. But I'm also wary.
8
2
u/AdAnnual5736 10h ago
I’m not sure why you’re being downvoted for this — you do raise a good point that self-driving cars aren’t where we’d expect them to be given how quickly everything else has advanced.
47
u/Arcosim 1d ago
This guy at least managed to win one of the three matches, which is a massive achievement all on its own.
57
u/REOreddit 1d ago
There were five matches and he won the fourth one. I watched the documentary not long ago, and I think even the guys from the DeepMind team seemed genuinely happy for him that it was 4-1 instead of 5-0.
10
u/yaosio 1d ago
Now Go is in the same position as Chess, where the best engines can't be beaten by a human. It's kind of interesting to think that so much time and effort was put into it, so many arguments about when computers would beat Chess and Go masters, and now people can run impossible-to-beat software on their home PCs.
25
u/Hot-Problem2436 1d ago
I remember this day. It was the last day that I played a serious game of Go and the first day I started learning about machine learning. I am now sad for two reasons.
1
u/hiphopapotamus 9h ago
Same here, honestly and literally. I now work in AI, and I still haven't played a game of Go since this match ended.
1
u/Hot-Problem2436 9h ago
I was a Kiseido regular for 10 years... now I just play teaching games IRL for people who want to learn the game. What a bummer.
Now I build AI tools for the government. An even bigger bummer.
22
u/magicmulder 1d ago
Go was mostly a big delusion. People thought humans understood the game in a way a machine never could, until a machine showed them it was the other way around.
8
u/Unique-Particular936 Intelligence has no moat 15h ago
Never bet too much on humans forever beating machines on a 19x19 integer grid of 0s, 1s, and 2s.
3
6
u/despotes 1d ago
Quite on point. A lot of people think they're irreplaceable, or that AI won't improve.
3
1
u/Weekly-Ad9002 ▪️AGI 2027 5h ago
AlphaGo beats Sedol 4-1. AlphaGo Zero beats AlphaGo 100-0. No Go player would ever play AlphaZero now. The same thing is happening again, but instead of a specific domain, it's general intelligence. People are not ready. Least of all the non-tech, non-engineering folks who really don't know what's coming.
-1
u/NyriasNeo 23h ago
"I thought AlphaGo was based on probability calculation, and that it was merely a machine. But when I saw this move, I changed my mind. Surely, AlphaGo is creative. This move was really creative and beautiful"
What he does not understand is that creativity can come from random fluctuation plus the recognition of what is good. If a million monkeys type randomly for a million years, Shakespeare will emerge, as long as there is someone to recognize it. We are at the point where, computationally, we can do something similar. (The actual algorithm is more efficient than just random generation + evaluation, but the DQN training does start with random trials before getting into more "directed" exploration.)
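A toy sketch of that "random trials plus a recognizer" point (a weasel-program-style demo; the target phrase and scoring function are made up for illustration, and this is not AlphaGo's actual algorithm):
```python
import random
import string

# Toy "monkeys + a recognizer" demo. The recognizer scores candidates; random
# generation alone eventually hits the target, but a little selection pressure
# ("directed" exploration) gets there enormously faster.
TARGET = "creative and beautiful"
ALPHABET = string.ascii_lowercase + " "

def score(candidate: str) -> int:
    """The 'recognition' step: how many characters are already correct."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def pure_random_search(rng: random.Random, tries: int = 100_000) -> int:
    """Independent random strings: recognition alone, no direction."""
    return max(score("".join(rng.choice(ALPHABET) for _ in TARGET))
               for _ in range(tries))

def directed_search(rng: random.Random) -> int:
    """Random trials plus selection: keep what the recognizer likes."""
    candidate = [rng.choice(ALPHABET) for _ in TARGET]
    steps = 0
    while score("".join(candidate)) < len(TARGET):
        i = rng.randrange(len(TARGET))
        if candidate[i] != TARGET[i]:          # only re-roll wrong positions
            candidate[i] = rng.choice(ALPHABET)
        steps += 1
    return steps

rng = random.Random(0)
print("best score from pure random search (out of", len(TARGET), "):",
      pure_random_search(rng))
print("steps to full match with directed search:", directed_search(rng))
```
Pure random generation plus recognition does get there in principle, but even this tiny bit of selection makes the search dramatically faster, which is roughly the "more efficient than random generation + evaluation" point.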
1
u/GreatBigSmall 16h ago
That's a common misconception. Just because something is random and infinite doesn't mean it contains all knowledge of the world.
For a simple example, note that there are infinitely many numbers between 0 and 1, and none of them is 2.
1
u/nextnode 14h ago edited 14h ago
You point out a common misconception, but in this case they refer to a domain which we already recognize must contain undiscovered creative works, and their process finds any such work in expected finite time (or guaranteed finite time, with a small adjustment).
The misconception you refer to is more that just because a set is infinite doesn't mean it contains everything.
While that is true, if a countable set does contain the thing you're looking for, it can be found in finite time.
The random-fluctuation part they mentioned is also not needed, and rather makes the argument weaker in this case.
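For what it's worth, a quick sketch of why both versions hold, under assumptions I'm adding (each independent random trial hits the target with some fixed probability p > 0; the "small adjustment" is reading the process as systematic enumeration):
```latex
% Sketch, not the commenter's exact model.
% (1) Random trials: if each independent trial produces the target work with
%     probability p > 0, the number of trials T until the first hit is
%     geometric, so the expected search time is finite:
\[
  \Pr(T = k) = (1 - p)^{k-1} p, \qquad
  \mathbb{E}[T] = \sum_{k \ge 1} k \, (1 - p)^{k-1} p = \frac{1}{p} < \infty .
\]
% (2) Enumeration: list the countable set as x_1, x_2, ...; a target sitting at
%     index n is reached after exactly n steps, a guaranteed (not merely
%     expected) finite bound.
```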
-19
u/lamJohnTravolta 1d ago
I need more comment karma to post a fucking weird screenshot I got from Gemini. Please upvote this comment so I can post it.
125
u/DISSthenicesven 1d ago
"I thought AlphaGo was based on probability calculation, and that it was merely a machine. But when I saw this move, I changed my mind. Surely, AlphaGo is creative. This move was really creative and beautiful" - Lee Sedol