FeepingCreature
No bio...
User ID: 311
To be fair, if you can pick the reference class, you can do anything.
Firstly, I think it's likely that the first AI we build that attains "human-level reasoning" (by whatever rough measure of "reasoning per unit of time") will be pretty close to at least a local maximum of compute capabilities, and won't easily be scaled up by a factor of 1000 overnight.
Why? None of the current neural networks represent a maximum of compute for their host company, or even come within an order of magnitude of it.
To be fair, humans are already at the end of a bootstrap sigmoid, and that was via a very long feedback loop.
Right, but corporations that are staffed by humans aren't smarter than humans and can't become smarter than humans. "Being a corporation" doesn't remove the scaling limits that specifically constrain the human brain. If you remove that limiting factor, then yes, corporations are scary too.
Yes, that is what I believe. And yes, when I asked this, I double-checked both comments to see whether they fit the justification. I went up this hill with deliberate intent. :)
I'm not saying it requires zero evidence, I'm saying if you cite that you have evidence, you have a particular obligation to provide it.
Everyone should always have evidence, but saying something without evidence is just having an unjustified opinion. Saying you have evidence and then not providing it is actively misleading.
This stat is misleading because it means a few top band high earners (and so high tax contributors) can "pay for" a load of useless layabouts in this statistic.
How is that misleading? Admittedly this suggests a third option of "only accept immigrants likely to contribute lots of taxes", but it's surely relevant to the question that between "current immigration" and "no immigration", the "current immigration" option still leads to higher sum tax revenue.
Do immigrants actually support immigration? My intuition would be that immigrants are for it to the degree that they're in the social sphere that profits from immigration and start being against it as they accumulate wealth.
you chose to paraphrase similar statements as different in order to justify your one-sided demand for "evidence"
The two statements are:
Lockdowns made... some kind of sense in 2019/early 2020 when we had few other tools and the pandemic could have still turned out to be deadlier based on the reports coming out of China.
And
lockdowns never made sense at any point w/re to covid; there was zero scientific evidence to support them, lots of historical evidence against them
The second of those marshals "evidence" to support itself; the first does not. Claiming evidence and not providing any is worse than not claiming any to begin with.
Looking forward to your post!
There's a difference between "I think lockdowns sort of made sense" and "science says that lockdowns don't make sense." (It's that one of those gets called out on not citing sources.)
If you say "there is evidence", you're gonna have to expect people saying "well show it then."
Can you actually cite the evidence against, please?
I think the arguments about insect welfare at least deserve consideration.
Overhyped, like all Facebook models. An interesting and perhaps unnerving milestone in marrying LLMs to game agents, but it's narrowly specialized and it only works for blitz – for now.
"Nah, that dog's not so smart. I've beaten him three games out of five!"
What timeframe would you have suggested in advance for an AI using natural language to form and break dynamic alliances at a superhuman level?
The StarCraft and DotA AIs were a massive disappointment. I thought they'd push AI research forward by allowing agents to speculate about other agents with hidden intentions, really focus in on the fog-of-war problem. Instead, they just threw a lot of RL at the problem, achieved some technical success, declared victory and immediately left the field. Even the best DotA AI still couldn't handle cloaked enemies at all.
My version of this take: being safe feels good, but being demonstrably right about the threat feels useful and actionable. GP was trying to make their friend feel better, but at the level of the "group strategy module", they were raining on their parade. Why were they denying the group rhetorical ammunition?
I think it is true as a general statement. In the battle between the subconscious and reason, reason is going to lose every single time.
Well that just intuitively sounds wrong to me. /s
It sounds like you're trying to do an end run around "gay bad, trans bad" by assuming it as given, then arguing "it's abuse because it leads to gay/trans". But this entirely trades on the negative connotations of "abusive", not of "gay/trans".
I mean, yes. Sometimes we commit suicide, sometimes cells commit cancer. I didn't say it was good, I'm saying it's not unusual. It's the sort of thing the brain does, just by a longer path.
This is akin to saying "FPTP cannot fail, it can only be failed." A system that can only work if the populace ignores strategic considerations, and otherwise outsizedly rewards the people who actually do vote strategically, is already broken.
Brains already regulate their neurochemistry. How is this not just more of brains regulating their neurochemistry, via a much longer control loop?
To be clear, I can see the pragmatic argument of "your biology is a lot better at it than your cognition". I don't see where the aesthetic argument comes from.
Seems a lot to pull from a single correlation.
My expectation is still that early takeoff is so powerful (because the overhang is so large) that the multipolar scenario basically cannot happen. Whoever goes first implements the pivotal action anyways, and successfully. The only positive outcome is from lucking into a compatible sovereign.
I presume this is our primary disagreement.
This is in line with an arms race against the (speculative) China AGI threat, and leads us straight into the Singleton's maw. Buh-but it'll be a good Singleton, amirite comrade?
Does anyone involved actually believe this? The whole point of the idea of burning all the GPUs is that we're currently facing a smorgasbord of bad singletons, and the only thing we can do is sabotage the slot machine so we can keep spinning it until we figure out which option gives us a payout, rather than the current expected outcome, which is that a hand with a knife comes out and shivs us in the gut.
Who actually thinks current AGI projects lead to aligned superintelligence? Name names, so that Yud can go yell at them some more.
(My own pet theory is that we'll get a good singleton by prompt engineering GPT-4. I believe this primarily because it will be hilarious and deeply, deeply embarrassing for the species.)
edit: The sense I get as a singularitarian is that they don't disagree that a one-party/one-world totalitarian state is the most dangerous thing imaginable. Rather, it's that one-world totalitarianism via singleton is a black hole we're falling into at astonishing speed; that if we win, it will be by steering onto the one trajectory where the place we fall in happens to be survivable for humans; that we have no idea how to do that; and that most of the engines on our spaceship, most of the incentive gradients, are currently pointed straight down into it. "Let's not do that" would be great if, you know, we could.
Elon "FSD in 2018" Musk didn't steal anyone's money to get there?

Okay, but what it's using von Neumann for in this instance is inputting numbers on a spreadsheet. There's not an unlimited number of positions at the company where it's getting full use of von Neumann's intelligence, and moving the best people to those positions is also limited by competence. And it only gets the one von Neumann, so while in principle it can reach von Neumann tier competence in any capability, it cannot necessarily reach von Neumann tier competence in every capability. And it still can't inherently beat von Neumann's peak output in any particular skill he possesses.
I think the better framing here is that you can use cooperative ventures to remove limits and drawbacks that prevent people from reaching their peak performance. But I don't think you can get the aggregate to exceed the individual's performance that way.