
curious_straight_ca

1 follower   follows 0 users   joined 2022 November 13 09:38:42 UTC

No bio...

User ID: 1845

There's a clear difference between him and Tucker Carlson, come on. His social circle on twitter is the 'dissident right'

The thing I am defending is that 'bog standard conservatives' mostly don't get banned, including when they argue against gay/trans/abortion. This is true on twitter, on discord, etc. The Distributist is not a bog standard conservative; he is much more """""dissident-right""""". I agree the far right gets censored a lot and it's bad!

I was responding to 'bog-standard conservative thought', earlier. There are plenty of small servers with open nazis who don't get banned on discord, and bigger servers with open nazis that just recreate every so often. Yeah, far-leftists get banned a lot less on discord.

I've seen a trad-cath guy try and start no less than 3 Discord servers until he learned that it's not a good place for him to organize.

What kind of stuff was being posted?

elontwitter bans leftist gimmick accounts for dumb reasons too? Pre-elon twitter definitely banned rightists more, but it's getting a lot closer recently. Discord doesn't censor conservatives at all afaik? As opposed to the far right, who they do censor a bit.

the specific thing i was thinking of was big politics discord servers. Twitter technically qualifies i think; there are a lot of "debates" in the comments of big posts.

Okay, to back up a bit: I'm arguing that today's LLMs couldn't be agentic even if they wanted to be, so their behavior shouldn't "lower one's p(doom)". Future LLMs (or not-exactly-LLM models), being much more capable and more agentic, could easily just have different properties.

They can obviously sketch workable plans

They can write things that sound like workable plans, but they can't, when given LangChain-style abilities, "execute on them" in the way that even moderately intelligent humans can. Like, you can't currently replace a median IQ employee directly with huge context window GPT-4, it's not even close. You can often, like, try and chop up that employee's tasks into a bunch of small blocks that GPT-4 can do individually and have a smaller number of employees supervise it! But the human's still acting as an agent in a way that GPT-4 isn't.
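To make the "chop it up" pattern concrete, it looks roughly like this - a minimal sketch, where `call_llm` and `human_review` are hypothetical stand-ins rather than any real API:

```python
# Sketch of the decomposition pattern: the model handles small pre-defined
# pieces, while a human supervisor sequences and checks them. The supervisor
# is still the one doing the actual "agent" work here.

def call_llm(prompt: str) -> str:
    """Stand-in for a single model completion (one GPT-4 call)."""
    raise NotImplementedError

def human_review(subtask: str, draft: str) -> str:
    """The human checks and fixes each piece before it ships."""
    return draft

def do_the_job(subtasks: list[str]) -> list[str]:
    results = []
    for subtask in subtasks:
        draft = call_llm(f"Complete this step:\n{subtask}")
        results.append(human_review(subtask, draft))
    return results
```

Note that the decomposition into subtasks, the sequencing, and the quality control all still come from the human.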

And there's no good reason to suspect this stops working at ≤human level.

I think the alignment concern is something like - once the agents are complex enough to act on plans, that complexity also affects how they motivate and generate those plans, and then you might get misalignment.

You must get that such feats are rare even within humans, and people capable of pulling them off are enormous outliers?

I was thinking of 'guy who works his way to the top of a car dealership', not Altman, lol. AI models can't yet do the kind of long-term planning or value seeking that 85 IQ humans can.

For most cognitive tasks, GPT-4 beats the average human, which is something I'm more than comfortable calling human level AI!

Most small-scale cognitive tasks! If this was true, we'd have directly replaced the bottom 20% of white-collar jobs with GPT-4. This hasn't happened! Instead, tasks are adapted to GPT-4's significant limitations, with humans to support.

(again, i'm talking about current capabilities, not implying limits to future capabilities)

The fact that you can even have the absence of those properties in something smarter than the median human is reassuring enough by itself

I don't think it's worrying that it can't make plans against us if it can't make plans for us either! Like, there's no plausible way for something that can't competently execute on complicated plans to have an incentive to take 'unaligned' actions. Even if it happens to try a thing that's slightly in the direction of a misaligned plan, it'll just fail, and learn not to do that. So I don't think it's comforting that it doesn't.

(i'm misusing yudconcepts I don't exactly agree with here, but the point is mostly correct)

If I had to guesstimate GPT-4's IQ based off my experience with it, I'd say it's about 120, which is perfectly respectable if not groundbreaking

I don't think it's anywhere close to the broad capabilities of a 120 IQ human, and still isn't that close to 100 IQ (at the moment, again, idk about how quickly it'll close, could be fast!). It can do a lot of the things a 120 IQ human can, but it doesn't generalize as well as a 120 IQ human does. This isn't just a 'context window limitation' (and we have longer context windows now, it hasn't solved the problem!), what humans are doing is just more complicated!

It's trivial to convert an Oracle into an Agent, all you have to do is tell it to predict how an Agent would act, and then figure out how to convert that into actions. Given that there's no bright line between words and code... Besides, I'm sure you've read Gwern on Tool AI vs Agentic AI.

Right, and my point is that current AI is so unintelligent that this doesn't work! They can't predict how agents act effectively enough to be at all useful agents. So the safety of current oracle AIs doesn't tell us much about whether future agent AIs will be safe.
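For concreteness, the "wrap the oracle in a loop" construction really is trivial to write - a minimal sketch below, with `oracle` (a completion call) and `execute` (an action/tool layer) as hypothetical stand-ins. The problem isn't the wrapper, it's that current models lose the plot after a few iterations of it:

```python
# The trivial oracle-to-agent wrapper: ask the predictor what an agent would
# do next, execute that, feed the observation back in, repeat. `oracle` and
# `execute` are hypothetical stand-ins for a completion call and a tool layer.

def oracle(prompt: str) -> str:
    """Predict what a competent agent would do next."""
    raise NotImplementedError

def execute(action: str) -> str:
    """Carry out the proposed action, return what happened."""
    raise NotImplementedError

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = f"Goal: {goal}\n"
    for _ in range(max_steps):
        action = oracle(history + "Next action?")
        observation = execute(action)
        history += f"Action: {action}\nObservation: {observation}\n"
    return history
```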

I actually think that future less-but-still-subhuman agent AIs will seem to be safe in Yud's sense, though. No idea what'll happen at human-level, then at superhuman they'll become "misaligned" relatively quickly, but [digression]

I personally expected, around 2021, that commensurate with my p(doom) of 70%, even getting a safe and largely harmless human level AI would be difficult

GPT-4 isn't human level though! It can't, like, play corporate politics and come out on top, and then manipulate the corporation to serve some other set of values. So the fact that it hasn't done that isn't evidence that it won't.

I also expected (implicitly) that if something along the lines of RLHF were to be tried, it wouldn't work, or it would lead to misaligned agents only pretending to go along. Both claims seem false to my satisfaction.

Right, but they're "going along" with, mostly, saying the right words. There's not the intelligence potential for anything like deep deceptiveness or instrumental convergence or meta-reflection or discovering deeper Laws of Rationality or whatever it is yud's pondering.

I think what happened is the single tilde matches with the tilde before the 0, but instead of strikethroughing the whole block it only goes to the end of the line

test test

test test

test ~ test

The reason my p(doom) fell so hard is because of what it was load-bearing on, mostly Yudkowsky's earlier works claiming that human values are fragile and immensely unlikely to be successfully engineered into an AI, such that a sufficiently powerful one will inevitably start acting contrary to our interests.

GPT-4 isn't doing things like - creating its own large-scale plans or discerning moral values or considering moral dilemmas where it will participate in long-term social games - though. All this proves is, in Yud's strange terms, that subhuman AI can be a safe "oracle". I don't think he'd have disagreed with that in 2010.

I don't know of any reason to assume that we're particularly far from having economically useful autonomous agents, my understanding is that current context windows are insufficient for the task

To clarify, I'm not saying it's not coming; I'm saying we don't have access to them at this exact moment, and the GPT-4 "agents" have so far failed to be particularly useful. And agents doing complicated large-scale things is when the alignment stuff is supposed to become an issue. So it's not much reason to believe AIs will be safer.

Not that I agree with the way Yud describes AI risk, I think he's wrong in a few ways, but that's a whole other thing.

yeah i agree that's bad, but that's not the central example of conservative thought. there are plenty of places 100x larger than us where conservatives and liberals debate, so op's reason isn't correct

I don't think it makes any sense to ""update"" on how corrigible or agentic LLMs are? They're still, like, small children, and they can't even really execute on complex 'agentic' plans at all (as demonstrated by the failure of the whole LLM agents thing).

2s get a lot of ideas from 1s and so do their employees/followers, so 1s still have significant power (in a 'your actions have significant impact on the future' sense, if not a 'you can order a bunch of people around' one) in practice. Something something so-called practical men are slaves of long-dead philosophers. You say 'impact on the discourse', the discourse feels like it's dominated by people who've been around for a while, whether they're safetyist or not.

but in the end Silicon Valley has always bowed down to Washington, and to some extent to Wall Street.

Yeah, but there are quite a few EAs in both places!

I wonder what absolute morality looks like for AGIs and their relationships with the material world and other AGIs, as opposed to just humans. That seems as important as, if not more important than, 'how will AIs relate to us', in the same way that how we relate to animals is of secondary importance to us.

That probably just means 'annoyingly recruiting for a pointless, antagonism-causing culture-war cause', as opposed to a more general meaning. I half-remember there being posts with calls to action to write to your representative about stuff like YIMBYism that weren't taken down because nobody cared.

You're the guy who keeps getting banned on a new account, right? Not confident, but it fits.

http://datasecretslox.com, a forum with various old SSC comment-section posters

There are a few of those, but they're still <1/10th of all toplevels.

Beliefs, ideologies, etc. have causes. There are physical, mechanical, causal reasons why all the brightest people adopted progressivism over the past millennium. It's not just a tiny coincidence that got magnified by building on itself! One can hope the Spirit of the Times is reactionary, that a quadrillion poorly understood contingencies line up to magnify your political actions rather than retard them. But the march of technology, the advance of AI - these don't feel particularly likely to aid conservatism or 'retvrn' to me.

I actually found this year's guest reviews to be worse than in the past, especially the Jane Jacobs, The Educated Mind, and Man's Search for Meaning ones. A few of the others were good. I haven't reduced my consumption of Scott content at all, though I find the 'dull' posts about AI, gay younger brothers, pharmacology, etc. interesting in themselves, so that might be the difference.

This seems more like a sneer than a contribution. Themotte's opinions, other than those of the few white nationalists, don't get you banned on most large Western websites; e.g. even pre-Musk, most of the HBD bloggers had unsuspended twitter accounts.

Is DSL really way further to the right than this place? That wasn't my impression

The SSC diaspora has fractured into a number of different communities with different leans - the ACX discord leans left iirc, we lean right, etc