SnapDragon
0 followers   follows 0 users  
joined 2022 October 10 20:44:11 UTC
Verified Email

User ID: 1550


Remember Scott's post about how 2100 "isn't a real year"? You're making that mistake, times a thousand. The question of "based on physics, how many consciousnesses can our civilization support" has almost nothing to do with our current existence; any answer, and any pressing need to answer, is way beyond the future event horizon where the world will be unrecognizable to us.

What you're doing now is the equivalent of ancient tribes sitting by their campfire, taking a break from their stories about how the Moon Goddess hides from the Sun God, to talk about how the Fed should optimally set interest rates to avoid a recession. It's beyond pointless.

I'm assuming you didn't watch the GPT-4 announcement video, where one of the demos featured it doing exactly that: reading the tax code, answering a technical question about it, then actually computing how much tax a couple owed. I imagine you'll still want to check its work, but (unless you want to argue the demo was faked) GPT-4 is significantly better than ChatGPT at math. Your intuition about the limits of AI is 4 months old, which in 2023-AI-timescale terms is basically forever. :)

VERY strong disagree. You're so badly wrong on this that I half suspect that when the robots start knocking on your door to take you to the CPU mines, you'll still be arguing "but but but you haven't solved the Riemann Hypothesis yet!" Back in the distant past of, oh, the 2010s, we used to wonder if the insanely hard task of making an AI as smart as "your average Redditor" would be attainable by 2050. So that's definitely not the own you think it is.

We've spent decades talking to trained parrots and thinking that was the best we could hope for, and now we suddenly have programs with genuine, unfakeable human-level understanding of language. I've been using ChatGPT to help me with work, discussing bugs and code with it in plain English just like a fellow programmer. If that's not a "fundamental change", what in the world would qualify? The fact that there are still a few kinds of intellectual task left that it can't do doesn't make it less shocking that we're now in a post-Turing Test world.

I don't ask it to write code and then plunk it into my projects - I agree that it sometimes gets things wrong there (although you can point out errors and it'll acknowledge and often fix them). What I use it for is to talk through my problems (it's not a rubber duck, because it replies with knowledge I didn't have before). It uses its vast breadth of knowledge to help me with things like syntax, library functions, simplifying code, debugging a compile error, etc. ChatGPT is a bit rougher, but Bing AI has even been smart enough to challenge me when I'm giving it mistaken information, asking follow-up questions that get me to the root of my problem (like a coworker would).

So, I don't really want to argue the Chinese Room philosophy of when language understanding starts to "count". All I know is what my lying eyes are telling me: I'm now conversing with my computer in completely natural language, and it hasn't once failed to understand me. (Its reply hasn't always been helpful or right, but it's always made sense.) It's important to resist the cynicism of finding ways to break the LLM and going "oh, it's lame after all". Even if LLMs somehow never get any smarter, even if they're not on the critical path to AGI, just the capabilities we've already seen are enough for them to change the world.

Here, since you asked for specifics, let me recount one of the most impressive conversations I had with Bing AI. (Unfortunately it doesn't seem to save chat history, so this is just paraphrasing from memory. I know that's a little less impressive, sorry.)

Me: In C++ I want to write a memoized function in a concise way; I want to check and declare a reference to the value in a map in one single call so I can return it. Is this possible?

Bing: Yes, you can do this. (Writes out some template code for a memoized function with several map calls, i.e. an imperfect solution).

Me: I'd like to avoid the multiple map calls, maybe using map::insert somehow. Can I do this?

Bing: Sure! (Fixes the code so it uses map::insert, then binds a reference to it->second, so there's only one call).

Me: Hmm, that matches what I've been trying, but it hasn't been compiling. It's complaining about binding the reference to an RValue.

Bing: (explanation of what binding the reference to an RValue means, which I already knew.)

Me: Yes, but shouldn't it->second be an LValue here? (I give my snippet of code.)

Bing: Hmm, yes, it should be. Can you tell me your compile error?

Me: (Posts compile error.)

Bing: You are right that this is an RValue compile error, which is strange because as you said it->second should be an LValue. Can you show me the declaration of your map?

(Now, checking, I realize that I declared the map with an incorrect value type and this was just C++ giving a typically unhelpful compile error.)

I want to emphasize that it wasn't an all-knowing oracle, and back-and-forth was required. But this conversation is very close to what I'd get if I'd asked a coworker for help. (Well, except that Bing is happy to constantly write out full code snippets and we humans are too lazy!)
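For the curious, the pattern the conversation converged on can be sketched like this (a hypothetical memoized Fibonacci of my own, not the actual code from the chat): map::insert returns an {iterator, bool} pair, so a single call both performs the lookup and claims the slot, and it->second is a perfectly good LValue to bind a reference to.

```cpp
#include <map>

// Hypothetical example of the single-lookup memoization pattern discussed
// above. map::insert returns {iterator, bool}: one call both checks the
// cache and reserves a slot, and we bind a reference to it->second.
long long fib(int n) {
    static std::map<int, long long> cache;
    auto [it, inserted] = cache.insert({n, 0});
    long long& value = it->second;  // an LValue; std::map never invalidates
                                    // references on later insertions
    if (!inserted) return value;    // cache hit: already computed
    value = (n < 2) ? n : fib(n - 1) + fib(n - 2);
    return value;
}
```

(In C++17, try_emplace(n, 0) would be the slightly more idiomatic spelling, but insert is what we were discussing. And my original bug - declaring the map with the wrong value type - would make that reference binding fail with exactly the kind of cryptic RValue error I was seeing.)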

Sounds like some sort of insanely well read but very dim intern that you can always ask to do anything through a computer or something. Very weird but probably very useful in a Jarvis-from-Iron-Man sort of way.

Yeah, that's a pretty good description of it! I'm definitely still the brains of the outfit. But it's getting closer to the "Hollywood UI" ideal where you use your computer by talking to it rather than by remembering the correct syntax of a Unix command.

I'm concerned that this tech is still very much locked in by giant corporations. Microsoft's Office integrations all seem to rely on spying on everything you do, and training costs are still too prohibitive for FOSS to be competitive. I sure hope that changes.

No argument here. I personally trust Microsoft a little more than Google, but still, I'm really hoping this tech gets democratized sooner rather than later. (I've heard Alpaca, which is small enough to run on a PC, is pretty good, but "pretty good" might not cut it.)

I've lost pretty much all respect for Yudkowsky over the years as he's progressed from writing some fun power-fantasy-for-rationalists fiction to being basically a cult leader. People seem to credit him for inventing rationality and AI safety, and to both of those I can only say "huh?". He has arguably named a few known fallacies better than people who came before him, which isn't nothing, but it's sure not "inventing rationality". And in his execrable April Fool's post he actually, truly, seriously claimed to have come up with the idea for AI safety all on his own with no inputs, as if it wasn't a well-trodden sci-fi trope dating from before he was born! Good lord.

I'm embarrassed to admit, at this point, that I donated a reasonable amount of money to MIRI in the past. Why do we spend so much of our time giving resources and attention to a "rationalist" who doesn't even practice rationalism's most basic virtues - intellectual humility and making testable predictions? And now he's threatening to be a spokesman for the AI safety crowd in the mainstream press! If that happens, there's pretty much no upside. Normies may not understand instrumental goals, orthogonality, or mesaoptimizers, but they sure do know how to ignore the frothy-mouthed madman yelling about the world ending from the street corner.

I'm perfectly willing to listen to an argument that AI safety is an important field that we are not treating seriously enough. I'm willing to listen to the argument of the people who signed the recent AI-pause letter, though I don't agree with them. But EY is at best just wasting our time with delusionally over-confident claims. I really hope rationality can outgrow (and start ignoring) him. (...am I being part of the problem by spending three paragraphs talking about him? Sigh.)

I want to be clear that this is coming from somebody who once liked his writings. I didn't worship him. I didn't learn much from him. But he has always had a fun and unique writing style.

But believe me, there's no confusion here. Capital-R Rationality may be something that crystallized around LessWrong and the Sequences, but the concepts of rationality are hardly new; we're building on a legacy, thousands of years old, of humans struggling to explain the Universe. Yudkowsky wrote some entertaining essays, some of which are insightful (and some of which are silly, particularly when he veers into fields of science he doesn't know well). You could credit him with collecting and indexing a few good ideas. But he's very bad at practicing what he preaches - Scott, for instance, is far better at actually making and testing predictions than Yudkowsky. I suppose cult leaders don't usually lower themselves to the level of scrubbing the temple floor.

As for AI Safety, no. No, no, no. There's absolutely no defense for his egotistical claim in the April Fool's post. Futurists have been discussing AI safety since at least Asimov's Three Laws. What do you think AI researchers did before him, shrug and go "hmm, I wonder if making this neural net behave is something I should study sometime"? Maybe I can trace one particular flavour of the "edifice" to his writings - superintelligence-goes-FOOM-breaks-out-of-black-box-and-builds-nanotech-in-a-bio-lab - but AI safety as a whole would still exist and look pretty much the same without him. Arguably, it would be healthier, with the many people with different intelligent perspectives not being drowned out by his singular view and stubborn insistence that he knows the unknowable future.

BTW, if you want to read a good example of pre-Yudkowsky rationality, I recommend The Demon-Haunted World. Carl Sagan did a lot to help me learn how to think clearly, in my formative years.

Yes, I'm really glad to see someone else point this out! One thing that's interesting about LLMs is that there's literally no way for them to pause and consider anything - they do the same calculations and output words at exactly the same rate no matter how easy or hard a question you ask them. If a human is shown a math puzzle on a flashcard and is forced to respond immediately, the human generally wouldn't do well either. I do like the idea of training these models to have some "private" thoughts (which the devs would still be able to see, but which wouldn't count as output) so they can mull over a tough problem, just like how my inner monologue works.

Experimenting with giving ChatGPT-4 a more structured memory is easy enough to do that individuals are trying it out: https://youtube.com/watch?v=YXQ6OKSvzfc I find his estimate of AGI-in-18-months a little optimistic, but I can't completely rule out the possibility that the "hard part" of AGI is already present in these LLMs and the remainder is just giving them a few more cognitive tools. We're already so far down the rabbit hole.

The idea of accepting election results was uncontroversial on both sides until Trump talked. The benefits of polarization.

This seems like a strange claim to me. Would you classify the two-year investigation of "Russian interference" by a Special Prosecutor as "accepting election results"? "Not My President"? Hillary - the actual losing candidate - calling Trump an illegitimate President? Sadly, the civilized norms had already been well eroded by 2020.

I am already getting tremendous value out of GPT4 in my work as a programmer. Even if the technology stops here, it will change my life. I have still never ridden in an AV. I reject your analogy, and your conclusion, completely.

Yudkowsky's ideas are repulsive because the "father of rationality" isn't applying any rationality at all. He claims absolute certainty over an unknowable domain. He makes no testable predictions. He never updates his stance based on new information (as if Yud circa 2013 already knew exactly what 2023 AI would look like, but didn't deign to tell us). Is there a single example of Yudkowsky admitting he got something wrong about AI safety (except in the thousand-Stalins sense of "things are even worse than I thought")?

In a post-April-Fool's-post world I have no idea why people still listen to this guy.

Maybe IP can be justified because it brings value by incentivizing creation?

Um, yes? This is literally the entire and only reason IP exists, so the fact that you have it as one minor side point in your post suggests you've never actually thought seriously about this. A world without IP is a world without professional entertainment, software, or (non-academic) research. Capitalism doesn't deny you the free stuff you feel you richly deserve... it's what enables that stuff to exist in the first place.

Yup, and "he" was also commonly used as a gender-neutral pronoun. But this subtle linguistic point has the unfortunate quality of looking problematic, so it attracts ignorant activists. I haven't seen the word "niggardly" used in a long time, either - I suspect that even people who know what it means self-censor, because it's just not worth attracting that kind of attention when it's low-cost to just use a different word. Thus language drifts on...

Really? Name the centuries-old historical counterpart to movies on DVD, music on CD, videogames, software suites, drug companies, ... I could go on. Sure, people used to go to live plays and concerts. Extremely rich patrons used to personally fund the top 0.1% of scientists and musicians. It was not the same.

Yeah, IP law is almost certainly not perfectly optimized for its intended function. Like so many other laws, it's a mess. It doesn't help that we allow corporations like Disney to have outsized influence on the legal process. If copyrights lasted for a flat 20 years (like patents) I think it'd still do fine at incentivizing creation. (And, more generally, if we had a political system that incentivized simple and straightforward laws, that'd be nice too...)

Good points. I don't think we really disagree, then. I happen to really enjoy entertainment that takes hundreds of people to produce (AAA movies and games), and there just wouldn't really be any way for those to exist without IP. But music and fiction aren't like that, and it would indeed be interesting if there were no limits on fanfic. (Would people still gravitate to the original author - or their descendants - to add the "canonical" imprimatur to particular stories, a la Cursed Child? Or would the "oral history" aspect win out? I wonder.)

"Long ago"? ChatGPT is 5 months old and GPT4 is 3 months old. We're not talking about a technology long past maturity, here. There's plenty of room to experiment with how to get better results via prompting.

Personally, I use GPT4 a lot for my programming work (both coding and design), and it still gets things wrong and occasionally hallucinates, but it's definitely far better than GPT3.5. Also, as mentioned above, GPT4 can often correct itself. In fact, I've had cases where it says something I don't quite understand, I ask for more details, and it freely admits that it was wrong ("apologies for the confusion"). That's not perfect, but still better than if it doubles down and continues to insist on something false.

I'm still getting the hang of it, like everyone else. But an oracle whose work I need to check is still a huge productivity boon for me. I wouldn't be surprised if the same is true in the medical industry.

Technically Bing was using it before then, but good point. It's insane how fast things are progressing.

Eh. I gave him some respect back when he was simply arguing that timelines could be short and the consequences of being wrong could be disastrous, so we should be spending more resources on alignment. This was a correct if not particularly hard argument to make (note that he certainly was not the one who invented AI Safety, despite his hallucinatory claim in "List of Lethalities"), but he did a good job popularizing it.

Then he wrote his April Fool's post and it's all been downhill from there. Now he's an utter embarrassment, and frankly I try my best not to talk about him for the same reason I'd prefer that media outlets stop naming school shooters. The less exposure he gets, the better off we all are.

BTW, as for his "conceptualization of intelligence", it went beyond the tautological "generalized reasoning power" that is, um, kind of the definition. He strongly pushed the Orthogonality Thesis (one layer of the tower of assumptions his vision of the future is based on) - the claim that intelligence and final goals vary independently, so the space of possible minds is vast and AGIs are likely to be completely alien to us, with no hope of mutual understanding. Which is at least a non-trivial claim, but it's not doing so hot in the age of LLMs.

Does anyone around him tell him (in a friendly way) to maybe start practicing some Methods of Rationality? Question a couple of his assumptions, be amenable to updating based on new evidence? Because that would also be nice.

Annoyingly, this paper references the Doomsday Argument, which is completely wrong (it does mention some of the arguments against it, but that's like mentioning the Flat Earth Hypothesis and then saying "some people disagree"). I went on a longer rant about the Doomsday Argument here if you're curious.

The central question is interesting, though. Basically, if you believe (sigh) Yudkowsky, then any civilization almost certainly turns into a Universe-devouring paperclip maximizer, taking control of everything in its future light cone. This is different than the normal Great Filter idea, which would (perhaps) destroy civilizations without propagating outwards. I was originally going to post that the Fermi paradox is thus (weak) evidence against Yuddism, because the fact that we're not dead yet means either a) civilizations are very rare, or b) Yudkowsky is wrong. So if you find evidence that civilizations should be more common, that's also evidence against Yuddism.

But on second read, I realized that I may be wrong about this if you apply the anthropic argument. If Yuddism is true, then only civilizations that are very early to develop in their region of the Universe will exist. Being in a privileged position, they'll see a Universe that is less populated than they'd expect. This means that evidence that civilizations should be more common is actually evidence FOR Yuddism.

Kind of funny that the anthropic argument flips this prediction on its head. I'm probably still getting something subtly wrong here. :)

I very much agree with his assertion in the second article that analysts often try to avoid mentioning (or even thinking about) tradeoffs in political discussions, even though that's almost always how the real world works. Being honest about tradeoffs is a good strategy for correctly comprehending the world, but not for "winning" arguments.

Somewhat related to the civil rights violations of prisoners, I remember the arguments about Guantanamo back in the War on Terror days. It was common to hear politicians and pundits - in full seriousness - make the claim that "torture doesn't work anyway." I hated the fact that, post-9/11, it was politically impossible to say "torture is against our values, so we won't do it even though this makes our anti-terror efforts less effective and costs lives." Despite the fact that (I suspect) most people would agree privately with this statement...