Do you really think that? From what I’ve read on lesswrong, posters actually put forth arguments.
I don’t think Eliezer is a conspiracy theorist …
You caught me! My primary aim was not to persuade Yud, but to talk with y'all. And I guessed (rightly or wrongly) that other people around Yud have been telling him the same thing for years.
I just got done listening to Eliezer Yudkowsky on EconTalk (https://www.econtalk.org/eliezer-yudkowsky-on-the-dangers-of-ai/).
I say this as someone who's mostly convinced of Big Yud's doomerism: Good lord, what a train wreck of a conversation. I'll save you the bother of listening to it -- Russ Roberts starts by asking a fairly softball question of (paraphrasing) "Why do you think the AIs will kill all of humanity?" And Yudkowsky responds by asking Roberts "Explain why you think they won't, and I'll poke your argument until it falls apart." Russ didn't really give strong arguments, and the rest of the interview repeated this pattern a couple times. THIS IS NOT THE WAY HUMANS HAVE CONVERSATIONS! Your goal was not to logically demolish Russ Roberts' faulty thinking, but to use Roberts as a sounding board to get your ideas to his huge audience, and you completely failed. Roberts wasn't convinced by the end, and I'm sure EY came off as a crank to anyone who was new to him.
I hope EY lurks here, or maybe someone close to him does. Here's my advice: if you want to convince people who are not already steeped in your philosophy you need to have a short explanation of your thesis that you can rattle off in about 5 minutes that doesn't use any jargon the median congresscritter doesn't already know. You should workshop it on people who don't know who you are, don't know any math or computer programming and who haven't read the Sequences, and when the next podcast host asks you why AIs will kill us all, you should be able to give a tight, logical-ish argument that gets the conversation going in a way that an audience can find interesting. 5 minutes can't cover everything so different people will poke and prod your argument in various ways, and that's when you fill in the gaps and poke holes in their thinking, something you did to great effect with Dwarkesh Patel (https://youtube.com/watch?v=41SUp-TRVlg&pp=ygUJeXVka293c2tp). That was a much better interview, mostly because Patel came in with much more knowledge and asked much better questions. I know you're probably tired of going over the same points ad nauseam, but every host will have audience members who've never heard of you or your jargon, and you have about 5 minutes to hold their interest or they'll press "next".
Is the alternative dating women who are morbidly obese?
While touching grass is good for everybody, I think this is probably a step in the wrong direction for you. Your weakness is having relationships with other humans. This plan is you running away from your fears. You're rationalizing literally going away to live by yourself in the woods.
This is only an explanation if you believe corporations weren’t doing their best to maximize profits before. Do any of these soaring profit figures factor in inflation?
I don't know.
This guy (https://news.ycombinator.com/item?id=35029766) claims to get about 4 words per second out of an A100 running the 65B model. That's a reasonable reading pace. But I'm sure there's going to be all sorts of applications for slower output of these things that no one has yet dreamt of. One thing that makes Llama interesting (in addition to being locally-runnable) is that Meta appears to have teased more usefulness per parameter -- it's comparable with competing models that have 3-4 times as many parameters. And now there's supposedly a non-public Llama that's 546 billion parameters. (I think all of these parameter numbers are coming from what can fit in a single A100 or a pod of 8x A100s). Sadly, I think there's already starting to be some significant overlap between the cognitive capabilities of the smartest language models and the least capable deciles of humans. The next ten years are going to be a wild ride for the employment landscape. For reference, vast.ai will rent you an A100 for $1.50/hr.
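To put those last two figures together, here's a quick back-of-envelope sketch (my arithmetic, using the ~4 words/second and $1.50/hr numbers above; real throughput and pricing will vary):

```python
# Rough cost of renting an A100 to run the 65B model,
# assuming ~4 words/second output and $1.50/hour rental.
WORDS_PER_SECOND = 4
DOLLARS_PER_HOUR = 1.50

def cost_for_words(n_words):
    """Dollar cost to generate n_words at the assumed rate."""
    hours = n_words / WORDS_PER_SECOND / 3600
    return hours * DOLLARS_PER_HOUR

# A 60,000-word novel's worth of output:
print(f"${cost_for_words(60_000):.2f}")  # $6.25
```

About six bucks for a novel-length dump of text, if those numbers hold up.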
Sexual attraction to post-pubescent, sexually-mature minors is not a disorder and doesn’t need a name like ‘ephebophilia’. Up until extremely recently, it’s just what people called ‘normal’. No one in a professional setting would dare talk about it, but for evolutionary reasons, I wager that kind of attraction is dramatically more common than its absence.
Just don’t stare or be creepy, and no one will be able to tell whom you’re attracted to. On the off chance someone can tell, big whoop: every other male is also attracted. It’s not actually illegal or against any school policy to experience attraction, so long as you don’t act on it.
So yea, don’t let this issue keep you from being a teacher if that’s the best move for you. Just lie about whom you’re attracted to like everyone else. Judging by your username, are you autistic? This is the kind of unwritten, unspoken rule I’d expect a person on the spectrum to have an issue with.
Not really. `0.0.0.0 facebook.com` in your hosts file will clear that problem right up.
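For reference, the hosts-file format puts the IP address first, then the hostname (the file lives at `/etc/hosts` on Linux/macOS, `C:\Windows\System32\drivers\etc\hosts` on Windows; you'll likely want the `www` subdomain blocked too):

```
# /etc/hosts -- IP first, then hostname
0.0.0.0 facebook.com
0.0.0.0 www.facebook.com
```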
I'm sorry, I wasn't clear. The aliens we're about to meet are the AGIs.
Yea, it's definitely been a crazy couple of months.
We now have open source LLMs running on home computers approaching parity with the big corporate datacenter AIs.
Sort of. I've been playing around with the publicly available LLMs, including the largest one, the 65 billion-parameter Llama model from Meta, and I find it's somewhere around the quality of GPT-3, nowhere near the quality of GPT-4. I'm also running it quantized to 4 bits on my CPU rather than on my GPU, so it's dogshit slow -- a word every ~2 seconds. Just enough to slake my curiosity. To run it at a conversational speed, you need a GPU with 40GB of VRAM, so you're probably looking at dropping $4,500 minimum on just the GPU, and maybe closer to $15,000 -- not exactly available to the masses. Maybe in 4 years. Moore's law is still kicking on the GPU side.
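The 40GB figure falls out of simple weight-size arithmetic (my own back-of-envelope, ignoring activations and KV cache, which is why you want headroom above the raw weight size):

```python
# Memory needed just to hold a 65B-parameter model's weights.
params = 65e9

for bits, label in [(16, "fp16"), (4, "4-bit quantized")]:
    gigabytes = params * bits / 8 / 1e9  # bits -> bytes -> GB
    print(f"{label}: {gigabytes:.1f} GB")
# fp16: 130.0 GB (needs a multi-GPU rig)
# 4-bit quantized: 32.5 GB (squeaks into a single 40GB A100)
```

So 4-bit quantization is what makes single-GPU (or slow CPU) inference possible at all for a model this size.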
I'm not that impressed with any of the LLMs. I know it's a controversial take around here, but I don't think they're doing any reasoning at all. The reasoning is in the humans who wrote their training data, and the LLMs are doing a great job of predicting the text. You'll see it for yourself if you just play around with one enough until you see it screw up in ways that demonstrate it has no idea what it's talking about. Adding parameters and training data helps push back the boundaries of what it can do, but the fundamental issue remains. The lights are on, but nobody's home. There need to be more algorithmic insights before we get AGI.
I do, however, think Big Yud is right that eventually we will get to AGI, and unless we're extraordinarily conscientious, it'll kill us all. His arguments aren't rigorous to the point of mathematical certainty, but it seems like that's the default path unless some big, as-of-yet-unpredicted event intervenes. The future is full of such things, of course. But I don't share anybody's concern about LLMs or transformers. If anything, all the recent hype about LLMs' shockingly good performance improves humanity's odds. But we're actually going to need worldwide agreements, and to risk shedding blood and bombing defectors' data centers, if we want humanity to survive our first contact with an alien species.
I say this as someone who probably agrees with you: This site is for a deeper analysis than "the opposing side is wrong". Tell us why modern gender theory is nonsense in a way that has even the thinnest sliver of a chance of persuading someone.
Yandex is censored regarding internal Russian politics, but for everything else, it's less censorious than the alternatives. This is a tip I picked up from Gwern's big page of tricks: https://gwern.net/search. Quite long, but I cannot overstress how worthwhile of a read it is for anybody who wants to do serious research online.
New Coke did its job, which was to mask the transition from cane sugar to high fructose corn syrup ... at least that's the popular conspiracy theory. https://www.snopes.com/fact-check/new-coke-fiasco/
But doesn't that money have to be spent at some point in order for the owner to derive benefit? It's taxed now or later. In the long run, it should be a wash.
I’m no doctor, but a brief search online shows that doctors routinely prescribe mifepristone for miscarriages. Maybe there are alternatives that are just as good, but I presume doctors aren’t using their second choice.
You need to bring more evidence to this disagreement. As it is, this comment is bad.
Reported for what, exactly? @Goodguy is not advocating violence, just expressing that he empathizes with (understands) the motivations of school shooters. Understanding, especially regarding unpopular opinions, is why many of us are here. If your goal is to reduce school shootings, you should be spending even more time trying to understand this failure mode of young men.
I was asked to janny a comment where all of the ancestor comments in the context were “filtered”. The comment was particularly hard for me to understand without context. Maybe let the jannies see the context of what they’re being asked to review?
The Motte does not need salacious gossip about real people or accusations without the thinnest shred of evidence. Though, I honestly can't figure out if your post breaks any rules.
Yea, that sentence was a head-scratcher.
Lex is also a fucking moron throughout the whole conversation.
Yea, this is par for the course with a Lex Fridman podcast. Practically everything he says is a non-sequitur, shallow, or saccharinely optimistic (love, beauty, barf). He gets some absolutely GREAT guests (how?), so I listen occasionally, but it's still a coin flip whether the whole 3 hours is a waste of breath. (Mostly it comes down to whether the guest has the strength to guide the conversation to whatever they wanted to talk about.)
I'm waiting for the transformer model that can cut Lex's voice from his podcast.
Ah, I see you've watched the movie.
OK, fair enough, you're right. You're doing good work, and this place is better for you being a mod.
I don't know ... From where I'm standing, @Goodguy was, in fact, civil and was unfairly modded. If you're not religious, religious actions sure look insane. He could have written an essay on why, but the theism-vs-atheism debates have all been done to death in the naughty aughties and should stay there. But it at least deserves a mention when people talk about mental health/sanity.
Maybe we’re the first (in our past light cone)? After all, somebody has to be first. It’s theorized that earlier solar systems didn’t have enough heavy elements to support the chemistry of life.
Anyways, you should read Robin Hanson’s paper on grabby aliens.