SnapDragon

0 followers   follows 0 users
joined 2022 October 10 20:44:11 UTC
Verified Email
User ID: 1550

No bio...


					


Maybe I'm missing some brilliant research out there, but my impression is we scientifically understand what "pain" actually is about as well as we understand what "consciousness" actually is. If you run a client app and it tries and fails to contact a server, is that "pain"? If you give an LLM some text that makes very little sense so it outputs gibberish, is it feeling "pain"? Seems like you could potentially draw out a spectrum of frustrated complex systems that includes silly examples like those all the way up to mosquitos, shrimp, octopuses, cattle, pigs, and humans.

It'd be nice if we could figure out a reasonable compromise for how "complex" a brain needs to be before its pain matters. It really seems like shrimp or insects should fall below that line. But it's like abortion limits - you should pick SOME value in the middle somewhere (it's ridiculous to go all the way to the extremes), but that doesn't mean it's the only correct moral choice.

Uh, you might be confusing income with personal wealth, or you have very strange standards. Having $1.6M doesn't make you particularly rich. Earning $1.6M per year definitely does. Unless you just think that schmoes like George W. Bush (net worth of ~$40M) aren't "rich or elite in a meaningful way".

Yeah, Keanu Reeves (John Wick) is 58, Vin Diesel (Fast X) is 55 and Tom Cruise (MI) is 60. These are fun action franchises, but where are the fun action franchises with up-and-comers who are 20-30? I sure hope Ezra Miller isn't representative of the future of Hollywood "stars"...

Biology and physics are old sciences compared to climate science. And the list of amazing things we've done with biology and physics over the last 200 years is insanely long. I guess you're saying that we should give climate science the same level of veneration, even without actual results and useful predictions, because it (ostensibly) uses the same processes. But even if you pretend that climate science is conducted with the same level of impartial truth-seeking - despite the incredible political pressure behind it - that's still missing the point that science is messy and often gets things wrong. Even in biology (e.g. Lamarckism) or physics (e.g. the aether). It takes hundreds of repeated experiments and validated predictions before a true "consensus" emerges (if even then). Gathering together a consensus and skipping that first step is missing the point.

And remember, skepticism is the default position of science. It's not abnormal. Heck, we had people excitedly testing the EmDrive a few years back, which would violate conservation of momentum! We didn't collectively say "excommunicate the Conservation of Momentum Deniers!"

Regardless, I'm not saying that climate science or the models are entirely useless. Like you said, the greenhouse effect itself is pretty simple and well-understood (though it only accounts for a small portion of the warming that models predict). There's good reason to believe warming will happen. Much less reason to believe it'll be catastrophic, but that's a different topic!

So, I don't know how pleasing you'll find this answer, but the burden of proof is on the models to show their efficacy. A lot of the things you mentioned were very difficult things to do, but we know they work because we see that they work. You don't have to argue about whether Stockfish's chess model captures Truth with a capital T; you can just play 20 games with it, lose all 20, and see. (And of course plenty of things look difficult and ARE still difficult - we don't have cities on the moon yet!)

So, if we had a climate model that everyone could just rely on because its outputs were detailed and verifiably, reliably true, then sure, "this looks like it's a hard thing to do" wouldn't hold much weight. A property of good models is that it should be trivial for them to distinguish themselves from something making lucky guesses. But as far as I know, we don't have this. Instead, we use models to make 50-year predictions for a single hard-to-measure variable (global mean surface temperature) and then 5 years down the line we observe that we're still mostly within predicted error bars. This is not proof that the model represents anything close to Truth.
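The "distinguish itself from lucky guesses" idea has a standard form in forecast verification: a skill score, which compares a model's error against a trivial baseline forecast. A genuinely good model should beat the baseline by an obvious margin, not hover near zero skill. A minimal sketch, with entirely made-up numbers (these are illustrative values, not real climate data):

```python
# Hypothetical illustration: compare a model forecast against a trivial
# "persistence" baseline (predict no change) using a skill score.

def rmse(pred, actual):
    """Root-mean-square error between a forecast and observations."""
    return (sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)) ** 0.5

def skill_score(model_pred, baseline_pred, actual):
    """1.0 = perfect forecast; 0.0 = no better than the trivial baseline."""
    return 1.0 - rmse(model_pred, actual) / rmse(baseline_pred, actual)

# Made-up temperature series for illustration only:
actual   = [14.2, 14.5, 14.4, 14.8, 15.0]
baseline = [14.0] * 5            # persistence: forecast "no change"
model    = [14.1, 14.4, 14.5, 14.7, 15.1]

print(round(skill_score(model, baseline, actual), 2))  # prints 0.85
```

The point of framing it this way: a model's headline claim should be a large, easily checkable skill margin over a dumb baseline, not "still inside wide error bars after 5 of 50 years."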

Now, I don't follow this too closely any more, and maybe there really is some great model that has many different and detailed outputs, with mean temperature predictions that are fairly accurate for different regions of the Earth and parts of the atmosphere and sea, and that properly predicts changes in cloud cover and albedo and humidity and ocean currents and etc. etc. If somebody had formally published accurate predictions for many of these things (NOT just backfitting to already-known data), then I'd believe we feeble humans actually had a good handle on the beast that is climate science. But I suspect this hasn't happened, since climate activists would be shouting it from the rooftops if it had.

Yeah, this is the most shocking stuff The Telegraph (obviously a very biased source) could come up with? The audio they spliced in does sound very panicked, but it doesn't match with much of what's happening in the video. I note that nothing was on fire, and the only thing approaching a weapon that any of the rioters used in that footage was a hockey stick (not clear what they were hitting with it, hopefully not a person). Decidedly NOT what you could say about footage of the BLM riots.

EDIT: I mean, I do agree that it wasn't "peaceful and polite". There was clearly anger, and some people went too far.

This show is absolutely one of the greatest things the BBC ever created. But it's 40 years old, and I often wonder where the next generation's Yes, Minister is. I don't watch a lot of TV (I've seen some of The West Wing, none of Veep or House of Cards), but as far as I know no modern show is worthy of claiming its mantle. Why? Is this the sort of show that can only come from a no-longer-existent world of low BBC budgets, niche high-brow appeal, and writers' willingness to skewer everyone's sacred cow rather than push a one-sided agenda?

Yeah, there's a very relevant xkcd. There are thousands of times more cameras at hand to the general public than there were 50 years ago. If 9/11 happened today we'd have hundreds of videos of the FIRST plane impact - which happened with only seconds of warning - instead of just one. Only 12 years later, there was a huge amount of footage of the Chelyabinsk meteor. Even tsunamis - a relatively more common event with more warning - hadn't really been captured on video much before Japan's in 2011.

Real phenomena, even rare ones, get easier and easier to find footage of as technology improves. "Aliens flitting around the skies in spaceships" does not fit this profile at all.

Cool, cool. So, the obvious follow-up question is, can we just keep those critical federal employees, and drop everyone else? We might even survive firing the seven critical workers who were kept off furlough to keep people away from the Washington Monument.

I'm being a little facetious. You have a point, of course - lots of government services seem extraneous right up until the point where you (or someone else in a worse situation) desperately need them. It would be great if there were an option somewhere between 0% and 100% of our current government, where the first 10% to go isn't the part calculated to maximize spite.

Sorry, it sounds like you want some easy slam-dunk argument against some sort of cartoonish capital-L Libertarian, but that's not who you're speaking to. :) I don't want NO government and NO regulations - of course some regulations are good. But that says nothing about whether we have TOO MUCH government and TOO MUCH regulation right now. Most of the important obviously good stuff has been in the system for decades (if not centuries), because it's, well, important. And even if we kicked legislators out for 51 weeks out of every 52, the important stuff would still pass because it's, well, important. I happen to believe that most of what our modern legislators do IS net-negative, and I'm afraid you can't just hand-wave that away with a strawman argument.

As for YIMBYs, bless your heart Charlie Brown, you keep trying to kick that football. Surely one day they'll win! You yourself linked an article about the dire straits we're in. "Don't try to stop or slow down the government, we need it to fix all the problems caused by the last 50 years of government!"

Eh. I gave him some respect back when he was simply arguing that timelines could be short and the consequences of being wrong could be disastrous, so we should be spending more resources on alignment. This was a correct if not particularly hard argument to make (note that he certainly was not the one who invented AI Safety, despite his hallucinatory claim in "List of Lethalities"), but he did a good job popularizing it.

Then he wrote his April Fool's post and it's all been downhill from here. Now he's an utter embarrassment, and frankly I try my best not to talk about him for the same reason I'd prefer that media outlets stop naming school shooters. The less exposure he gets, the better off we all are.

BTW, as for his "conceptualization of intelligence", it went beyond the tautological "generalized reasoning power" that is, um, kind of the definition. He strongly pushed the Orthogonality Thesis (one layer of the tower of assumptions his vision of the future is based around), along with the claim that the space of possible intelligences is vast and AGIs are likely to be completely alien to us, with no hope of mutual understanding. Which is at least a non-trivial claim, but is not doing so hot in the age of LLMs.

Ah, our poor silly ancestors... if only they'd known the modern trick of saying they were keeping the public "safe" from "misinformation".

Maybe shot 5 times? Or maybe 32 times? I suppose there's not much difference between the two.

I do appreciate what you're saying here. I think most people here are just used to the ridiculous media caricatures of Jan. 6, and lumping you into the same bag. I'm not a fan of Trump, but still I could easily imagine myself in the shoes of some of the random people in that crowd. They came for a protest, obviously, not planning to overthrow Congress and impose Trump as El Presidente. Then all of a sudden, they're in the Capitol building, probably having no idea why except that's where the amorphous crowd went. They shout a bit, take a few photos, and go home, then find out that they're now on a watch list and barred from air travel and at serious risk of prosecution.

Oh, and note that one of them was literally shot and killed. The media described this (and four people dying from health issues) as "a protest that led to five deaths." Which is about as honest as reporting that George Floyd "committed a crime at a convenience store that led to one death".

This isn't how we should treat protestors, left or right. You're allowed to protest! And to be clear, the peaceful BLM protestors should also not face any consequences - it's not their fault some opportunists used the protests (and media cover) as a convenient excuse to attack people, set fires, and loot stores.

Then I tried it on Day 7 (adjusting the prompt slightly and letting it just use Code Interpreter on its own). It figured out what it was doing wrong on Part 1 and got it on the second try. Then it did proceed to try a bunch of different things (including some diagnostic output!) and spin and fail on Part 2 without ever finding its bug. Still, this is better than your result, and the things it was trying sure look like "debugging" to me. More evidence that it could do better with different prompting and the right environment.

EDIT: Heh, I added a bit more to the transcript, prodding ChatGPT to see if we could debug together. It produced some test cases to try, but failed pretty hilariously at analyzing the test cases manually. It weakens my argument a bit, but it's interesting enough to include anyway.

So, I gave this a bit of a try myself on Day 3, which ChatGPT failed in your test and on Youtube. While I appreciate that you framed this as a scientific experiment with unvarying prompts and strict objective rules, you're handicapping it compared to a human who has more freedom to play around. Given this, I think your conclusions that it can't debug are a bit too strong.

I wanted to give it more of the flexibility of a human programmer solving AoC, so I made it clear up front that it should brainstorm (I used the magic "think step by step" phrase) and iterate, only using me to try to submit solutions to the site. Then I followed its instructions as it tried to solve the tasks. This is subjective and still pretty awkward, and there was confusion over whether it or I should be running the code; I'm sure there's a better way to give it the proper AoC solving experience. But it was good enough for one test. :) I'd call it a partial success: it thought through possible issues and figured out the two things it was doing wrong on Day 3 Part 1, and got the correct answer on the third try (and then got Part 2 with no issues). The failure, though, is that it never seemed to realize it could use the example in the problem statement to help debug its solution (and I didn't tell it).

Anyway, the transcript's here, if you want to see ChatGPT4 troubleshooting its solution. It didn't use debug output, but it did "think" (whatever that means) about possible mistakes it might have made and alter its code to fix those mistakes, eventually getting it right. That sure seems like debugging to me.

Remember, it's actually kind of difficult to pin down GPT4's capabilities. There are two reasons it might not be using debug output like you want: a) it's incapable, or b) you're not prompting it right. LLMs are strange, fickle beasts.

Yudkowsky's ideas are repulsive because the "father of rationality" isn't applying any rationality at all. He claims absolute certainty over an unknowable domain. He makes no testable predictions. He never updates his stance based on new information (as if Yud circa 2013 already knew exactly what 2023 AI would look like, but didn't deign to tell us). Is there a single example of Yudkowsky admitting he got something wrong about AI safety (except in the thousand-Stalins sense of "things are even worse than I thought")?

In a post-April-Fool's-post world I have no idea why people still listen to this guy.

Well, no... "costs" and "what consumers are willing to pay" are both important factors that go into the price. If the manufacturer's costs go up, then the equilibrium price at which profits are maximized goes up too (although the manufacturer would make less absolute profit overall). That's the real misconception that I think you're pointing at: many people, including the OP, think that prices are completely determined by the seller. In reality, sellers are already maximally greedy, so they want to find this equilibrium price point that maximizes profits. This makes price a signal that they're measuring, not something that they directly control.

Minimum wage debates tend to sadden me, because there's always somebody saying "McDonald's can just compensate by charging $1 more for a burger", making this silly mistake. As if McDonald's is just leaving all that extra money on the table, until it's forced to collect it to pay wages...
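The equilibrium logic above can be made concrete with a toy model. This is a deliberately simplified sketch with made-up numbers (the linear demand curve is hypothetical): profit is (price − cost) × quantity sold, and a "maximally greedy" seller is already charging whatever price maximizes that.

```python
# Toy model with made-up numbers: linear demand q(p) = 100 - 2p.
# The seller's profit is (p - cost) * q(p); a profit-maximizing seller
# is already charging the price that maximizes this.

def quantity(p):
    """Units sold at price p (hypothetical demand curve)."""
    return max(0.0, 100 - 2 * p)

def profit(p, cost):
    return (p - cost) * quantity(p)

def best_price(cost):
    """Search a price grid for the profit-maximizing price."""
    grid = [i / 10 for i in range(0, 501)]  # $0.00 .. $50.00
    return max(grid, key=lambda p: profit(p, cost))

p1 = best_price(cost=10)  # -> 30.0
p2 = best_price(cost=14)  # -> 32.0: higher costs push the optimum up...
# ...but the seller still ends up worse off overall; it can't just
# "charge more" to fully recoup the cost increase, because it was
# already at the profit-maximizing price before:
assert p2 > p1
assert profit(p2, cost=14) < profit(p1, cost=10)
```

So a cost increase (like a wage hike) does raise prices some, but it also genuinely eats into profit - there was no untapped "$1 more per burger" sitting on the table.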

I know very little about the topic, but isn't there a fourth possibility: that getting a good absolute ranking in the race is what motivates people to try really hard? A woman in the race you described could kill herself training and still not crack the top 50, which might be a disincentive. If this is true, then having separate events for men and women (or, at least, separate rankings) might result in more serious female competitors.

The idea of accepting election results was uncontroversial on both sides until Trump talked. The benefits of polarization.

This seems like a strange claim to me. Would you classify the two-year investigation of "Russian interference" by a Special Prosecutor as "accepting election results"? "Not My President"? Hillary - the actual losing candidate - calling Trump an illegitimate President? Sadly, the civilized norms had already been well eroded by 2020.

Maybe IP can be justified because it brings value by incentivizing creation?

Um, yes? This is literally the entire and only reason IP exists, so the fact that you have it as one minor side point in your post suggests you've never actually thought seriously about this. A world without IP is a world without professional entertainment, software, or (non-academic) research. Capitalism doesn't deny you the free stuff you feel you richly deserve... it enables its existence in the first place.

I've lost pretty much all respect for Yudkowsky over the years as he's progressed from writing some fun power-fantasy-for-rationalists fiction to being basically a cult leader. People seem to credit him for inventing rationality and AI safety, and to both of those I can only say "huh?". He has arguably named a few known fallacies better than people who came before him, which isn't nothing, but it's sure not "inventing rationality". And in his execrable April Fool's post he actually, truly, seriously claimed to have come up with the idea for AI safety all on his own with no inputs, as if it wasn't a well-trodden sci-fi trope dating from before he was born! Good lord.

I'm embarrassed to admit, at this point, that I donated a reasonable amount of money to MIRI in the past. Why do we spend so much of our time giving resources and attention to a "rationalist" who doesn't even practice rationalism's most basic virtues - intellectual humility and making testable predictions? And now he's threatening to be a spokesman for the AI safety crowd in the mainstream press! If that happens, there's pretty much no upside. Normies may not understand instrumental goals, orthogonality, or mesaoptimizers, but they sure do know how to ignore the frothy-mouthed madman yelling about the world ending from the street corner.

I'm perfectly willing to listen to an argument that AI safety is an important field that we are not treating seriously enough. I'm willing to listen to the argument of the people who signed the recent AI-pause letter, though I don't agree with them. But EY is at best just wasting our time with delusionally over-confident claims. I really hope rationality can outgrow (and start ignoring) him. (...am I being part of the problem by spending three paragraphs talking about him? Sigh.)

Part of the problem is that the American age of consent is a bit ludicrous - by the time you're 18 you've already spent a third of your life sexually aware, and most people lose their virginity long before then. So it's very important to clarify whether one is talking about a) actual rape of prepubescent children, or b) mutually consensual sexual encounters that are biologically normal, legal in most of the world, and just happen to be called "statutory rape" in America.

I find it particularly concerning that progressives hold the position that teens are capable of deciding they're trans (complete with devastatingly life-altering physical interventions) but not capable of deciding they want sex (which is a hell of a lot safer, done responsibly). This just seems incoherent.

So, I guess your argument is that it doesn't feel icky because you claim he's lying when he says he's doing the icky thing, and his hidden motivation is more practical (and, well, moral)? That's still beside the point - the fact that Dems are completely fine with announcing a racist appointment is the problem, not the 4D chess Newsom might be playing.

Also, I actually do think Newsom would have chosen somebody completely unsuitable, with the right characteristics, if he'd had to. We've seen a string of skin-colour-and-genital-based appointments already from the Dems, from Karine Jean-Pierre to Ketanji Brown Jackson to Kamala Harris herself. I'm sure there are more, but I don't pay that much attention. It would be coincidental if all these people, selected from a favoured 6% of the population, really were the best choices. It really does seem like this is just what you have to do to play ball on the Democrat side.

Ugh, what a ridiculous take. The ability to move a body and process senses and learn behaviour that generates food is miraculous, yes. We can't build machines that come close to this yet. It's amazing that birds can do it! And humans! And cats, dogs, pigs, mice, ants, mosquitos, and 80 million other species too. Gosh, wow, I'm so agog at the numinous wondrousness of nature.

That doesn't make it intelligence. Humans are special. Intelligence is special. Until transformers and LLMs, every single story, coherent conversation, and, yes, Advent of Code solution was the creation of a human being. Even if all development stops here, even if LLMs never get smarter and these chatbots continue to have weird failure modes for you to sneer at, something fundamental has changed in the world.

Do you think you're being super deep by redefining intelligence as "doing what birds can do?" I'd expect that from a stoner, not from a long-standing mottizen. Words MEAN things, you know. If you'd rather change your vocabulary than your mind, I don't think we have anything more to discuss.