SnapDragon

0 followers   follows 0 users
joined 2022 October 10 20:44:11 UTC
Verified Email
User ID: 1550

No bio...

Ugh, what a ridiculous take. The ability to move a body and process senses and learn behaviour that generates food is miraculous, yes. We can't build machines that come close to this yet. It's amazing that birds can do it! And humans! And cats, dogs, pigs, mice, ants, mosquitos, and 80 million other species too. Gosh, wow, I'm so agog at the numinous wondrousness of nature.

That doesn't make it intelligence. Humans are special. Intelligence is special. Until transformers and LLMs, every single story, coherent conversation, and, yes, Advent of Code solution was the creation of a human being. Even if all development stops here, even if LLMs never get smarter and these chatbots continue to have weird failure modes for you to sneer at, something fundamental has changed in the world.

Do you think you're being super deep by redefining intelligence as "doing what birds can do"? I'd expect that from a stoner, not from a long-standing mottizen. Words MEAN things, you know. If you'd rather change your vocabulary than your mind, I don't think we have anything more to discuss.

Wow, you're really doubling down on that link to a video of a bird fishing with bread. And in your mind, this is somehow comparable to holding a complex conversation and solving Advent of Code problems. I honestly don't know what to say to that.

Really, the only metric that I need is that ChatGPT makes me more productive in my job and personal projects. If you think that's "unreasonably low", well, I hope our eventual AI Overlords manage to meet your stringent requirements. The rest of the human race won't care.

Heh, yeah, good example. I happily commit atrocities in videogames all the time. I hope there will continue to be an obvious, bright-line distinction between entities made for our amusement and entities with sentience!

I'm not putting limits on anything. The problem with the "ascension" idea isn't that it's impossible - we can't rule it out - but that every single member of the ascending civilization, unanimously, would have to stop caring about (or affecting, even by accident) the physical galaxy and the rest of the civilizations in it. Despite a lot of fun sci-fi tropes, ascension isn't some MacGuffin you build and then everybody disappears. Our modern civilization didn't stop affecting the savannah just because most of us "ascended" out of there. I consider the explanation "everybody's super powerful but also invisible, coincidentally leaving the galaxy looking indistinguishable from an uncivilized one" to be very unlikely. (Not impossible, though.)

What do you think our long-term future in the galaxy looks like? Is it really likely that our technological civilization will just poof out with no real impact? (Even the AI doom scenario involves a superintelligence that will start gobbling up the reachable Universe.) This is the argument underlying the Fermi Paradox: we have only one example of an intelligent civilization, and there seems to be little standing in the way of us spreading through and changing the galaxy in an unmissable way. Interstellar travel is quite hard, but not impossibly so. The time scale for this would be measured in millions of years, which is barely a hiccup in cosmological terms. So why didn't someone else do it first?

On a similar note, I'm very confident I'm not standing next to a nuclear explosion (probability well below 0.001%). Am I overconfident? Ok, yes, I'm being a bit cheeky - the effects of a nuclear explosion are well understood, after all. The chance that there's a "great filter" in our future that would stop us and all similar civilizations from spreading exponentially is a lot larger than 0.001%.

So, I admit this is a well-written, convincing argument. It's appreciated! But I still find it conflicts with common sense (and my own lying eyes). I can, say, imagine authorities arresting me and demanding to know my email password. I would not cooperate, and I would expect to get access to a lawyer before long. In reality there's only one way they'd get the password: torturing me. And in that case, they'd get the password immediately. It would be fast and effective. I'm still going to trust the knowledge that torture would work perfectly on me over a sociological essay, no matter how eloquent.

Really? Name the centuries-old historical counterpart to movies on DVD, music on CD, videogames, software suites, drug companies, ... I could go on. Sure, people used to go to live plays and concerts. Extremely rich patrons used to personally fund the top 0.1% of scientists and musicians. It was not the same.

I'm on Apple's AI/ML team, but I can't really go into details.

Me personally? Yes, for all the things you listed. But is that really all that surprising? We're on The Motte. The only one you listed that people here would really find controversial is CP, and while I (of course) agree that creating real CP should be illegal, sharing virtual/generated CP harms nobody and should be allowed. (This is basically the situation we're already in with hentai, which is full of hand-drawn underage porn.)

But if you want issues that do challenge my stance, I'd suggest revenge porn, doxxing or the Right To Be Forgotten. So, you're right that my "free speech maximalism" only goes so far; there's always something in this complex world that doesn't have an easy answer.

You might be interested in Greg Egan's book Permutation City, which takes this idea (he calls it "Dust Theory") and runs with it to the extreme.

Or maybe at Mr. Burns' birthday party...

The law of non-contradiction, "not both A and not A" or "¬(p ∧ ¬p)", is another first principle.

That one's pretty uncontroversial, but the more interesting one is the law of excluded middle: "either A or not A". We all learn it, but there's a school of thought (intuitionism) that this shouldn't be a basic law. And indeed there are some weeeeeeeird results in math that go away (or become less weird) if you don't allow proof by contradiction.
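To make that concrete, here's a tiny Lean 4 sketch (my own illustration, not anything from this thread; the theorem names are mine): non-contradiction has a direct constructive proof, while excluded middle only comes in through the Classical namespace, and proof by contradiction (double-negation elimination) rides along with it.

```lean
-- Illustrative sketch only. Non-contradiction is constructively provable:
theorem non_contradiction (p : Prop) : ¬(p ∧ ¬p) :=
  fun ⟨hp, hnp⟩ => hnp hp

-- Excluded middle is NOT constructively provable; in Lean it lives in
-- the Classical namespace (backed by the axiom of choice):
#check (Classical.em : ∀ p : Prop, p ∨ ¬p)

-- Double-negation elimination (i.e. proof by contradiction) follows from it:
theorem double_neg_elim (p : Prop) (h : ¬¬p) : p :=
  (Classical.em p).elim id (fun hnp => absurd hnp h)
```

Drop the Classical dependency and the last two pieces stop being available, which is exactly the intuitionist position.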

It's more of a variation of your first possibility, but RT could also be acting out of principal-agent problems, not at the behest of Hollywood executives. The explanations probably overlap. There's also the possibility that they care about their credibility every bit as much as they did in the past, but it's their credibility among tastemakers that's important, not the rabble.

Yeah, I'd be surprised if RT's review aggregation takes "marching orders" from any executives. In fact, I think RT is owned indirectly by Warner Bros., so if anything you'd expect they'd be "adjusting" Disney movies unfavorably. I like your explanation that RT's just sincerely trying to appease the Hollywood elite, rather than provide a useful signal to the masses. It fits.

I'm not sure why you'd put a low prior on the first, though. Particularly for high visibility productions, "everyone" knows to take politics into account when reading reviews. Positively weighting aligned reviews doesn't seem like an incredible step beyond that.

I knew to take that into account with the critics score, which I would usually ignore for the "woke" crap. But in the past I've generally found the audience score trustworthy. Maybe I was just naive, and it took a ridiculous outlier for me to finally notice that they have their fingers on every scale.

Technically Bing was using it before then, but good point. It's insane how fast things are progressing.

Maybe I'm missing some brilliant research out there, but my impression is we scientifically understand what "pain" actually is about as well as we understand what "consciousness" actually is. If you run a client app and it tries and fails to contact a server, is that "pain"? If you give an LLM some text that makes very little sense so it outputs gibberish, is it feeling "pain"? Seems like you could potentially draw out a spectrum of frustrated complex systems that includes silly examples like those all the way up to mosquitos, shrimp, octopuses, cattle, pigs, and humans.

It'd be nice if we could figure out a reasonable compromise for how "complex" a brain needs to be before its pain matters. It really seems like shrimp or insects should fall below that line. But it's like abortion limits - you should pick SOME value in the middle somewhere (it's ridiculous to go all the way to the extremes), but that doesn't mean it's the only correct moral choice.

I tried on Day 10 and it failed. I want to avoid publication bias, though, so I'm posting the transcript anyway. :) Note that it IS using debug output to try to figure out its error, but I think it's analyzing it incorrectly.

Then I tried it on Day 7 (adjusting the prompt slightly and letting it just use Code Interpreter on its own). It figured out what it was doing wrong on Part 1 and got it on the second try. Then it did proceed to try a bunch of different things (including some diagnostic output!) and spin and fail on Part 2 without ever finding its bug. Still, this is better than your result, and the things it was trying sure look like "debugging" to me. More evidence that it could do better with different prompting and the right environment.

EDIT: Heh, I added a bit more to the transcript, prodding ChatGPT to see if we could debug together. It produced some test cases to try, but failed pretty hilariously at analyzing the test cases manually. It weakens my argument a bit, but it's interesting enough to include anyway.

So, I gave this a bit of a try myself on Day 3, which ChatGPT failed in your test and on YouTube. While I appreciate that you framed this as a scientific experiment with unvarying prompts and strict objective rules, you're handicapping it compared to a human who has more freedom to play around. Given this, I think your conclusion that it can't debug is a bit too strong.

I wanted to give it more of the flexibility of a human programmer solving AoC, so I made it clear up front that it should brainstorm (I used the magic "think step by step" phrase) and iterate, only using me to try to submit solutions to the site. Then I followed its instructions as it tried to solve the tasks. This is subjective and still pretty awkward, and there was confusion over whether it or I should be running the code; I'm sure there's a better way to give it the proper AoC solving experience. But it was good enough for one test. :) I'd call it a partial success: it thought through possible issues and figured out the two things it was doing wrong on Day 3 Part 1, and got the correct answer on the third try (and then got Part 2 with no issues). The failure, though, is that it never seemed to realize it could use the example in the problem statement to help debug its solution (and I didn't tell it).

Anyway, the transcript's here, if you want to see ChatGPT4 troubleshooting its solution. It didn't use debug output, but it did "think" (whatever that means) about possible mistakes it might have made and alter its code to fix those mistakes, eventually getting it right. That sure seems like debugging to me.

Remember, it's actually kind of difficult to pin down GPT4's capabilities. There are two reasons it might not be using debug output like you want: a) it's incapable, or b) you're not prompting it right. LLMs are strange, fickle beasts.
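Purely as an illustration of (b), here's a hypothetical sketch of what "prompting it right" could look like through the openai Python library (pre-1.0 ChatCompletion interface). The model name, prompt wording, and setup are mine, not what was actually run in the transcripts above.

```python
# Hypothetical illustration only -- not the prompt or harness used in the
# transcripts above. Nudges GPT-4 toward explicit debugging (run on the
# worked example, print intermediate state) via the system prompt.
import openai

puzzle_text = "<paste the Advent of Code problem statement here>"

messages = [
    {
        "role": "system",
        "content": (
            "You are solving an Advent of Code puzzle. Think step by step. "
            "If your answer is rejected, add print statements, run your code "
            "on the worked example from the problem, and compare against the "
            "expected output before revising."
        ),
    },
    {"role": "user", "content": puzzle_text},
]

response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
print(response.choices[0].message["content"])
```

The point is just that the surrounding instructions and environment matter a lot; the same model can look "incapable" under one setup and fine under another.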

Interesting. I admit ignorance here - I just assumed any UK-based newspaper would be very far to the left. (The video itself still seemed pretty biased to me.) Thanks for the correction.

Eh, I'm sure it'll be fine. Nintendo execs are famously pretty chill.

I'm in the same position; but I suspect I'll end up giving WSL a try instead. (I've used Cygwin for decades.)

In fact, one line of argument for theism is that math is unreasonably useful here.

Um, what? It really is "heads I win, tails you lose" with theism, isn't it? I guarantee no ancient theologian was saying "I sure hope that all of Creation, including our own biology and brains, turns out to be describable by simple mathematical rules; that would REALLY cement my belief in God, unlike all this ineffability nonsense."

Absolutely. And I'm totally being a pedant about a policy I'm in complete agreement with. But this nitpicking is still valuable - if we as a society understand that we're banning torture for very good ideological reasons, then we won't be so tempted to backslide the next time a crisis (like 9/11) arises and people start noticing that (arguably) torture might help us track down more terrorists. Like how some people forget that free speech ideals are important beyond simply making sure that we don't violate the 1st amendment.

Well yeah, I don't disagree with any of it either, so I don't really see what your point is?

But ... if you agree there are scenarios where you'd never get a particular piece of information without torture, then I don't understand how you can claim it's "inherently useless"...? I'm confused what we're even arguing about now.

Why should they notice? Institutions do immoral and ineffective things literally all the time for centuries on end. And we're talking about the CIA, the kings of spending money on absolute bullshit that just sounds cool to some dudes in a room, and that's not saying nothing given the competition for that title in USG.

A fair point! I'm never going to argue with "government is incompetent" being an answer. :) But still, agencies using it is evidence that points in the direction of torture being useful - incompetence is just a (very plausible) explanation for why that evidence isn't conclusive.

I'm glad that, at the start, you (correctly) emphasized that we're talking about intelligence gathering. So please don't fall back to the motte of "I only meant that confessions couldn't be trusted", which you're threatening to do by bringing up the judicial system and "people admitting to things". Some posters did that in the last argument, too. I don't know how many times I can repeat that, duh, torture-extracted confessions aren't legitimate. But confessions and intelligence gathering are completely different things.

Torture being immoral is a fully sufficient explanation for it being purged from our systems. So your argument is worse than useless when it comes to effectiveness - because it actually raises the question of why Western intelligence agencies were still waterboarding people in the 2000s. Why would they keep doing something that's both immoral and ineffective? Shouldn't they have noticed?

When you have a prisoner who knows something important, there are lots of ways of applying pressure. Sometimes you can get by with compassion, negotiation, and so on, which is great. But the horrible fact is that pain has always been the most effective way to get someone to do what you want. There will be some people who will never take a deal, who will never repent, but will still break under torture and give you the information you want. Yes, if you have the wrong person they'll make something up. Even if you have the right person but they're holding out, they might feed you false information (which they might do in all other scenarios, too). Torture is a tool in your arsenal that may be the only way to produce that one address or name or password that you never would have gotten otherwise, but you'll still have to apply the other tools at your disposal too.

Sigh. The above paragraph is obvious and not insightful, and I feel silly having to spell it out. But hey, in some sense it's a good thing that there are people so sheltered that they can pretend pain doesn't work to get evil people what they want. It points to how nice a civilization we've built for ourselves, how absent cruelty ("barbarism", as you put it) is from most people's day-to-day existence.