SnapDragon

0 followers   follows 0 users   joined 2022 October 10 20:44:11 UTC

User ID: 1550

Verified Email

No bio...

So, I admit this is a well-written, convincing argument. It's appreciated! But I still find it conflicts with common sense (and my own lying eyes). I can, say, imagine authorities arresting me and demanding to know my email password. I would not cooperate, and I would expect to be able to get access to a lawyer before long. In reality there's only one way they'd get the password: torturing me. And in that case, they'd get the password immediately. It would be fast and effective. I'm still going to trust the knowledge that torture would work perfectly on me over a sociological essay, no matter how eloquent.

Ugh, what a ridiculous take. The ability to move a body and process senses and learn behaviour that generates food is miraculous, yes. We can't build machines that come close to this yet. It's amazing that birds can do it! And humans! And cats, dogs, pigs, mice, ants, mosquitos, and 80 million other species too. Gosh, wow, I'm so agog at the numinous wondrousness of nature.

That doesn't make it intelligence. Humans are special. Intelligence is special. Until transformers and LLMs, every single story, coherent conversation, and, yes, Advent of Code solution was the creation of a human being. Even if all development stops here, even if LLMs never get smarter and these chatbots continue to have weird failure modes for you to sneer at, something fundamental has changed in the world.

Do you think you're being super deep by redefining intelligence as "doing what birds can do?" I'd expect that from a stoner, not from a long-standing mottizen. Words MEAN things, you know. If you'd rather change your vocabulary than your mind, I don't think we have anything more to discuss.

Wow, you're really doubling down on that link to a video of a bird fishing with bread. And in your mind, this is somehow comparable to holding a complex conversation and solving Advent of Code problems. I honestly don't know what to say to that.

Really, the only metric that I need is that ChatGPT makes me more productive in my job and personal projects. If you think that's "unreasonably low", well, I hope that our eventual AI Overlords manage to meet your stringent requirements. The rest of the human race won't care.

> In fact, one line of argument for theism is that math is unreasonably useful here.

Um, what? It really is "heads I win, tails you lose" with theism, isn't it? I guarantee no ancient theologian was saying "I sure hope that all of Creation, including our own biology and brains, turns out to be describable by simple mathematical rules; that would REALLY cement my belief in God, unlike all this ineffability nonsense."

Maybe I'm missing some brilliant research out there, but my impression is we scientifically understand what "pain" actually is about as well as we understand what "consciousness" actually is. If you run a client app and it tries and fails to contact a server, is that "pain"? If you give an LLM some text that makes very little sense so it outputs gibberish, is it feeling "pain"? Seems like you could potentially draw out a spectrum of frustrated complex systems that includes silly examples like those all the way up to mosquitos, shrimp, octopuses, cattle, pigs, and humans.

It'd be nice if we could figure out a reasonable compromise for how "complex" a brain needs to be before its pain matters. It really seems like shrimp or insects should fall below that line. But it's like abortion limits - you should pick SOME value in the middle somewhere (it's ridiculous to go all the way to the extremes), but that doesn't mean it's the only correct moral choice.

Then I tried it on Day 7 (adjusting the prompt slightly and letting it just use Code Interpreter on its own). It figured out what it was doing wrong on Part 1 and got it on the second try. Then it did proceed to try a bunch of different things (including some diagnostic output!) and spin and fail on Part 2 without ever finding its bug. Still, this is better than your result, and the things it was trying sure look like "debugging" to me. More evidence that it could do better with different prompting and the right environment.

EDIT: Heh, I added a bit more to the transcript, prodding ChatGPT to see if we could debug together. It produced some test cases to try, but failed pretty hilariously at analyzing the test cases manually. It weakens my argument a bit, but it's interesting enough to include anyway.

So, I gave this a bit of a try myself on Day 3, which ChatGPT failed in your test and on YouTube. While I appreciate that you framed this as a scientific experiment with unvarying prompts and strict objective rules, you're handicapping it compared to a human, who has more freedom to play around. Given this, I think your conclusion that it can't debug is a bit too strong.

I wanted to give it more of the flexibility of a human programmer solving AoC, so I made it clear up front that it should brainstorm (I used the magic "think step by step" phrase) and iterate, only using me to try to submit solutions to the site. Then I followed its instructions as it tried to solve the tasks. This is subjective and still pretty awkward, and there was confusion over whether it or I should be running the code; I'm sure there's a better way to give it the proper AoC solving experience. But it was good enough for one test. :) I'd call it a partial success: it thought through possible issues and figured out the two things it was doing wrong on Day 3 Part 1, and got the correct answer on the third try (and then got Part 2 with no issues). The failure, though, is that it never seemed to realize it could use the example in the problem statement to help debug its solution (and I didn't tell it).
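For concreteness, the kind of setup prompt I mean looks something like this (an illustrative sketch, not the exact text I used):

```
You are solving an Advent of Code puzzle. Think step by step: brainstorm
possible approaches and edge cases before writing any code, then iterate.
You may revise your solution as many times as you like. I will act only as
your hands: I'll run the code you give me and submit answers to the site,
then report back whether they were accepted.

[problem statement pasted here]
```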

Anyway, the transcript's here, if you want to see ChatGPT4 troubleshooting its solution. It didn't use debug output, but it did "think" (whatever that means) about possible mistakes it might have made and alter its code to fix those mistakes, eventually getting it right. That sure seems like debugging to me.

Remember, it's actually kind of difficult to pin down GPT4's capabilities. There are two reasons it might not be using debug output like you want: a) it's incapable, or b) you're not prompting it right. LLMs are strange, fickle beasts.

I'm glad that, at the start, you (correctly) emphasized that we're talking about intelligence gathering. So please don't fall back to the motte of "I only meant that confessions couldn't be trusted", which you're threatening to do by bringing up the judicial system and "people admitting to things". Some posters did that in the last argument, too. I don't know how many times I can repeat that, duh, torture-extracted confessions aren't legitimate. But confessions and intelligence gathering are completely different things.

Torture being immoral is a fully sufficient explanation for it being purged from our systems. So your argument is worse than useless when it comes to effectiveness - because it actually raises the question of why Western intelligence agencies were still waterboarding people in the 2000s. Why would they keep doing something that's both immoral and ineffective? Shouldn't they have noticed?

When you have a prisoner who knows something important, there are lots of ways of applying pressure. Sometimes you can get by with compassion, negotiation, and so on, which is great. But the horrible fact is that pain has always been the most effective way to get someone to do what you want. There will be some people who will never take a deal, who will never repent, but will still break under torture and give you the information you want. Yes, if you have the wrong person they'll make something up. Even if you have the right person but they're holding out, they might feed you false information (which they might do in all other scenarios, too). Torture is a tool in your arsenal that may be the only way to produce that one address or name or password that you never would have gotten otherwise, but you'll still have to apply the other tools at your disposal too.

Sigh. The above paragraph is obvious and not insightful, and I feel silly having to spell it out. But hey, in some sense it's a good thing that there are people so sheltered that they can pretend pain doesn't work to get evil people what they want. It points to how nice a civilization we've built for ourselves, how absent cruelty ("barbarism", as you put it) is from most people's day-to-day existence.

It's more of a variation of your first possibility, but RT could also be acting out of principal-agent problems, not at the behest of Hollywood executives. The explanations probably overlap. There's also the possibility that they care about their credibility every bit as much as they did in the past, but it's their credibility among tastemakers that's important, not the rabble.

Yeah, I'd be surprised if RT's review aggregation takes "marching orders" from any executives. In fact, I think RT is owned indirectly by Warner Bros., so if anything you'd expect they'd be "adjusting" Disney movies unfavorably. I like your explanation that RT's just sincerely trying to appease the Hollywood elite, rather than provide a useful signal to the masses. It fits.

I'm not sure why you'd put a low prior on the first, though. Particularly for high visibility productions, "everyone" knows to take politics into account when reading reviews. Positively weighting aligned reviews doesn't seem like an incredible step beyond that.

I knew to take that into account with the critics score, which I would usually ignore for the "woke" crap. But in the past I've generally found the audience score trustworthy. Maybe I was just naive, and it took a ridiculous outlier for me to finally notice that they have their fingers on every scale.

Heh, yeah, good example. I happily commit atrocities in videogames all the time. I hope there will continue to be an obvious, bright-line distinction between entities made for our amusement and entities with sentience!

I'm not putting limits on anything. The problem with the "ascension" idea isn't that it's impossible - we can't rule it out - but that every single member of the ascending civilization, unanimously, would have to stop caring about (or affecting, even by accident) the physical galaxy and the rest of the civilizations in it. Despite a lot of fun sci-fi tropes, ascension isn't some MacGuffin you build and then everybody disappears. Our modern civilization didn't stop affecting the savannah just because most of us "ascended" out of there. I consider the explanation "everybody's super powerful but also invisible, coincidentally leaving the galaxy looking indistinguishable from an uncivilized one" to be very unlikely. (Not impossible, though.)

What do you think our long-term future in the galaxy looks like? Is it really likely that our technological civilization will just poof out with no real impact? (Even the AI doom scenario involves a superintelligence that will start gobbling up the reachable Universe.) This is the argument underlying the Fermi Paradox: we have only one example of an intelligent civilization, and there seems to be little standing in the way of us spreading through and changing the galaxy in an unmissable way. Interstellar travel is quite hard, but not impossibly so. The time scale for this would be measured in millions of years, which is barely a hiccup in cosmological terms. So why didn't someone else do it first?
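To put a rough number on "millions of years" (my own back-of-the-envelope figures, order-of-magnitude only): the Milky Way is about $10^5$ light-years across, so a colonization wave expanding at even 1% of light speed crosses it in

$$t \approx \frac{10^5\ \text{ly}}{0.01\,c} = 10^7\ \text{years},$$

about a thousandth of the ~10-billion-year age of the galaxy's older stars. Plenty of time for someone to have done it already.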

On a similar note, I'm very confident I'm not standing next to a nuclear explosion (probability well below 0.001%). Am I overconfident? Ok, yes, I'm being a bit cheeky - the effects of a nuclear explosion are well understood, after all. The chance that there's a "great filter" in our future that would stop us and all similar civilizations from spreading exponentially is a lot larger than 0.001%.

Hi, bullish ML developer here, who is very familiar with what's going on "under the hood". Maybe try not calling the many, many people who disagree with you idiots? It certainly does not "suck at following all but the simplest of instructions", unless you've raised this subjective metric so high that much of the human race would fail your criterion. And while I agree that the hallucination problem is fundamental to the architecture, it has nothing to do with GPT4's reasoning capabilities or lack thereof. If you actually had a "deep understanding" of what's going on under the hood, you'd be aware of this.

Hallucination happens because GPT4 (the model) and ChatGPT (the intelligent oracle it's trying to predict) are distinct entities which do not match perfectly. GPT4 might reasonably guess that ChatGPT would start a response with "the answer is..." even if GPT4 itself doesn't know the answer ... and then the algorithm picks the next word from GPT4's probability distribution anyway, causing a hallucination. Tuning can help reduce the disparity between these entities, but it seems unlikely that we'll ever get it to work perfectly. A new idea will be needed (like, perhaps, an algorithm that does a directed search on response phrases rather than greedily picking unchangeable words one by one).
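To illustrate the mechanism, here's a toy Python sketch (a hand-made distribution, obviously nothing like the real model): greedy decoding commits to each word irrevocably, so a confidently predicted opener can force the model to bluff a specific answer it doesn't actually have.

```python
# Toy next-token distributions, keyed by the tokens emitted so far.
# The opener "The answer is" is predicted with high confidence, but
# every continuation after it is a near-uniform guess.
NEXT_TOKEN = {
    (): {"The": 0.9, "I": 0.1},
    ("The",): {"answer": 0.95, "question": 0.05},
    ("The", "answer"): {"is": 0.99, "was": 0.01},
    ("The", "answer", "is"): {"42": 0.21, "17": 0.20, "7": 0.20,
                              "blue": 0.20, "unknown": 0.19},
}

def greedy_decode(max_len=4):
    tokens = ()
    for _ in range(max_len):
        dist = NEXT_TOKEN.get(tokens)
        if dist is None:
            break
        # Greedy: take the single most likely token, irrevocably.
        tokens += (max(dist, key=dist.get),)
    return " ".join(tokens)

print(greedy_decode())  # "The answer is 42" -- stated confidently,
                        # though "42" had only a 21% chance of being right.
```

A search over whole candidate phrases could back off from "The answer is" after noticing that every continuation is a coin flip; the greedy word-by-word loop never can.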

To be honest, it sounds like you don't have much experience with ChatGPT4 yourself, and think that the amusing failures you read about on blogs (selected because they are amusing) are representative. Let me try to push back on your selection bias with some fairly typical conversations I've had with it (asking for coding help): 1, 2. These aren't selected to be amusing; ChatGPT4 doesn't get everything right, nor does it fail spectacularly. But it does keep up its end of a detailed, unprecedented conversation with no trouble at all.

Sorry, it sounds like you want some easy slam-dunk argument against some sort of cartoonish capital-L Libertarian, but that's not who you're speaking to. :) I don't want NO government and NO regulations - of course some regulations are good. But that says nothing about whether we have TOO MUCH government and TOO MUCH regulation right now. Most of the important obviously good stuff has been in the system for decades (if not centuries), because it's, well, important. And even if we kicked legislators out for 51 weeks out of every 52, the important stuff would still pass because it's, well, important. I happen to believe that most of what our modern legislators do IS net-negative, and I'm afraid you can't just hand-wave that away with a strawman argument.

As for YIMBYs, bless your heart, Charlie Brown, you keep trying to kick that football. Surely one day they'll win! You yourself linked an article about the dire straits we're in. "Don't try to stop or slow down the government, we need it to fix all the problems caused by the last 50 years of government!"

"Brutally" slaughtering a pig in "disgusting" "industrial" conditions? Those are very subjective words. The pig doesn't care that it's not being given a dignified sendoff by its loving family at the end of a fulfilled life in a beautiful grassy glade with dandelions wafting in the breeze. Humans fear death; animals don't even understand the concept. As long as we kill them quickly, I really don't give a shit how it's done.

Which isn't to say I don't have concerns about factory farming. The rest of the pig's life may be filled with suffering, and (IMO) we're rich enough, as a society, to do better. My morality-o-meter is ok with sacrificing, say, 0.01% of value to humans to improve the life of pigs by 500%.

So, I guess your argument is that it doesn't feel icky because you claim he's lying when he says he's doing the icky thing, and his hidden motivation is more practical (and, well, moral)? That's still beside the point - the fact that Dems are completely fine with announcing a racist appointment is the problem, not the 4D chess Newsom might be playing.

Also, I actually do think Newsom would have chosen somebody completely unsuitable, with the right characteristics, if he'd had to. We've seen a string of skin-colour-and-genital-based appointments already from the Dems, from Karine Jean-Pierre to Ketanji Brown Jackson to Kamala Harris herself. I'm sure there are more, but I don't pay that much attention. It would be coincidental if all these people, selected from a favoured 6% of the population, really were the best choices. It really does seem like this is just what you have to do to play ball on the Democrat side.

Well, sure, in a vacuum most people gravitate towards censoring speech they don't like. That doesn't mean it's a good idea. We shouldn't structure society around people's natural destructive impulses; we should structure society around what allows humans to flourish. And we've known for centuries that that is a free and open exchange of ideas. Not because there are no ideas which are genuinely harmful! But because humans and human organizations are too fickle, ignorant, and self-interested to be trusted as arbiters of which ideas meet that standard.

I had an argument about torture here just a few weeks ago.

Bluntly, I absolutely do not buy that torture is "inherently useless". It's an extremely counterintuitive claim. I'm inherently suspicious whenever somebody claims that their political belief also comes with no tradeoffs. And the "torture doesn't work" argument fits the mold of a contrarian position where intellectuals can present cute, clever arguments that "overturn" common sense (and will fortunately never be tested in the real world). It's basically the midwit meme where people get to just the right level of cleverness to be wrong.

I'm on Apple's AI/ML team, but I can't really go into details.

Me personally? Yes, for all the things you listed. But is that really all that surprising? We're on The Motte. The only one you listed that people here would really find controversial is CP, and while I (of course) agree that creating real CP should be illegal, sharing virtual/generated CP harms nobody and should be allowed. (This is basically the situation we're already in with hentai, which is full of hand-drawn underage porn.)

But if you want issues that do challenge my stance, I'd suggest revenge porn, doxxing or the Right To Be Forgotten. So, you're right that my "free speech maximalism" only goes so far; there's always something in this complex world that doesn't have an easy answer.

You might be interested in Greg Egan's book Permutation City, which takes this (as he calls it) Dust Theory and runs with it to the extreme.

Or maybe at Mr. Burns' birthday party...

> The law of non-contradiction, "not both A and not A" or "¬(p ∧ ¬p)", is another first principle.

That one's pretty uncontroversial, but the more interesting one is the law of excluded middle: "either A or not A". We all learn it, but there's a school of thought (intuitionism) that this shouldn't be a basic law. And indeed there are some weeeeeeeird results in math that go away (or become less weird) if you don't allow proof by contradiction.
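To make the contrast concrete, here's a minimal Lean 4 sketch (my own illustration): non-contradiction has a direct constructive proof, while excluded middle only enters as a classical axiom.

```lean
-- Non-contradiction is provable constructively: from a proof of
-- p ∧ ¬p, apply the second component to the first.
theorem non_contradiction (p : Prop) : ¬(p ∧ ¬p) :=
  fun ⟨hp, hnp⟩ => hnp hp

-- Excluded middle has no constructive proof; Lean supplies it
-- only as the classical axiom Classical.em.
theorem excluded_middle (p : Prop) : p ∨ ¬p :=
  Classical.em p
```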

I'd be ambivalent if it was just a few instances, but it really feels like he's exploiting the system. I wouldn't come to themotte if every other top-level post was one person soapboxing about da joos. HBD was similar: Yes, this is (intended to be) one of the few places on the Internet you can freely debate it, but it shouldn't be the only topic of discussion...

...are you seriously asking this? I'm not an insect. If you want to claim some observation of insect behavior has even the slightest relevance to human society, the burden of proof's on you.