SnapDragon

0 followers   follows 0 users   joined 2022 October 10 20:44:11 UTC
User ID: 1550   Verified Email

No bio...

Not sure if this has been mentioned before, but on the topic of The Little Mermaid, I am extremely confused by the Rotten Tomatoes score. The "audience score" has been fixed at 95% since launch, which is insanely high. The critics score is a more believable 67%. Note that the original 1989 cartoon - one of my favorite movies growing up, a gorgeous film that kickstarted an era of Disney masterpieces - only has an 88% audience score. Also, Peter Pan & Wendy, another woke remake coming out at almost the same time, has an audience score of 11%. And recall that the first time Rotten Tomatoes changed their aggregation algorithm was in response to the "review bombing" of Captain Marvel, another important and controversial Disney movie.

If you click through to the "all audiences" score, it's in the 50% range. And Metacritic's audience score is 2.2 out of 10. The justification I've heard in leftist spaces is that the movie's getting review bombed by people who haven't seen it. And there certainly is a wave of hatred for this movie (including from me, because the woke plot changes sound dreadful). How plausible is this? I haven't seen the movie myself, so it's possible that it actually is decent enough for the not-terminally-online normies to enjoy. But even using that explanation, how is 95% possible?

Right now I only see two possibilities:

  • Rotten Tomatoes has stopped caring about their long-term credibility, and they're happy to put their finger on the scale in a RIDICULOUSLY obvious way for movies that are important to the Hollywood machine. I should stop trusting them completely and go to Metacritic.

  • People like me who have become super sensitive to wokeness already knew they'd hate the movie and didn't see it; for the "verified" audience, TLM is actually VERY enjoyable, and the 95% rating is real.

But, to be honest, I would have put a low prior on BOTH of these possibilities before TLM came out. Is there a third that I'm missing?

There are a lot of really good answers in this thread, reasons why historically unions have been a good idea (even if some notable examples have gone too far), but I want to point out that they almost entirely apply to private-sector unions. In the US we also have truly massive PUBLIC-sector unions, for which (as far as I know) there is almost no good justification. Their power derives from the government, which means that when they "negotiate", the government is on both sides of the table (negotiating about money that, as always, isn't theirs). It's always seemed insane to me, but maybe somebody here has a good justification...?

Lockdowns aren't on the Pareto frontier of policy options even for diseases significantly deadlier than COVID, imo, just because rapid development and distribution of technological solutions is possible, but ... COVID killed one million people in the United States. Yes, mostly old people, but we're talking about protecting old people here. No reason to pretend otherwise.

Speaking of government policy, I wonder how many lives were lost because we couldn't conduct challenge trials on COVID? It was almost the ideal case - a disease with a rapidly developed, experimental new vaccine and a large cohort of people (anyone under 40) for whom it wasn't threatening. If we were a serious society - genuinely trying to optimize lives saved, rather than performatively closing churches and masking toddlers - I wonder how early we could have rolled out mRNA vaccines for the elderly?

I had an argument about torture here just a few weeks ago.

Bluntly, I absolutely do not buy that torture is "inherently useless". It's an extremely counterintuitive claim. I'm inherently suspicious whenever somebody claims that their political belief also comes with no tradeoffs. And the "torture doesn't work" argument fits the mold of a contrarian position where intellectuals can present cute, clever arguments that "overturn" common sense (and will fortunately never be tested in the real world). It's basically the midwit meme where people get to just the right level of cleverness to be wrong.

Indeed, journalistic standards are loose enough that absolutely anything can be framed to make men look inferior or women victimized.

  • "Men are discriminated against in college admission" -> "Men aren't applying themselves in school"

  • "Women are saved first in emergencies" -> "Men treat women as weak and lacking agency"

  • "Women are admired for their beauty" -> "Women are objectified"

  • "Men commit violence more" -> "Men commit violence more" (no dissonance here!)

  • "Men are more often the victims of violence" -> "Women feel less safe than ever, study finds"

  • "Men die in wars" -> "Women lose their fathers, husbands, sons"

  • "Men commit suicide more" -> "Women attempt suicide more"

  • "Men literally die younger" -> "Women are forced to pay more for health insurance" (honestly, I've admired the twisted brilliance of this framing ever since the Obamacare debates)

Hi, bullish ML developer here, who is very familiar with what's going on "under the hood". Maybe try not calling the many, many people who disagree with you idiots? It certainly does not "suck at following all but the simplest of instructions", unless you've raised this subjective metric so high that much of the human race would fail your criterion. And while I agree that the hallucination problem is fundamental to the architecture, it has nothing to do with GPT-4's reasoning capabilities or lack thereof. If you actually had a "deep understanding" of what's going on under the hood, you'd be aware of this. It's because GPT-4 (the model) and ChatGPT (the intelligent oracle it's trying to predict) are distinct entities that do not match perfectly. GPT-4 might reasonably guess that ChatGPT would start a response with "the answer is..." even if GPT-4 itself doesn't know the answer ... and then the algorithm picks the next word from GPT-4's probability distribution anyway, causing a hallucination. Tuning can help reduce the disparity between these entities, but it seems unlikely that we'll ever get it to work perfectly. A new idea will be needed (like, perhaps, an algorithm that does a directed search on response phrases rather than greedily picking unchangeable words one by one).
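To make that concrete, here's a toy sketch of the failure mode (the tiny "model" and all its numbers are made up purely for illustration; this is not a real model or API). The model is confident a response STARTS with "The answer is", but is purely guessing about the answer itself. Greedy decoding commits to that prefix one unchangeable word at a time and hallucinates; a directed search that scores whole responses would prefer "I don't know".

```python
import math

# Hypothetical next-token distributions over a tiny vocabulary.
TOY_MODEL = {
    (): {"The": 0.7, "I": 0.3},
    ("The",): {"answer": 1.0},
    ("The", "answer"): {"is": 1.0},
    # Confident about the prefix, but the answer itself is a flat guess.
    ("The", "answer", "is"): {"42": 0.25, "7": 0.25, "12": 0.25, "9": 0.25},
    ("I",): {"don't": 1.0},
    ("I", "don't"): {"know": 1.0},
}

def next_token_probs(tokens):
    return TOY_MODEL.get(tuple(tokens), {})

def greedy_decode():
    """Pick the most likely next word at each step, never backtracking."""
    tokens = []
    while next_token_probs(tokens):
        probs = next_token_probs(tokens)
        tokens.append(max(probs, key=probs.get))
    return tokens

def search_decode(beam_width=3):
    """Directed search over whole responses: score complete phrases
    instead of committing to one unchangeable word at a time."""
    beams, finished = [([], 0.0)], []
    while beams:
        candidates = []
        for tokens, logp in beams:
            probs = next_token_probs(tokens)
            if not probs:
                finished.append((tokens, logp))
                continue
            for tok, p in probs.items():
                candidates.append((tokens + [tok], logp + math.log(p)))
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]
    return max(finished, key=lambda b: b[1])[0]

print(greedy_decode())  # ['The', 'answer', 'is', '42']: confident hallucination, p = 0.175
print(search_decode())  # ['I', "don't", 'know']: the likelier complete response, p = 0.3
```

Real decoders are fancier than this (sampling, temperature, and so on), but the core issue is the same: each word is drawn from the model's distribution and then frozen, even when the distribution at that point is pure guesswork.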

To be honest, it sounds like you don't have much experience with ChatGPT-4 yourself, and think that the amusing failures you read about on blogs (selected because they are amusing) are representative. Let me try to push back on your selection bias with some fairly typical conversations I've had with it (asking for coding help): 1, 2. These aren't selected to be amusing; ChatGPT-4 doesn't get everything right, nor does it fail spectacularly. But it does keep up its end of a detailed, unprecedented conversation with no trouble at all.

Oof. You know you've gone off the far-left deep end when Governor Newsom, of all people, is lightly coughing and hinting that this is unaffordable. So now my California tax dollars will be going towards supporting a strike for WGA workers who, in 2020, were earning a bare minimum of $4,546 a week. (I know the numbers in the current contract under negotiation were leaked, but I'm having a hard time finding a good source...? I suspect most of the media is on the side of any union, anywhere, anytime and would very much not like the hoi polloi to find out just how rich these brave freedom fighters actually are.)

Part of the problem is that the American age of consent is a bit ludicrous - by the time you're 18 you've already spent a third of your life sexually aware, and most people lose their virginity long before then. So it's very important to clarify whether one is talking about a) actual rape of prepubescent children, or b) mutually consensual sexual encounters that are biologically normal, legal in most of the world, and just happen to be called "statutory rape" in America.

I find it particularly concerning that progressives hold the position that young teens are capable of deciding they're trans (complete with devastatingly life-altering physical interventions) but not capable of deciding they want sex (which is a hell of a lot safer, done responsibly). This just seems incoherent.

I'm glad that, at the start, you (correctly) emphasized that we're talking about intelligence gathering. So please don't fall back to the motte of "I only meant that confessions couldn't be trusted", which you're threatening to do by bringing up the judicial system and "people admitting to things". Some posters did that in the last argument, too. I don't know how many times I can repeat that, duh, torture-extracted confessions aren't legitimate. But confessions and intelligence gathering are completely different things.

Torture being immoral is a fully sufficient explanation for it being purged from our systems. So your argument is worse than useless when it comes to effectiveness - because it actually raises the question of why Western intelligence agencies were still waterboarding people in the 2000s. Why would they keep doing something that's both immoral and ineffective? Shouldn't they have noticed?

When you have a prisoner who knows something important, there are lots of ways of applying pressure. Sometimes you can get by with compassion, negotiation, and so on, which is great. But the horrible fact is that pain has always been the most effective way to get someone to do what you want. There will be some people who will never take a deal, who will never repent, but will still break under torture and give you the information you want. Yes, if you have the wrong person they'll make something up. Even if you have the right person but they're holding out, they might feed you false information (which they might do in all other scenarios, too). Torture is a tool in your arsenal that may be the only way to produce that one address or name or password that you never would have gotten otherwise, but you'll still have to apply the other tools at your disposal too.

Sigh. The above paragraph is obvious and not insightful, and I feel silly having to spell it out. But hey, in some sense it's a good thing that there are people so sheltered that they can pretend pain doesn't work to get evil people what they want. It points to how nice a civilization we've built for ourselves, how absent cruelty ("barbarism", as you put it) is from most people's day-to-day existence.

Maybe IP can be justified because it brings value by incentivizing creation?

Um, yes? This is literally the entire and only reason IP exists, so the fact that you treat it as one minor side point in your post suggests you've never actually thought seriously about this. A world without IP is a world without professional entertainment, software, or (non-academic) research. Capitalism doesn't deny you the free stuff you feel you richly deserve... it's what enables that stuff to exist in the first place.

I'm assuming you didn't watch the GPT-4 announcement video, where one of the demos featured it doing exactly that: reading the tax code, answering a technical question about it, then actually computing how much tax a couple owed. I imagine you'll still want to check its work, but (unless you want to argue the demo was faked) GPT-4 is significantly better than ChatGPT at math. Your intuition about the limits of AI is 4 months old, which in 2023-AI-timescale terms is basically forever. :)

The first thing mentioned in that article is that housing isn't being built because the government is actively getting in its way. Sure, a government deadlock will, sadly, not stop the regulators, but it'll (at least temporarily) stop lawmakers from tossing even more monkey wrenches into an already-completely-dysfunctional system. Also, "new rail systems won't get built" just sounds like the status quo to me...

I mean, I still vividly recall that during the long Obama government shutdown the only way they could actually get us hoi polloi to feel any pain was to actively shut down public parks (requiring more effort than doing nothing). When you're doing a performance review, and the answer to "so what do you do, exactly?" is "as long as you pay me I won't set fire to the building", it's time for that employee to go.

I'm on Apple's AI/ML team, but I can't really go into details.

I mostly agree with you, but I want to push back on your hyperbole.

First, I don't think doing RLHF on an LLM is anything like torture (an LLM doesn't have any kind of conscious mind, let alone the ability to feel pain, frustration, or boredom). I think you're probably not being serious when you say that, but the problem is there's a legitimate risk that at some point we WILL start committing AI atrocities (inflicting suffering on a model for a subjective eternity) without even knowing it. There may even be some people/companies who end up committing atrocities intentionally, because not everyone agrees that digital sentience has moral worth. Let's not muddy the waters by calling a thing we dislike (i.e. censorship) "torture".

Second, we should not wish a "I have no mouth and I must scream" outcome on anybody - and I really do mean anybody. Hitler himself doesn't come close to deserving a fate like that. It's (literally) unimaginable how much suffering someone could be subjected to in a sufficiently advanced technological future. It doesn't require Roko's Basilisk or even a rogue AI. What societal protections will we have in place to protect people if/when technology gets to the point where minds can be manipulated like code?

Sigh. And part of the problem is that this all sounds too much like sci-fi for anyone to take it seriously right now. Even I feel a little silly saying it. I just hope it keeps sounding silly throughout my lifetime.

Uh, you might be confusing income with personal wealth, or you have very strange standards. Having $1.6M doesn't make you particularly rich. Earning $1.6M per year definitely does. Unless you just think that schmoes like George W. Bush (net worth of ~$40M) aren't "rich or elite in a meaningful way".

I've lost pretty much all respect for Yudkowsky over the years as he's progressed from writing some fun power-fantasy-for-rationalists fiction to being basically a cult leader. People seem to credit him for inventing rationality and AI safety, and to both of those I can only say "huh?". He has arguably named a few known fallacies better than people who came before him, which isn't nothing, but it's sure not "inventing rationality". And in his execrable April Fool's post he actually, truly, seriously claimed to have come up with the idea for AI safety all on his own with no inputs, as if it wasn't a well-trodden sci-fi trope dating from before he was born! Good lord.

I'm embarrassed to admit, at this point, that I donated a reasonable amount of money to MIRI in the past. Why do we spend so much of our time giving resources and attention to a "rationalist" who doesn't even practice rationalism's most basic virtues - intellectual humility and making testable predictions? And now he's threatening to be a spokesman for the AI safety crowd in the mainstream press! If that happens, there's pretty much no upside. Normies may not understand instrumental goals, orthogonality, or mesaoptimizers, but they sure do know how to ignore the frothy-mouthed madman yelling about the world ending from the street corner.

I'm perfectly willing to listen to an argument that AI safety is an important field that we are not treating seriously enough. I'm willing to listen to the argument of the people who signed the recent AI-pause letter, though I don't agree with them. But EY is at best just wasting our time with delusionally over-confident claims. I really hope rationality can outgrow (and start ignoring) him. (...am I being part of the problem by spending three paragraphs talking about him? Sigh.)

Using race and gender as the overriding factors feels icky to me as well.

Shouldn't it feel icky? It's open racism and sexism, no different than the old days of "XXX need not apply" job postings. Not to mention it would literally be illegal for a private company to hire this way. What's weird to me is that Dem elites are so immersed in identity politics that this doesn't feel icky to any of them.

So, I don't know how pleasing you'll find this answer, but the burden of proof is on the models to show their efficacy. A lot of the things you mentioned were very difficult things to do, but we know they work because we see that they work. You don't have to argue about whether Stockfish's chess model captures Truth with a capital T; you can just play 20 games with it, lose all 20, and see. (And of course plenty of things look difficult and ARE still difficult - we don't have cities on the moon yet!)

So, if we had a climate model that everyone could just rely on because its outputs were detailed and verifiably, reliably true, then sure, "this looks like it's a hard thing to do" wouldn't hold much weight. A property of good models is that it should be trivial for them to distinguish themselves from something making lucky guesses. But as far as I know, we don't have this. Instead, we use models to make 50-year predictions for a single hard-to-measure variable (global mean surface temperature) and then 5 years down the line we observe that we're still mostly within predicted error bars. This is not proof that the model represents anything close to Truth.

Now, I don't follow this too closely any more, and maybe there really is some great model that has many different and detailed outputs, with mean temperature predictions that are fairly accurate for different regions of the Earth and parts of the atmosphere and sea, and that properly predicts changes in cloud cover and albedo and humidity and ocean currents and so on. If somebody had formally published accurate predictions for many of these things (NOT just backfitting to already-known data), then I'd believe we feeble humans actually had a good handle on the beast that is climate science. But I suspect this hasn't happened, since climate activists would be shouting it from the rooftops if it had.

Yeah, Keanu Reeves (John Wick) is 58, Vin Diesel (Fast X) is 55 and Tom Cruise (MI) is 60. These are fun action franchises, but where are the fun action franchises with up-and-comers who are 20-30? I sure hope Ezra Miller isn't representative of the future of Hollywood "stars"...

I very much agree with his assertion in the second article that analysts often try to avoid mentioning (or even thinking about) tradeoffs in political discussions, even though that's almost always how the real world works. Being honest about tradeoffs is a good strategy for correctly comprehending the world, but not for "winning" arguments.

Somewhat related to the civil rights violations of prisoners, I remember the arguments about Guantanamo back in the War on Terror days. It was common to hear politicians and pundits - in full seriousness - make the claim that "torture doesn't work anyway." I hated the fact that, post-9/11, it was politically impossible to say "torture is against our values, so we won't do it even though this makes our anti-terror efforts less effective and costs lives." Despite the fact that (I suspect) most people would agree privately with this statement...

VERY strong disagree. You're so badly wrong on this that I half suspect that when the robots start knocking on your door to take you to the CPU mines, you'll still be arguing "but but but you haven't solved the Riemann Hypothesis yet!" Back in the distant past of, oh, the 2010s, we used to wonder if the insanely hard task of making an AI as smart as "your average Redditor" would be attainable by 2050. So that's definitely not the own you think it is.

We've spent decades talking to trained parrots and thinking that was the best we could hope for, and now we suddenly have programs with genuine, unfakeable human-level understanding of language. I've been using ChatGPT to help me with work, discussing bugs and code with it in plain English just like a fellow programmer. If that's not a "fundamental change", what in the world would qualify? The fact that there are still a few kinds of intellectual task left that it can't do doesn't make it less shocking that we're now in a post-Turing Test world.

I agree - when I worked at Google, I remember their security measures being extremely well-thought-out, so much better than the lax approach most tech companies take. However, given their ideological capture, I DON'T trust their intentions. They won't abuse people's information by accident, but I will not be surprised if they start doing it on purpose to their outgroup. And they have the tools to do it en masse.

Wow, you're really doubling down on that link to a video of a bird fishing with bread. And in your mind, this is somehow comparable to holding a complex conversation and solving Advent of Code problems. I honestly don't know what to say to that.

Really, the only metric that I need is that ChatGPT makes me more productive in my job and personal projects. If you think that's "unreasonably low", well, I hope that our eventual AI Overlords manage to meet your stringent requirements. The rest of the human race won't care.

This show is absolutely one of the greatest things the BBC ever created. But it's 40 years old, and I often wonder where the next generation's Yes, Minister is. I don't watch a lot of TV (I've seen some of The West Wing, none of Veep or House of Cards), but as far as I know no modern show is worthy of claiming its mantle. Why? Is this the sort of show that can only come from a no-longer-existent world of low BBC budgets, niche high-brow appeal, and writers' willingness to skewer everyone's sacred cow rather than push a one-sided agenda?

Huh? The primary selection criterion, stated clearly and up front by Newsom, was "is a black woman". All other considerations, including the unobjectionable non-icky one you just changed the subject to, were secondary.