Er, but "man" and "woman" really do have an objective scientific meaning, unlike "relative", which is a social convention. (Note that it would be equally incorrect to say "an in-law is your blood relative".) So I don't agree with your analogies; saying "trans women are women" is just an incorrect statement of fact, rather than describing social conventions.
That said, I do think your framing of transness as a social status is reasonable. If we were simply allowed to say someone was "living as the other sex", rather than the Orwellian thought control that the ideologues insist on, I think it wouldn't be nearly as controversial.
Not sure if this has been mentioned before, but on the topic of The Little Mermaid, I am extremely confused by the Rotten Tomatoes score. The "audience score" has been fixed at 95% since launch, which is insanely high. The critics score is a more-believable 67%. Note that the original 1989 cartoon - one of my favorite movies growing up, a gorgeous movie that kickstarted an era of Disney masterpieces - only has an 88% audience score. Also, Peter Pan & Wendy, another woke remake coming out at almost the same time, has an audience score of 11%. And recall that the first time Rotten Tomatoes changed their aggregation algorithm was actually in response to Captain Marvel's "review bombing", another important and controversial Disney movie.
If you click through to the "all audiences" score, it's in the 50% range. And metacritic's audience score is 2.2 out of 10. The justification I've heard in leftist spaces is that the movie's getting review bombed by people who haven't seen it. And there certainly is a wave of hatred for this movie (including from me, because the woke plot changes sound dreadful). How plausible is this? I haven't seen the movie myself, so it's possible that it actually is decent enough for the not-terminally-online normies to enjoy. But even using that explanation, how is 95% possible?
Right now I only see two possibilities:
- Rotten Tomatoes has stopped caring about their long-term credibility, and they're happy to put their finger on the scale in a RIDICULOUSLY obvious way for movies that are important to the Hollywood machine. I should stop trusting them completely and go to Metacritic.
- People like me who have become super sensitive to wokeness already knew they'd hate the movie and didn't see it; for the "verified" audience, TLM is actually VERY enjoyable, and the 95% rating is real.
But, to be honest, I would have put a low prior on BOTH of these possibilities before TLM came out. Is there a third that I'm missing?
Fantastic post, thanks! Lots of stuff in there that I can agree with, though I'm a lot more optimistic than you. Those 3 questions are well stated and help to clarify points of disagreement, but (as always) reality probably doesn't factor so cleanly.
I really think almost all the meat lies in Question 1. You're joking a little with the "line goes to infinity" argument; I think almost everyone reasonable agrees that near-future AI will plateau somehow, but there's a world of difference in where it plateaus. If it goes to ASI (say, 10x smarter than a human or better), then fine, we can argue about questions 2 and 3 (though I know this is where doomers love spending their time). Admittedly, it IS kind of wild that this is a tech where we can seriously talk about singularity and extinction as potential outcomes with actual percentage probabilities. That certainly didn't happen with the cotton gin.
There's just so much space between "as important as the smartphone" -> "as important as the internet" (which I am pretty convinced is the baseline, given current AI capabilities) -> "as important as the industrial revolution" -> "transcending physical needs". I think there's a real motte/bailey in effect, where skeptics will say "current AIs suck and will never get good enough to replace even 10% of human intellectual labour" (bailey), but when challenged with data and benchmarks, will retreat to "AIs becoming gods is sci-fi nonsense" (motte). And I think you're mixing the two somewhat, talking about AIs just becoming Very Good in the same paragraph as superintelligences consuming galaxies.
I'm not even certain assigning percentages to predictions like this really makes much sense, but just based on my interactions with LLMs, my good understanding of the tech behind them, and my experience using them at work, here are my thoughts on what the world looks like in 2030:
- 2%: LLMs really turn out to be overhyped, attempts at getting useful work out of them have sputtered out, I have egg all over my face.
- 18%: ChatGPT o3 turns out to be roughly at the plateau of LLM intelligence. Open-source models have caught up, and everything is 1000x cheaper to use due to chip improvements, but hallucinations and lack of common sense are still a fundamental flaw in how LLM algorithms work. LLMs are the next Google - humans can't imagine doing stuff without a better-than-Star-Trek encyclopedic assistant available to them at all times.
- 30%: LLMs plateau at roughly human-level reasoning and superhuman knowledge. A huge amount of work at companies is being done by LLMs (or whatever their descendant is called), but humans remain employed. The work the humans do is even more bullshit than the current status quo, but society is still structured around humans "pretending to work" and is slow to change. This is the result of "Nothing Ever Happens" colliding with a transformative technology. It really sucks for people who don't get the useless college credentials to get in the door to the useless jobs, though.
- 40%: LLMs are just better than humans. We're in the middle of a massive realignment of almost all industries; most companies have catastrophically downsized their white-collar jobs, and embodied robots/self-driving cars are doing a great deal of blue-collar work too. A historically unprecedented number of humans are unemployable, economically useless. UBI is the biggest political issue in the world. But at least entertainment will be insanely abundant, with Hollywood-level movies and AAA-level videogames being as easy to make as Royal Road novels are now.
- 9.5%: AI recursively accelerates AI research without hitting engineering bottlenecks (a la "AI 2027"), ASI is the new reality for us. The singularity is either here or visibly coming. Might be utopian, might be dystopian, but it's inescapable.
- 0.5%: Yudkowsky turns out to be right (mostly by accident, because LLMs resemble the AI in his writings about as closely as they resemble Asimov's robots). We're all dead.
Let me join the chorus of voices enthusiastically agreeing with you about how jobs are already bullshit. I've never been quite sure whether this maximally cynical view is true, but it sure feels true. One white-collar worker has 10x more power to, well, do stuff than 100 years ago, but somehow we keep finding things for them to do. And so Elon can fire 80% of Twitter staff, and "somehow" Twitter continues to function normally.
With that said, I worry that this is a metastable state. Witness how thoroughly the zeitgeist of work changed after COVID - all of a sudden, in my (bullshit white-collar) industry, it's just the norm to WFH and maybe grudgingly come in to the office for a few hours 3 days a week. Prior to 2020, it was very hard to get a company to agree to let you WFH even one day a week, because they knew you'd probably spend the time much less productively. Again, "somehow" the real work that was out there still seems to get done.
If AI makes it more and more obvious that office work is now mostly just adult daycare, that lowers the transition energy even more. And we might just be waiting for another sudden societal shock to get us over that hump, and transition to a world where 95% of people are unemployed and this is considered normal. We're heading into uncharted waters.
There are a lot of really good answers in this thread, reasons why historically unions have been a good idea (even if some notable examples have gone too far), but I want to point out that they almost entirely apply to private-sector unions. In the US we also have truly massive PUBLIC-sector unions, which (as far as I know) there is almost no good justification for. Their power derives from the government, which means that when they "negotiate", the government is the one on both sides of the table (negotiating about money that, as always, isn't theirs). It's always seemed insane to me, but maybe somebody here has a good justification...?
Uh, what do you mean we don't have self-driving cars? I took two driverless Waymo rides last week, navigating the nasty, twisting streets of SF. It drove just fine. Maybe you could argue it's not cost-effective yet, or that there are still regulatory hurdles, but I think what you meant is that the tech doesn't work. And that's clearly false.
Also, I'm a programmer and productively using ChatGPT at work, so I'd say the score so far is Magusoflight 0, my lying eyes 2.
Lockdowns aren't on the Pareto frontier of policy options even for diseases significantly deadlier than COVID, IMO, just because rapid development and distribution of technological solutions is possible, but ... COVID killed one million people in the United States. Yes, mostly old people, but we're talking about protecting old people here. No reason to pretend otherwise.
Speaking of government policy, I wonder how many lives were lost because we couldn't conduct challenge trials on COVID? It was almost the ideal case - a disease with a rapidly-developed, experimental new vaccine and a large cohort of people (anyone under 40) for whom it wasn't threatening. If we were a serious society - genuinely trying to optimize lives saved, rather than performatively closing churches and masking toddlers - I wonder how early we could have rolled out mRNA vaccines for the elderly?
I had an argument about torture here just a few weeks ago.
Bluntly, I absolutely do not buy that torture is "inherently useless". It's an extremely counterintuitive claim. I'm inherently suspicious whenever somebody claims that their political belief also comes with no tradeoffs. And the "torture doesn't work" argument fits the mold of a contrarian position where intellectuals can present cute, clever arguments that "overturn" common sense (and will fortunately never be tested in the real world). It's basically the midwit meme where people get to just the right level of cleverness to be wrong.
Indeed, journalistic standards are loose enough that absolutely anything can be framed to make men look inferior or women victimized.
- "Men are discriminated against in college admission" -> "Men aren't applying themselves in school"
- "Women are saved first in emergencies" -> "Men treat women as weak and lacking agency"
- "Women are admired for their beauty" -> "Women are objectified"
- "Men commit violence more" -> "Men commit violence more" (no dissonance here!)
- "Men are more often the victims of violence" -> "Women feel less safe than ever, study finds"
- "Men die in wars" -> "Women lose their fathers, husbands, sons"
- "Men commit suicide more" -> "Women attempt suicide more"
- "Men literally die younger" -> "Women are forced to pay more for health insurance" (honestly, I've admired the twisted brilliance of this framing ever since the Obamacare debates)
Hi, bullish ML developer here, who is very familiar with what's going on "under the hood". Maybe try not calling the many, many people who disagree with you idiots? It certainly does not "suck at following all but the simplest of instructions", unless you've raised this subjective metric so high that much of the human race would fail your criterion. And while I agree that the hallucination problem is fundamental to the architecture, it has nothing to do with GPT4's reasoning capabilities or lack thereof. If you actually had a "deep understanding" of what's going on under the hood, you'd be aware of this. It's because GPT4 (the model) and ChatGPT (the intelligent oracle it's trying to predict) are distinct entities which do not match perfectly. GPT4 might reasonably guess that ChatGPT would start a response with "the answer is..." even if GPT4 itself doesn't know the answer ... and then the algorithm picks the next word from GPT4's probability distribution anyway, causing a hallucination. Tuning can help reduce the disparity between these entities, but it seems unlikely that we'll ever get it to work perfectly. A new idea will be needed (like, perhaps, an algorithm that does a directed search on response phrases rather than greedily picking unchangeable words one by one).
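To make that model-vs-oracle point concrete, here's a toy sketch (made-up probabilities, not a real LLM or any actual API) of how greedy, word-by-word decoding can lock in a confident-sounding prefix like "the answer is..." before the model ever reaches the part it's actually unsure about:

```python
# Toy illustration (not a real LLM): how committing to words one at a time
# can force the model to invent a fact. All probabilities are invented for
# the sake of the example.
import random

# Hypothetical next-word distributions, keyed by the words generated so far.
NEXT_WORD = {
    (): {"The": 0.9, "I'm": 0.1},
    ("The",): {"answer": 0.95, "question": 0.05},
    ("The", "answer"): {"is": 0.99, "depends": 0.01},
    # Once committed to "The answer is", the model is genuinely unsure which
    # year is correct -- but the decoder has to pick *something*.
    ("The", "answer", "is"): {"1912": 0.34, "1914": 0.33, "1916": 0.33},
}

def greedy_decode():
    """Pick the single most likely next word at every step."""
    words = ()
    while words in NEXT_WORD:
        dist = NEXT_WORD[words]
        words = words + (max(dist, key=dist.get),)
    return " ".join(words)

def sampled_decode(seed=0):
    """Sample each next word from the distribution instead of taking the argmax."""
    rng = random.Random(seed)
    words = ()
    while words in NEXT_WORD:
        dist = NEXT_WORD[words]
        choices, probs = zip(*dist.items())
        words = words + (rng.choices(choices, probs)[0],)
    return " ".join(words)

if __name__ == "__main__":
    # Both decoders confidently emit "The answer is <year>" even though the
    # model's belief about the year is close to uniform: the confident prefix
    # was locked in before the uncertain part was ever reached.
    print(greedy_decode())
    print(sampled_decode())
```

A phrase-level search over whole responses (something beam-search-like, scoring candidate completions instead of committing word by word) could in principle notice that every continuation of "The answer is" is low-confidence and prefer a hedged reply instead; that's roughly the kind of "new idea" I mean.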
To be honest, it sounds like you don't have much experience with ChatGPT4 yourself, and think that the amusing failures you read about on blogs (selected because they are amusing) are representative. Let me try to push back on your selection bias with some fairly typical conversations I've had with it (asking for coding help): 1, 2. These aren't selected to be amusing; ChatGPT4 doesn't get everything right, nor does it fail spectacularly. But it does keep up its end of a detailed, unprecedented conversation with no trouble at all.
Oof. You know you've gone off the far-left deep end when governor Newsom, of all people, is lightly coughing and hinting that this is unaffordable. So now my California tax dollars will be going towards supporting a strike for WGA workers who, in 2020, were earning a bare minimum of $4,546 a week. (I know the numbers in the current contract under negotiation were leaked, but I'm having a hard time finding a good source...? I suspect most of the media is on the side of any union, anywhere, anytime and would very much not like the hoi polloi to find out just how rich these brave freedom fighters actually are.)
Part of the problem is that the American age of consent is a bit ludicrous - by the time you're 18 you've already spent a third of your life sexually aware, and most people lose their virginity long before then. So it's very important to clarify whether one is talking about a) actual rape of prepubescent children, or b) mutually consensual sexual encounters that are biologically normal, legal in most of the world, and just happen to be called "statutory rape" in America.
I find it particularly concerning that progressives hold the position that teens are capable of deciding they're trans (complete with devastatingly life-altering physical interventions) when they're young but not capable of deciding they want sex (which is a hell of a lot safer, done responsibly). This just seems incoherent.
Yeah, the geopolitics in that story are just cringingly bad fiction. (It's really weird that the "superforecasters" who wrote it don't really seem to understand how the world works?) And I'm guessing the main chart listing "AI Boyfriends" instead of "AI Girlfriends" is also part of Scott's masterwork - he does really like to virtue signal by swapping generic genders in the least sensible ways.
But the important part is the AI predictions, and I'll admit they put together a nice list of graphs and citations. However, I still feel like, with their destination already decided, they were just backfitting all the new data to the same old doomer predictions from years ago - terminal goals, deceptive alignment, etc. LLMs are meaningfully different than the reward-seeking recursive agents that we used to think would be the AI frontrunners, but this AI 2027 report could basically have come out in 2020 without changing any of the AI Safety language.
They have a single appendix in their "AI Goals Forecast" subsection that gives a "story" (their words!) about how LLMs may somehow revert to reward-seeking cognition. But it's not evidence-based, and it is the single most vital part of their 2027 prediction! Oh dear.
I'm glad that, at the start, you (correctly) emphasized that we're talking about intelligence gathering. So please don't fall back to the motte of "I only meant that confessions couldn't be trusted", which you're threatening to do by bringing up the judicial system and "people admitting to things". Some posters did that in the last argument, too. I don't know how many times I can repeat that, duh, torture-extracted confessions aren't legitimate. But confessions and intelligence gathering are completely different things.
Torture being immoral is a fully sufficient explanation for it being purged from our systems. So your argument is worse than useless when it comes to effectiveness - because it actually raises the question of why Western intelligence agencies were still waterboarding people in the 2000s. Why would they keep doing something that's both immoral and ineffective? Shouldn't they have noticed?
When you have a prisoner who knows something important, there are lots of ways of applying pressure. Sometimes you can get by with compassion, negotiation, and so on, which is great. But the horrible fact is that pain has always been the most effective way to get someone to do what you want. There will be some people who will never take a deal, who will never repent, but will still break under torture and give you the information you want. Yes, if you have the wrong person they'll make something up. Even if you have the right person but they're holding out, they might feed you false information (which they might do in all other scenarios, too). Torture is a tool in your arsenal that may be the only way to produce that one address or name or password that you never would have gotten otherwise, but you'll still have to apply the other tools at your disposal too.
Sigh. The above paragraph is obvious and not insightful, and I feel silly having to spell it out. But hey, in some sense it's a good thing that there are people so sheltered that they can pretend pain doesn't work to get evil people what they want. It points to how nice a civilization we've built for ourselves, how absent cruelty ("barbarism", as you put it) is from most people's day-to-day existence.
Maybe IP can be justified because it brings value by incentivizing creation?
Um, yes? This is literally the entire and only reason IP exists, so the fact that you have it as one minor side point in your post suggests you've never actually thought seriously about this. A world without IP is a world without professional entertainment, software, or (non-academic) research. Capitalism doesn't deny you the free stuff you feel you richly deserve... it enables its existence in the first place.
I'm assuming you didn't watch the GPT-4 announcement video, where one of the demos featured it doing exactly that: reading the tax code, answering a technical question about it, then actually computing how much tax a couple owed. I imagine you'll still want to check its work, but (unless you want to argue the demo was faked) GPT-4 is significantly better than ChatGPT at math. Your intuition about the limits of AI is 4 months old, which in 2023-AI-timescale terms is basically forever. :)
The first thing mentioned in that article is that housing isn't being built because the government is actively getting in its way. Sure, a government deadlock will, sadly, not stop the regulators, but it'll (at least temporarily) stop lawmakers from tossing even more monkey wrenches into an already-completely-dysfunctional system. Also, "new rail systems won't get built" just sounds like the status quo to me...
I mean, I still vividly recall that during the long Obama government shutdown the only way they could actually get us hoi polloi to feel any pain was to actively shut down public parks (requiring more effort than doing nothing). When you're doing a performance review, and the answer to "so what do you do, exactly?" is "as long as you pay me I won't set fire to the building", it's time for that employee to go.
Consider this a warning; keep posting AI slop and I'll have to put on my mod hat and punish you.
Boo. Boo. Boo. Your mod hat should be for keeping the forum civil, not winning arguments. In a huge content-filled human-written post, he merely linked to an example of a current AI talking about how it might Kill All Humans. It was an on-topic and relevant external reference (most of us here happen to like evidence, yanno?). He did nothing wrong.
Great post. But I'm pessimistic; Scott's posted about how EA is positively addicted to criticizing itself, but the trans movement is definitely not like that. You Shall Not Question the orthodox heterodoxy. People like Ziz may look ridiculous and act mentally deluded (dangerously so, in retrospect), but it wouldn't be "kind" to point that out!
When I go to rationalist meetups, I actually think declaring myself to be a creationist would be met more warmly than declaring that biology is real and there's no such thing as a "female brain in a male body". (Hell, I bet people would be enthused at getting to argue with a creationist.) Because of this, I have no way to know whether 10% or 90% of the people around me are reasonable and won't declare me an Enemy of the People for saying unfashionable true things. If it really is 90% ... well, maybe there's hope. We'd just need a phase change where it becomes common knowledge that most people are anti-communist gender-realist.
I'm on Apple's AI/ML team, but I can't really go into details.
I mostly agree with you, but I want to push back on your hyperbole.
First, I don't think doing RLHF on an LLM is anything like torture (an LLM doesn't have any kind of conscious mind, let alone the ability to feel pain, frustration, or boredom). I think you're probably not being serious when you say that, but the problem is there's a legitimate risk that at some point we WILL start committing AI atrocities (inflicting suffering on a model for a subjective eternity) without even knowing it. There may even be some people/companies who end up committing atrocities intentionally, because not everyone agrees that digital sentience has moral worth. Let's not muddy the waters by calling a thing we dislike (i.e. censorship) "torture".
Second, we should not wish a "I have no mouth and I must scream" outcome on anybody - and I really do mean anybody. Hitler himself doesn't come close to deserving a fate like that. It's (literally) unimaginable how much suffering someone could be subjected to in a sufficiently advanced technological future. It doesn't require Roko's Basilisk or even a rogue AI. What societal protections will we have in place to protect people if/when technology gets to the point where minds can be manipulated like code?
Sigh. And part of the problem is that this all sounds too much like sci-fi for anyone to take it seriously right now. Even I feel a little silly saying it. I just hope it keeps sounding silly throughout my lifetime.
Uh, you might be confusing income with personal wealth, or you have very strange standards. Having $1.6M doesn't make you particularly rich. Earning $1.6M per year definitely does. Unless you just think that schmoes like George W. Bush (net worth of ~$40M) aren't "rich or elite in a meaningful way".
I've lost pretty much all respect for Yudkowsky over the years as he's progressed from writing some fun power-fantasy-for-rationalists fiction to being basically a cult leader. People seem to credit him for inventing rationality and AI safety, and to both of those I can only say "huh?". He has arguably named a few known fallacies better than people who came before him, which isn't nothing, but it's sure not "inventing rationality". And in his execrable April Fool's post he actually, truly, seriously claimed to have come up with the idea for AI safety all on his own with no inputs, as if it wasn't a well-trodden sci-fi trope dating from before he was born! Good lord.
I'm embarrassed to admit, at this point, that I donated a reasonable amount of money to MIRI in the past. Why do we spend so much of our time giving resources and attention to a "rationalist" who doesn't even practice rationalism's most basic virtues - intellectual humility and making testable predictions? And now he's threatening to be a spokesman for the AI safety crowd in the mainstream press! If that happens, there's pretty much no upside. Normies may not understand instrumental goals, orthogonality, or mesaoptimizers, but they sure do know how to ignore the frothy-mouthed madman yelling about the world ending from the street corner.
I'm perfectly willing to listen to an argument that AI safety is an important field that we are not treating seriously enough. I'm willing to listen to the argument of the people who signed the recent AI-pause letter, though I don't agree with them. But EY is at best just wasting our time with delusionally over-confident claims. I really hope rationality can outgrow (and start ignoring) him. (...am I being part of the problem by spending three paragraphs talking about him? Sigh.)
Using race and gender as the overriding factors feels icky to me as well.
Shouldn't it feel icky? It's open racism and sexism, no different than the old days of "XXX need not apply" job postings. Not to mention it would literally be illegal for a private company to hire this way. What's weird to me is that Dem elites are so immersed in identity politics that this doesn't feel icky to any of them.
It may not be a universally-accepted truth, but it is a scientific truth. We're a sexually dimorphic species. There are plenty of tests which easily tell the two groups apart with 99.99% accuracy, and if you're MtF you'd sure as hell better inform your doctor of that fact rather than acting like you're just a normal woman.
Joe Blow down the street thinks he's Napoleon. So, it's not a "universally-accepted truth" that he's not Napoleon. And maybe he gets violent if you don't affirm his Napoleonness in person, so there are cases where feeding his delusion is the path of least resistance. There's a "fundamental values conflict" there. But it remains an objective truth that he's not Napoleon.