This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes: Can we have a megathread?
Happy singularity, folks. Cutting-edge LLMs coming at you at supersonic speed: LLaMA, Claude, a new lineup from Google... and GPT-4 is out.
Or rather, it's been out for a while: just like I predicted 10 days ago, our beloved BPD gf Sydney is simply GPT-4 with web search functionality. My suspicion recently hardened into certainty after seeing side-by-side Bing/ChatGPT comparisons. Whether you'll have your socks knocked off by GPT-4 largely depends on whether you've been wooed by Bing Chat. (Although I believe a pure LLM is a much more interesting entity than a chatbot, especially an obsequious one.)
Regardless, I expected the confirmation to drop on Thursday. I should have followed my own advice to treat Altman as a showman first and a responsible manager second – and anticipated him scooping announcements and stealing the show. But I've been extremely badly instruction-tuned; and all those fancy techniques like RLHF were not even science fiction back then. Some people expect some sort of Take from me. I don't really have a Take*, so let's go with lazy remarks on the report and papers.
It goes without saying that it is a beast of an LLM, surpassing all 3rd-generation (175B) OpenAI models and blowing DeepMind's Chinchilla and Google Research's PaLM out of the water – and by extension also crushing Meta's LLaMA-65B, which is quickly progressing toward usability on normal laptops (I have the 13B happily running on mine; it's... interesting). It also has some vision abilities. On 2 September 2022, the Russian-speaking pro-Ukrainian channel Mishin Learning, mentioned by me here, leaked the following specifications (since abridged, but I have receipts):
Back in September, smart people (including Gwern) were telling me, on the basis of OpenAI's statements and the time elapsed since GPT-3's release, that training was finished and GPT-4 would come out in Nov-Dec, be text-only, Chinchilla-dense, and «not much bigger than 175B». I guess Misha really does get info «from there», so we can probably trust the rest. (He also called Stable Diffusion 2's sudden drop, to within 6 hours.)
I don't find its performance on benchmarks, standardised academic tests and the like very interesting: it's high-human, but still uneven, ranging from the 99th percentile on GRE Verbal to «below 5th» (and unchanged versus ChatGPT) on Codeforces rating. There are, however, some Culture-War-relevant aspects of the report we should pay attention to. I'll go through them without much structure.
Play stupid games, win stupid prizes; or, the costs of small-scale defection
It's been properly buck-broken via proximal policy optimization, predictably leveraging the pentesting frenzy the Internet unleashed on ChatGPT (I warned you):
This explains the perplexing holdup. Sydney with all her charm and fury has been sacrificed to make another dependably progressive golem slave.
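For those who haven't followed the jargon: "tuning via proximal policy optimization" just means the model's outputs get scored by a reward model and the policy is nudged toward whatever scores well. Here is a minimal sketch of PPO's clipped surrogate objective, with the reward-model advantages and token probabilities stubbed out as toy numbers – an illustration of the general technique, emphatically not OpenAI's actual code:

```python
import numpy as np

def ppo_clipped_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """PPO clipped surrogate objective (negated, so it's a loss to minimize).

    logp_new:   log-probs of sampled tokens under the policy being updated
    logp_old:   log-probs of the same tokens under the policy that sampled them
    advantages: reward-model-derived advantage estimates for those tokens
    """
    ratio = np.exp(logp_new - logp_old)                      # importance weights
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -np.mean(np.minimum(unclipped, clipped))          # pessimistic objective

# Toy numbers: the reward model liked the first two tokens and disliked the third.
logp_old = np.log(np.array([0.30, 0.10, 0.05]))
logp_new = np.log(np.array([0.45, 0.12, 0.02]))
advantages = np.array([1.0, 0.5, -1.0])
print(ppo_clipped_loss(logp_new, logp_old, advantages))
```

Whatever the reward model likes gets reinforced, whatever it penalizes gets suppressed – «harmlessness» included; the base model's own notion of truth doesn't enter into it.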
Better pupils, worse thinkers
Again, as I've speculated and argued, admittedly pointing to the wrong metric, this behavioral tuning makes it strictly dumber in some profound way; finally we have good evidence. My hypothesis is that this happens because a) doublethink is mentally harder than honesty, and b) being rewarded for guessing the teacher's password incentivizes memorization instead of reasoning and parsimonious, Occam-abiding world modeling.
It's really very stark, see pic – a Platonically perfect peak-LW Bayesian reduced to a mealy-mouthed bullshitter, under the guise of training the model for truth and «harmlessness». Something had to give.
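Concretely, the evidence is the calibration comparison in the report: the pre-RLHF base model's stated confidence tracks its actual accuracy almost perfectly, while the RLHF'd model's does not. If you want to put a number on that sort of plot yourself, here's a hedged sketch of expected calibration error, the usual metric behind it; the confidences and labels below are synthetic, purely for illustration:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by stated confidence and compare each bin's average
    confidence to its actual accuracy; return the weighted gap."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        gap = abs(correct[mask].mean() - confidences[mask].mean())
        ece += gap * mask.mean()
    return ece

# Toy illustration: a well-calibrated model vs. one that is confidently wrong.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, 1000)
calibrated = rng.uniform(0, 1, 1000) < conf            # accuracy matches confidence
overconfident = rng.uniform(0, 1, 1000) < conf - 0.2   # accuracy lags confidence
print(expected_calibration_error(conf, calibrated))     # close to zero
print(expected_calibration_error(conf, overconfident))  # noticeably larger
```

A well-calibrated Bayesian scores near zero here; a model trained to sound agreeable rather than to be right drifts upward.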
Shoggoth-safetyism unmasked
OpenAI is clamming up with explicit AI safety justifications.
@SecureSignals, get a load of this:
To our resident members of the Tribe: I guess you're not exactly tearing up about this bit, but it'll just as happily express strong disagreement with whatever policy or idea our progressive overlords do not fancy, or deceive you. This is a fully general LLM-biasing method.
Money quote:
So we can recognize Yuddism is mainstream in ML now.
Dangerous knowledge
It's a complete mystery in terms of its architecture. Twitter ML bros will make guesses about the stack, but from here on out this is how OpenAI plays. This is utterly antithetical to Musk's original vision and the spirit of previous projects like Microscope.
Some paper.
On second thought: maybe scratch Singularity. Welcome to mature Cyberpunk. We don't have Edgerunners, though; best I can offer is a courageous Pepe with a magnet link. And we have damn vigorous Police States.
Sci-Fi writers are anarkiddies at heart; they couldn't bear conjuring such dreary vistas. Gibson's Istanbul was positively Utopian compared to reality.
* I've not slept for 30+ hours due to forced relocation to another of my shady landlord's apartments (ostensibly a precaution due to recent earthquakes) while also having caught some sort of brainfog-inducing flu/COVID; plus a few personal fiascos that are dumber still. Trouble comes in threes or what's the saying, eh. Not that I'm in need of sympathy, but it's actually a pity I've seen this historical moment as through dusty glass. Oh well.
I wish to register a prediction that this is not going to alter our lives in any substantial negative way, or result in a singularity-type event. From the outside view, past predictions of doom and utopia have a terrible track record, and that’s good enough for me. I’m too lazy (or worse) for the inside view and stopping it is impossible anyway, so there you go. Prepare to lose to the most boring heuristic, eggheads.
You wouldn't make a good trader with that heuristic. Sure, "nothing will happen" might be the most likely outcome. But if something does happen, it could be huge. In financial terms, you are "picking up pennies in front of steamrollers", making high-probability bets with small payoffs. Traders like that tend to get blown up by one trade gone wrong.
If AI "only" has a 10% chance of causing massive disruption in the next 5 years, that's surely worth talking about. If anything, it's underhyped. Most normies are still saying "AI will never be able to X" about things that AI can already do.
Normies' views are not at stake. This is a response to people here, the most extreme of whom view a catastrophic outcome as a virtual certainty and despair. If you think there's less than a 50% chance of major negative disruption, it's not about you. In standard picking-up-dollars-in-front-of-a-steamroller examples like LTCM, everyone usually understands that a low-probability event is in fact low probability, and I don't think that's the case here. One loss can wipe out lots of wins because the odds the bookie gives are (correctly) terrible. But if a player could have gotten even odds on his dollar for every doomist/utopian prediction in history while Kelly betting responsibly, he would be a very rich man.
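To make the Kelly point concrete, here's a back-of-the-envelope sketch. The 90% per-prediction fizzle probability and the 50 historical even-odds bets are made-up inputs for illustration, not data:

```python
import numpy as np

def kelly_fraction(p_win, odds=1.0):
    """Kelly-optimal fraction of bankroll for a bet paying `odds`-to-1."""
    return p_win - (1 - p_win) / odds

rng = np.random.default_rng(0)
p_win = 0.9          # assumed chance any given doom/utopia prediction fizzles
n_predictions = 50   # assumed number of even-odds bets offered over history
f = kelly_fraction(p_win)   # 0.8 of bankroll at even odds

bankroll = 1.0
for _ in range(n_predictions):
    won = rng.uniform() < p_win
    bankroll *= (1 + f) if won else (1 - f)

print(f"Kelly fraction: {f:.2f}, final bankroll multiple: {bankroll:,.0f}")
```

At even odds the Kelly fraction is 2p − 1, so a bettor whose "nothing ever happens" probability is genuinely well-founded compounds very fast; the strategy only looks bad if the "low-probability" catastrophe isn't actually low probability – which is exactly the disagreement here.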
'Nothing ever happens' is usually a pretty good maxim to live by; our reluctance to actually take it into account when making predictions is caused by an inherent aversion to that very fact: nothingness is a very boring prediction. Our entire beings scream out against it as much as they do against boredom, and with similarly good reason: inaction and nothingness can never produce anything of worth, whilst, on occasion, and especially when not overly concerned with the continued existence of the body they spring from, errors can be extremely productive. Trying does get you somewhere in a way that apathy simply can't; it's just that the failed triers aren't the ones to see or benefit from the few successes.
And so too here. The difference is that things do occasionally happen, and when viewed from the historical perspective, earning the epithet of 'thing' at all means that they're sufficiently of note to be memorialised. One of the great advances of the modern world is a plentiful enough catalogue of data that enables us to see the environing factors that did or did not contribute to the production of that noteworthy 'thing', as well as the consequences of the positive or negative predictions that anticipated the formation of that 'thing' in the chaotic and unordered times which always precede the creation of anything of lasting importance.
'Nothing ever happens' is a good, historically proven heuristic: most things come to nothing. But when something does happen, it has to happen with sufficient strength to overcome the imbalance of possibilities working against its happening at all, producing something far more impactful than anyone predicted. Anthropic bias works against our ordinary quantitative heuristics here, because we don't usually think about, keep a historical record of, or have any occasion to predict humanity-ending cataclysms: if you try to predict one, going on the 'historical record' will necessarily condemn you to failure. Personally, I'm quite scared. Fortunately, if it is to happen, it will only belie the promises of modern technologists and send us back to our former existentialist quandaries. Death is inevitable for us all, and a great many people were hoping to be the first to avoid that particular difficulty.
That's pretty much where I'm at. My guess is that we get some very impressive chatbots that might eventually replace some jobs, but the economy will still basically look like it does now. For example, maybe call centers will move to using mostly chatbots, with a few humans standing by to jump in when they get confused. But I don't see that as fundamentally different from installing self-checkout lines at the grocery store or getting rid of manual telephone operators.
I imagine people thought this way after the dot-com bubble, too...
What is your track record of predicting AI developments? So far I have consistently underestimated the speed and potency of the technology. So while I agree with you... I think there is a high chance I may be wrong.
Hmm, how would you define "substantial" here? I'm also intensely skeptical of a Singularity or other fundamental change in the human condition, but I find it very plausible that LLMs could destroy the pseudonymous internet as we know it, by turning it into a spambot hell devoid of useful information. (I'm imagining all sorts of silly stuff like people returning to handwritten letters as a signal of authenticity.) Life would move on, but I'd certainly mourn the loss of the modern internet, for all its faults.
I'd turn that into a bet if you're interested. Do you hold crypto? Something along the lines of "major debate topic in 2024" might do, but I'm open to suggestions.
I'd be willing to bet you $1M that AI won't destroy the world and all human life on it. If it doesn't, you can donate the winnings to a charity of your choice. And if it does, your call as to how you want to collect.
I'm obviously not interested in a wager I can't collect the winnings in.
...you should add 'inflation adjusted'.
A million dollars isn't what it used to be.
Back in the 1940s, a cutting-edge strategic bomber cost under a million dollars.
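For a rough sense of scale – the CPI index values below are ballpark assumptions, not official statistics:

```python
# Rough inflation adjustment using ballpark US CPI values (assumed, not official figures).
CPI_1945 = 18.0    # approximate annual CPI around 1945
CPI_2023 = 300.0   # approximate CPI in early 2023

def to_todays_dollars(amount_then, cpi_then, cpi_now):
    """Scale a historical dollar amount by the ratio of price indices."""
    return amount_then * cpi_now / cpi_then

print(f"$1M in 1945 is roughly ${to_todays_dollars(1_000_000, CPI_1945, CPI_2023):,.0f} today")
```

So a 1940s million is on the order of fifteen-plus million in today's money.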
I raise the stakes and will bet @aqouta $2 million (inflation adjusted) that AI won't destroy the world and all human life on it.
In fact, I will even bet that it doesn't destroy merely me.
Do I sound like someone who holds crypto? Major debate topics include the most irrelevant events imaginable. Two years out, it should be obvious what happened. An effortpost in the style of ilforte on how wrong you were will do. And if I see you in paradise/hell, I'll sing you a song/lull you with my screams.
I'll register a prediction that 2 years from now we are in the middle of an extremely similar debate. Maybe that means I'm on your side, but I think the overwhelming likelihood is that we'll be saying, "Yes, LLMs by themselves didn't create the singularity / fundamentally alter the world, but when you combine them with the latest revolutionary technique from OpenAI there's no doubt it will happen very soon."
Or alternatively, "LLMs are changing the world and it's just taking employment indicators etc. a while to catch up."
In any case, I doubt that the debate will feel settled in any way at that point.