This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
OpenAI To Become a For-Profit Company
You'll notice that the link is to a Hacker News thread. That was intentional: I think some of the points raised there get at issues deeper than "hurr durr, Elon got burnt" or whatever.
Some points to consider:
It is hard not to see this as a deliberate business-model hack. Start as a research-oriented non-profit so you can more easily acquire data, perhaps investors/funders, and a more favorable public image. Sam Altman spent a bunch of time on Capitol Hill last year and seemed to move with greater ease because of the whole "benefit to humanity" angle. Then, once you have acquired a bunch of market share this way, flip the money switch on. There are also a bunch of tax incentives for non-profits that make them easier to run in the early startup phase.
I think this can be seen as a milestone for VC hype. The trope for VC investors is that they see every investment as "changing the world," but it's mostly a weird status-signaling mechanism. In reality, they care about the money, but also about looking altruistic or, at least, oriented towards vague concepts of "change for the better." OpenAI was literally pitched as addressing an existential question for humanity. I guess they fixed AI alignment in the past week or something, and now it's time, again, to flip the money switch. How much of VC is now totally divorced from real business fundamentals and is only about weird idea trading? Sure, it's always been like that to some extent, but I feel like the whole VC ecosystem is turning into a battle of posts on the LessWrong forums.
How much of this is FTX-style nonsense, but without the outright fraud? Altman gives me similar vibes to SBF, with a little less bad-hygiene-autism. He probably smells nice, but is still weird as fuck. We know he was fired and rehired at OpenAI. A bunch (all?) of the cofounders have jumped ship recently. I don't necessarily see Enron/FTX/Theranos levels of plain lying, but how much of this is a venture funding house of cards that ends with a 99% loss and a partial IP sale to Google or something?
For better or worse (probably worse), these are the people to whom we have entrusted the future of our civilization and likely our species. Nobody cares to stop them or to challenge them in any serious way (even Musk has decided as of late that if he can’t stop them, he’ll join them).
The only thing for it is to hope that they fail spectacularly in a limited way that kills fewer than hundreds of millions of people, and which results in some new oversight, before everything goes even more spectacularly wrong. Oh well.
The only danger AI, in its current implementation, poses is the risk that morons will mistake it for actually being useful and rely on the bullshit it spits out. Yes, it's impressive, but only insofar as it can summarize information that's otherwise easily available. One of the reasons my Pittsburgh posts have been taking as long as they have is that I'll go down a rabbit hole about an ongoing news story from 25 years ago that I can't quite remember the details of, and spend a while digging up old newspaper articles so I have my facts straight and reach the appropriate conclusions. I initially thought that AI would help me with this, since all the relevant information is on the internet and discoverable with some effort, but everything it gave me was either too vague to be useful or factually incorrect. If it can't summarize newspaper articles that don't have associated Wikipedia entries, I'm not too worried about it. I'd have much better luck going to the Pennsylvania Room at the Carnegie Library and asking the reference librarian for the envelope with the categorized newspaper clippings that they still collect for this purpose.
I beg you to consider the possibility that progress in AI development will continue. The doomers are worried about future models, not current ones.
I have considered it, but that's just science fiction at this point. I'm only going to evaluate the implications of OpenAI becoming a for-profit company based on products they actually have, which, as far as I'm aware, boil down to two things: LLMs and image generators. The company touts the ability of its LLMs based on arbitrary benchmarks that say nothing about their ability to solve real-world problems; as a lawyer, nothing I do in my everyday life remotely resembles answering bar exam questions. Every time I've asked AI to do something where I'm not just fooling around and want an answer that won't involve a ton of legwork, it's come up woefully short, and this hasn't changed despite so-called "revolutionary" advancements. For example, I was trying to get a ballpark estimate of a statistic for which there was no explicitly published data, which would mean looking at related data, making certain assumptions, and applying a statistical model to interpolate what I was looking for. All I got was a refusal, on the grounds that the result would suffer from inaccuracies. After fighting with it, I finally got it to spit out a number, but it didn't tell me how it arrived at that number. This is the kind of thing that AI should be able to do, but it doesn't. If the data I was looking for had been collected and published, I'm confident it would have given it to me, but I'm not that impressed by technology that can spit out numbers I could have easily looked up on my own.
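To make concrete the kind of estimate I mean, here's a minimal sketch of the approach I wanted the AI to take and to show its work on. Everything in it is hypothetical (invented numbers, and the assumption that the two series are linearly related); it's an illustration of the method, not my actual query:

```python
# Hypothetical illustration: estimate an unpublished statistic from a
# related, fully published series by fitting a simple linear model on the
# years where both were published. All numbers here are invented.
import numpy as np

# Values from years where both series have published figures.
related = np.array([120.0, 135.0, 151.0, 166.0])  # published every year
target = np.array([40.0, 46.0, 52.0, 57.0])       # published only sporadically

# Assumption: the two series are roughly linearly related over this period.
slope, intercept = np.polyfit(related, target, deg=1)

# Interpolate the target for a year where only the related series exists.
related_2012 = 158.0
estimate = slope * related_2012 + intercept
print(f"Ballpark estimate for 2012: {estimate:.1f}")
```

The point isn't that this model is right; it's that stating the assumptions and the arithmetic is exactly the "show your work" part the AI wouldn't do.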
The whole premise behind science fiction is that it might actually happen as technology advances. Space travel and colonizing other planets are physically possible, and will likely happen sometime in the next million years if we don't all blow up first. The models are now much better at both writing and college mathematics than the average human. They're not there yet, but they're clearly advancing, and I'm not sure how you can think it's not plausible that they pass us in the next hundred or so years.
It seems like you have not, in fact, considered the possibility of models improving. Is this the meme where some people literally can't evaluate hypotheticals? Again, doomers are worried about future, better models. What would you be worried about if you found out that models had been made that can do your job, and all other jobs, better than you?
I certainly have the ability to evaluate hypotheticals. Where I get off the train is when people treat these hypotheticals as though they're short-term inevitabilities. You can take any technology you want and talk about how improvements mean we'll have some kind of society-disrupting change in the next few decades that we have to prepare for, but that doesn't mean it will happen, and it doesn't mean we should invest significant resources into dealing with the hypothetical disruption caused by non-existent technology. The best, most recent example is self-driving cars. In 2016 it seemed like we were tantalizingly close to a world where self-driving cars were commonplace. I remember people arguing that young children probably wouldn't ever have driver's licenses because autonomous vehicles would completely dominate the roads by the time they were old enough to drive. Now here we are, almost a decade later, and that reality seems further away than it did in 2016. The promised improvements never came, high-profile crashes sapped consumer confidence, and the big players either pulled out of the market or scaled back considerably. Eight years later we have yet to see a single consumer product that promises a fully autonomous experience, to the point where you can sleep or read the paper while driving. There are a few hire-car services that offer autonomous options, but these are almost novelties at this point; their limitations are well documented, and they're only used by people who don't actually care about reaching their destination.
In 2015 there was a local primary candidate running on a platform of putting rules in place to ease the transition to autonomous heavy trucking. These days, it would seem absurd for a politician to invest so much energy in such a concern. Yes, you have to consider hypotheticals, but those come with any new piece of technology. The problem I have is when, with every incremental advancement, people treat these hypotheticals as though they were inevitabilities.
I'm a lawyer, and people here have repeatedly said that LLMs will make my job obsolete within the next few years. I doubt these people have any idea what lawyers actually do, because I can't think of a single one of my tasks that AI could take over.
You can order a self-driving taxi in SF right now, though.
I agree it's not a foregone conclusion. I guess I'm hoping you'll either give an argument for why you think it's unlikely, even though tens of billions of dollars and lots of top talent are being poured into it, or actually consider the hypothetical.
Even if it worked??
So they're here? Baidu has been producing and selling robotaxis for years now; they don't even have a steering wheel. People were even complaining the other day when they got into a traffic jam (some passengers wanting to leave and others arriving).
They've sold millions of rides, they clearly deliver people to their destinations.
Drafting contracts? Translating legal text into human-readable format? There are dozens of companies selling this stuff. Legal work is like writing in that it's enormously diverse: there are many writers who are hard to replace with machinery, and others who have already lost their jobs.