This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
I was browsing through the news today and I found an interesting article about the current state of AI for corporate productivity.
MIT report: 95% of generative AI pilots at companies are failing
There seems to have been a feeling over the last few years that generative AI was going to gut white collar jobs the same way that offshoring gutted blue collar jobs in the 1980s and 90s, and that it was going to happen any day now.
If this study is trustworthy, the promise of AI appears to be less concrete and less imminent than many would hope or fear.
I've been thinking about why that might be, and I've reached three non-exclusive but somewhat unrelated thoughts.
The first is that the Gartner hype cycle is real. With almost every new technology, investors tend to think that every sigmoid curve is an exponential curve that will asymptotically approach infinity. Few actually are. Are we reaching the point where the practical gains available in each iteration of our current models are beginning to bottom out? I'm not deeply plugged in to the industry, nor the research, nor the subculture, but it seems like the substantive value increase per watt is rapidly diminishing. If that's true, and there aren't any efficiency improvements hiding around the next corner, it seems like we may be entering the trough of disillusionment soon.
The other thought that occurs to me is that people seem to be absolutely astounded by the capabilities of LLMs and similar technology.
Caveat: My own experience with LLMs is that it's like talking to a personable schizophrenic from a parallel earth, so take my ramblings with a grain of salt.
It almost seems like LLMs exist in an area similar to very early claims of humanoid automata, like the Mechanical Turk. They can do things that seem human, and as a result, we naturally and unconsciously ascribe other human capabilities to them while downplaying their limits. Eventually, the discrepancy grows too great - usually when somebody notices the cost.
On the third hand, maybe it is a good technology and 95% of companies just don't know how to use it?
Does anyone have any evidence that might lend weight to any of these thoughts, or discredit them?
Having poked at ChatGPT a bit, I'm not particularly surprised. If I think of a job it could potentially do that I understand, like graphic designer, ChatGPT (the only LLM/diffusion router I've personally tried) is about as good as a drunk college student, but much, much faster. There are some use cases for that -- the sort of project that's basically fake and nobody actually cares about or gets any value out of, but someone said it should be done. "I'll have GPT do that" basically means that it's considered meaningless drivel no matter who does it.
I suppose at some point it'll be able to make materials not only quickly, but also well -- but that day is not today.
"I would like an illustration for my fanfiction/roleplaying character. No, I'm not hiring an artist--I'm doing this for free, after all."
Sure. That's in the 'drunk college student, but way, way faster' realm. Nice to have, provides consumer surplus at free tier or $20/month, but probably not $200/month.
How about proofreading a long document? You can get LLMs to go through page by page and check for errors like sate instead of state, pubic instead of public, dependent vs dependant...
That has to be the most boring and obvious application. There are heaps more.
Or how about making cartoons? These aren't too bad: https://x.com/emollick/status/1920700991298572682
Word processors already look for typos that are actual words, but don't make sense in the current context, without applying AI. More and better autocorrect is about in line with the original thesis -- they're good at spreadsheet scale tasks, which is useful but not a huge amount of a given person's job. I'm not completely sure what professional editors do, but I think it's probably a bit deeper than looking for typos.
Perhaps I was too flippant with the 'There are heaps more' applications for AI. I get this newsletter from Alexander Kruel almost daily where he gives a tonne of links about what people are using AI for. For example:
Interviewing people in the Philippines (better than humans apparently). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5395709
62% of coders in this survey are using it: https://survey.stackoverflow.co/2024/ai
76% of doctors are using it: https://www.fiercehealthcare.com/special-reports/some-doctors-are-using-public-generative-ai-tools-chatgpt-clinical-decisions-it
It's thought that the US govt might've decided what tariffs to impose via AI: https://www.newsweek.com/donald-trump-tariffs-chatgpt-2055203
It goes on and on and on...
I personally used it for proofreading and indeed it can't do all of an editor's job. Editors do lots of highly visual tasks managing how words fit on the page in ways that AI isn't so good at. But it can do some of an editor's job. It can do much of a cartoonist's job (Ben Garrison is in the clear for now with his ultra-wordy cartoons?). I think it's more than fast drunk college student and more than meaningless drivel.
Spellcheckers and grammar checkers have been a thing for ages in word processors, without throwing massive amounts of compute at it.
And none of them will fix errors like 'pubic law'. It won't notice when 'losses of profits' should be 'losses or profits'. It won't call out a date of 20008.
It does work, but I think to do proper proofreading on an important document, you're going to need to supervise it, feed it your house style etc., and then check all its suggestions, or have someone competent who understands the subject matter do the same. Then you'll probably need to feed all the changes manually into InDesign (an LLM might be integrated into the Adobe suite by now, to be fair; I haven't used it lately).
By the time you've done that, maybe you'll have saved some time but I don't see it as that big a deal.
If you put the sentence "Ohio law states that you may not loiter less than three get away from a pubic building." into Google docs, it will correct "get" to "feet", and "pubic" to "public". This has been the case for around 15 years.
OK, how about losses or profits? Or 20008? I cited pubic law because it's funny; the other two are actually real examples from what I was getting it to do.
I highly doubt Google docs could do tasks that require contextual understanding without some kind of LLM.
Sure, it is incrementally better than what we already have. The problem I'm trying to illuminate is whether the compute is worth the value provided. It is hardly taking away a job from anyone doing the proofreading; it is an improved version of what we already have.
Software spell check (on early computers at least) required a cute algorithm --- the Bloom filter --- to work reasonably efficiently. Actually checking each typed word against the whole dictionary wasn't (and likely isn't) practical, but a statistical guess of correctness is good enough.
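The idea can be sketched in a few lines of Python. This is a minimal illustration of the technique, not any real spell checker's implementation; the bit-array size, hash count, and SHA-256-based probing are all choices made up for the example.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash probes into an m-bit array.
    False positives are possible; false negatives are not."""

    def __init__(self, m_bits=1 << 16, k_hashes=4):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, word):
        # Derive k bit positions from one digest (an illustrative
        # scheme, not what early software actually did).
        digest = hashlib.sha256(word.encode()).digest()
        for i in range(self.k):
            chunk = digest[i * 4:(i + 1) * 4]
            yield int.from_bytes(chunk, "big") % self.m

    def add(self, word):
        for pos in self._positions(word):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, word):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(word))

# The bit array is far smaller than the dictionary it summarizes --
# which was the whole point for memory-starved early machines.
bf = BloomFilter()
for w in ["state", "public", "dependent"]:
    bf.add(w)
print(bf.might_contain("state"))  # True -- no false negatives
print(bf.might_contain("sate"))   # almost certainly False
```

A word flagged as "maybe present" could occasionally be a false positive, so a misspelling might slip through; a correctly spelled word in the dictionary is never flagged as wrong.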
Sir, you were in a coma and woke up in the future.
Checking the inclusion of an element in a hashtable is a constant-time operation, or at least constant-ish -- you still need to compare the elements so it's gonna be proportional to the size of the largest one. So the limiting factor here is memory. I suspect keeping a dictionary resident in RAM on a home PC shouldn't have been a big deal for at least 25 years if not more.
I think there should be an even longer period where it would be fine to keep the dictionary on disk and access it for every typed word, because no human could plausibly type fast enough to outpace the throughput of random reads from a hard disk. No idea how long into the past that era would stretch.
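A back-of-the-envelope check of that claim, with assumed (not measured) figures for disk latency and typing speed:

```python
# Can a hard disk's random reads keep up with human typing?
# Assumed figures for illustration: ~10 ms average seek on a
# spinning disk, and a fast typist at ~600 characters per minute.
seek_time_s = 0.010                    # assumed random-read latency
lookups_per_second = 1 / seek_time_s   # ~100 dictionary probes/sec

typing_cpm = 600                       # assumed fast typist
avg_word_len = 6                       # chars, including the space
words_per_second = typing_cpm / 60 / avg_word_len  # ~1.7 words/sec

# One disk probe per typed word leaves plenty of headroom.
headroom = lookups_per_second / words_per_second
print(f"{headroom:.0f}x headroom")  # roughly 60x
```

Even with these rough numbers, an on-disk dictionary lookup per word is an order of magnitude faster than anyone types, which supports the idea that the in-RAM requirement only mattered on much older hardware.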
I still get surprised at how fast computers can do basic tasks.
A few weeks ago, I had to compare some entries in a .csv list to the filenames that were buried a few layers deep in some subfolders. It went through the thousands of items in an instant. I didn't even bother saving the output because I could regenerate it as fast as double-clicking a file (or faster if it has to do something silly like opening Microsoft Word).
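That kind of task is a few lines of Python. A sketch under assumed details (the column name and folder layout are made up; the original script isn't shown):

```python
import csv
from pathlib import Path

def missing_files(csv_path, name_column, root_dir):
    """Return csv entries with no matching file anywhere under root_dir."""
    # Collect every filename under root_dir, however deeply nested.
    on_disk = {p.name for p in Path(root_dir).rglob("*") if p.is_file()}
    with open(csv_path, newline="") as f:
        wanted = [row[name_column] for row in csv.DictReader(f)]
    # Set membership makes each check O(1), which is why thousands
    # of entries finish "in an instant".
    return [name for name in wanted if name not in on_disk]
```

The speed comes from building the filename set once and then doing constant-time lookups, rather than rescanning the folder tree for each csv row.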
Heh, I'm old enough to have owned a pocket electronic spell checker at one point. The hash table seems the right way to do it these days, but it will take up more memory (640K shakes fist at cloud). And sometimes you do want to scan faster than the user types, like opening a new large file.