why is labor so expensive?
Because in a prosperous society people want a lot of money for their labor: they have plenty of other opportunities, and everything costs more.
Why is there the need to exploit interns?
Begging the question. You assume they did it because they needed to, yet somehow most other companies don't do this. Why did those other companies not need to?
The only body who can actually change laws is Congress.
Yes, and said body uses agencies and interns to provide them information, because they are old men whose major skill is campaigning. Hell, Republicans actively campaign on there being too many laws. An agency designed to find laws that are obsolete and clean up spaghetti laws sounds exactly in line with what they claim to want.
Not only did the bad things social conservatives predicted about gay marriage come to pass, a lot of the stuff social conservatives were jeered at (truly or falsely) for predicting about gay marriage came to pass. The gay marriage pie chart meme is over 15 years old now; since then the terrorists won in Afghanistan, schools have been teaching kids how to have gay sex, various plagues (monkeypox, COVID) have erupted (though no locusts or frogs), and we've got a war in Ukraine (OK this is weak sauce because the meme specified WWIII).
No comprehensive, up-to-date source exists.
I'm a social conservative, and the new orthodox faith of the One, True, Catholic Church of Trans Rights is not convincing me to shift on that. All the former gay rights activism that successfully sold the line "if you're not gay, this will have no effect on your life" to the mainstream and the trans activism that piggy-backed on this ("why are those bigoted conservatives so obsessed with bathrooms? no trans person has ever said anything about bathrooms, it's all them!") couldn't maintain the facade.
One of the things that has struck me about the trans backlash, which I think is real, has been its unwillingness to extend the slightest charity to social conservatives qua social conservatives. To put it bluntly and perhaps uncharitably: if the social conservatives warned you that this would happen, and now it has happened, perhaps you ought to consider that there was something to their perspective.
So, for instance, I see worries that opposing such-and-such trans issues might overspill into opposing same-sex marriage. But social conservatives at the time said clearly that one of the issues with same-sex marriage was that it would undermine the gender binary. They were right on the facts. They have, in fact, regularly been right on the facts. So now that the thing they warned would happen as a result of gay marriage has happened... shouldn't that make their judgement of gay marriage more credible, not less?
The thing is, the push for gay marriage included a number of predictive arguments that have since proven to be incorrect. "Gay marriage will have no effect on your life" was untrue. "Gay marriage is not a stepping stone to more radical activism" was untrue. "The normalisation and acceptance of homosexuality will not lead more people to identify as homosexual" (deployed in gotchas like "gay marriage won't make you turn gay, why do you care?") was untrue. I suppose you could quibble causation versus correlation, but the course of events seems pretty intuitive. Yet I still see this quite determined hostility to re-evaluating.
More recent models, o1 onwards, have had further training, such as reinforcement learning from verifiable rewards, with the explicit intent of making them more agentic while also making them more rigorous.
Being agents doesn't come naturally to LLMs; it has to be beaten into them, like training a cat to fetch or a human to enjoy small talk. Yet it can be beaten into them.
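To make "verifiable rewards" concrete, here is a minimal Python sketch of the idea; the `generate` function is a hypothetical placeholder for a real model call, and this is only an illustration, not any lab's actual pipeline. The point is that the reward comes from a programmatic check rather than human judgement.

```python
# Minimal sketch of a verifiable-reward check.
# `generate` is a hypothetical placeholder for a real model call.
import re

def extract_final_answer(completion: str) -> str:
    """Pull the last number out of a completion as the 'final answer'."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return numbers[-1] if numbers else ""

def verifiable_reward(completion: str, ground_truth: str) -> float:
    """Reward 1.0 if the checkable answer matches, else 0.0 -- no human rater needed."""
    return 1.0 if extract_final_answer(completion) == ground_truth else 0.0

def generate(prompt: str) -> str:
    # Placeholder: a real setup would sample this from the model being trained.
    return "The total is 12."

prompt = "What is 7 + 5?"
reward = verifiable_reward(generate(prompt), ground_truth="12")
print(reward)  # 1.0 -- this scalar is what the RL update would optimise
```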
I'm not generally an AI dismisser, but this piece is worth pausing on. In my experience, ChatGPT has become consistently worse on this front: it extrapolates ridiculous fluff and guesses at what might be desired in an 'active', agentic way. The more it tries to be 'actively helpful', the more obviously and woefully it fails at predicting the next token / the next step.
It was at its worst with that one rolled-back version, but it's still bad.
I've talked about this before (not at great length, but I've mentioned it): cheap labour is cheap for a reason. Interns are a vast improvement over people who'll work for $15/hr in a corporate setting (and BTW, the tenth percentile wage is just above $14/hr; American labour is just really expensive).
There's plenty of working-class people who happily make $15-$22/hr, and there's a reason they don't eventually get better jobs. Corporate interns generally know things like 'how to keep themselves on track to hit deadlines without constant supervision' and 'how to follow directions correctly without asking for fifteen clarifications every sentence'. These may not be specific skills, but working in a white-collar office environment requires abilities like this. Yes, requiring a college degree for this work is excessive; that's why students (who don't yet have one) are doing it as an internship. No, there's not really a solution here (go ahead, name it; no, things like 'flying pigs will provide character references' and 'we'll just kick everyone out of high school who isn't college material so a diploma counts for the same thing' don't count).
Perhaps it would've been more accurate of me to say "This is part of the reason why LLMs have such difficulty counting..."
But even if you configure your model to treat each individual character as its own token, it is still going to struggle with basic mathematical operations.
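For a concrete look at the tokenization point, here is a minimal sketch, assuming the `tiktoken` library is installed; it just prints the pieces a standard BPE tokenizer produces, which is part of why counting individual letters or digits is awkward for an LLM.

```python
# Minimal sketch (assumes the `tiktoken` library is available).
# Shows how a BPE tokenizer chunks text into multi-character pieces,
# so the model never "sees" individual letters or digits by default.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for text in ["strawberry", "1234567890"]:
    token_ids = enc.encode(text)
    pieces = [enc.decode([tid]) for tid in token_ids]
    print(f"{text!r} -> {pieces}")
```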
I'm curious what you mean by 'low-level', here. I've heard Obsidian described many ways, but I don't think I've heard 'low-level' before.
Good post. Interesting to see how your perspective intersects with the other critics of LLMs, like Gary Marcus’ consistently effective methods for getting the systems to spit out absurd output.
In my own experience, the actual current value of neural network systems (and thus LLMs) is fuzzy UIs or APIs. Traditional software relies on static algorithms that expect consistent and limited data which can be transformed in highly predictable ways. They don’t handle rougher data very well. LLMs, however, can make a stab at analyzing arbitrary human input and matching it to statistically likely output. It’s thus useful for querying for things where you don’t already know the keywords - like, say, asking which combination of shell utilities will perform as you desire. As people get more used to LLMs, I predict we will see them tuned more to specialized use cases in UI and less to “general” text, and suddenly become quite profitable for a focused little industry.
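A toy sketch of that "fuzzy UI in front of rigid software" idea, where `ask_llm` is a hypothetical placeholder for whatever model call you use: the LLM's only job is to map rough human phrasing onto output the rigid layer can still validate before anything runs.

```python
# Toy sketch of an LLM as a "fuzzy UI" in front of rigid software.
# `ask_llm` is a hypothetical placeholder, not a real API.
ALLOWED_COMMANDS = {"sort", "uniq", "grep", "wc"}

def ask_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call a model here.
    return "sort | uniq -c | sort -rn"

def suggest_pipeline(user_request: str) -> str:
    suggestion = ask_llm(
        f"Suggest a shell pipeline for: {user_request}. "
        f"Use only: {', '.join(sorted(ALLOWED_COMMANDS))}."
    )
    # The rigid layer still validates the fuzzy output before anything runs.
    used = {part.strip().split()[0] for part in suggestion.split("|")}
    if not used <= ALLOWED_COMMANDS:
        raise ValueError(f"Suggestion used unapproved tools: {used - ALLOWED_COMMANDS}")
    return suggestion

print(suggest_pipeline("count how often each line appears, most frequent first"))
```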
LLMs will be useful as a sort of image recognition for text. Image recognition is useful! But it is not especially intelligent.
Probably the morally correct number. But what is the militarily correct number? That is, the number that Israelis should expect to be alive after a successful campaign in which a functional, strongly anti-terrorism regime rules that territory.
I suspect the number approaches zero.