
Culture War Roundup for the week of April 10, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Twitter: BBC objects to 'government funded media' label

"The BBC is, and always has been, independent. We are funded by the British public through the licence fee," it said.

When BBC News highlighted to the Twitter boss that the corporation was licence fee-funded, Mr Musk responded in an email, asking: "Is the Twitter label accurate?"

The level of the £159 ($197) annual licence fee - which is required by law to watch live TV broadcasts or live streaming in the UK - is set by the government, but paid for by individual UK households.

Collection of the licence fee and enforcement of non-payment are carried out by private companies contracted by the corporation, not the UK government.

TV licence evasion itself is not an imprisonable offence. However, non-payment of a fine, following a criminal conviction, could lead to a risk of imprisonment - "a last resort" after other methods of enforcement have failed.

I'm sorry, I don't really see the point of the complaints. Or rather: I see a point, but it's not interesting or flattering.

The BBC licence is mandated by the government.

The fact that artists and defenders of the BBC itself argue that attempts to remove the 'fee' will harm programming, or are a deliberate attempt to cow the BBC, also militates towards the conclusion that the worries implied by "state-affiliated" or "state-funded" apply - though I grant that it is a more refined arrangement than direct payment.

Shadow Culture Secretary Lucy Powell said: “The cat is out of the bag. The Prime Minister thinks those reporting on his rule breaking should pay consequences, whilst he gets off free.

"The Prime Minister and the Culture Secretary seem hell-bent on attacking this great British institution because they don’t like its journalism.”

So...the government not imposing a "fee" is an attack on an institution's functioning but we're supposed to act like it isn't a tax or the BBC isn't government funded?

So what reason does anyone (not benefiting from media branding) have to take any of this seriously? It seems to me that the real basis for complaint here is that the BBC doesn't want to even theoretically be in the same bucket as Al Jazeera and RT. But it is precisely the media's fault that terms like "state media" are so badly received. Just as it is the media that marks certain dictators as "reformers" and others "strongmen" with "regimes" to aid its attempts to manufacture consent. They constructed this complex of Words That Hint At Things But Can't Be Called Out Cause They're Technically Correct.

So, because the media doesn't want to be marked by its own taboo words and bad branding, everyone is supposed to ignore the correct labeling of an entity funded by a government-mandated licence - one whose supporters claim it would fail without that mandate?

I thought the NPR label was a bit tenuous (assuming the 10% government funding stat I’ve seen is accurate), but the BBC is absolutely state-run media.

The whole “editorial independence” thing is a joke. Here’s a test: could the BBC run a piece calling the Queen a cunt without government consequences? If they couldn’t, then they aren’t editorially independent.

The whole “editorial independence” thing is a joke. Here’s a test: could the BBC run a piece calling the Queen a cunt without government consequences? If they couldn’t, then they aren’t editorially independent.

The point is that because of Ofcom no one in the UK can run a piece calling the Queen a cunt, so the BBC is not in any way more or less restricted than other media. It is not a court-appointed jester. And the charming lady didn't deserve it, unlike some of her sons and one of her grandsons.

This just means that there is no media outlet with “editorial independence” in the UK. Government funding + no editorial independence = state-run media.

And the charming lady didn’t deserve it

Of course not, just like Paramount Leader doesn’t deserve to be compared to Winnie The Pooh after doubling China’s GDP. Certainly we can agree that such common-sense regulations are necessary for the proper functioning of society.

Government funding + no editorial independence = state-run media.

Defining state-run media this way implies that the government funding is connected to the lack of editorial independence, which is false here.

The government funding implies a “don’t piss off the government too much” rule, at least more so than privately funded media orgs.

But the rule applies equally to the BBC and to privately funded media orgs. There is no "more so than" here.

And the charming lady didn't deserve it

Queen Elizabeth was a direct inheritor of stolen wealth who personally claimed fame, importance, and power for the sole reason that her ancestors were some of the worst war criminals who ever lived. For the reason she didn't renounce it all as soon as she could, she definitely deserves every insult.

The point is that because of ofcom no one in UK can run a piece calling the Queen a cunt

This makes matters even worse. I think we have an obligation to insult world leaders this morally compromised who protect themselves from criticism through censorious laws and policies. In that spirit, Queen Elizabeth is a dead, inbred nepo-baby under the delusion that her family's legacy of war crimes was something to be proud of, and (example below) Xi Jinping is an incompetent Winnie-the-Pooh look-alike running the world's second-largest economy into the ground. Don't be afraid to say so!


Queen Elizabeth was a direct inheritor of stolen wealth who personally claimed fame, importance, and power for the sole reason that her ancestors were some of the worst war criminals who ever lived.

Worst war criminals with better technology, manpower, institutions and education. Also it was conquered fair and square, not stolen. Vae Victis.

That's an unusual view. Might makes right is really not generally accepted as a good basis for morality. I guess it's best to link that instead of me badly summarizing a bunch of well-known arguments.

Edit: I think I understand better - you're saying that morality for countries/civilizations interacting is very different than that for people. I agree that this is probably true, but it would still be nice to justify why this particular difference exists. I think all the logic for might makes right being wrong for people transfers over? Most simply, it's better if societies could focus their energies on productive endeavors instead of zero-sum building of war-making potential so they can conquer and avoid being conquered.

Might makes right is really not generally accepted as a good basis for morality.

Sure it is; that's why human beings seek might in the first place.

Standard operating procedure for the mighty is to claim that business and honor march hand in hand; specifically, a realist practice that ceases to be profitable for the mightiest becomes a useful casus belli (military or financial) against the dishonorable opponent who might still otherwise be in a position to extract some benefit from it.

For example, a country whose economy means men and women aren't equally productive, and which only grants rights to the more capable gender in aggregate, will find itself labelled "immoral" by a mightier country seeking to hinder its development until it obeys.

Might does not make right between people only because there are legal systems in place that codify rights on bases other than might. Such systems do not exist between states, or are ineffective.

If the word "fair" applies to conquering in your book, then you might as well own stealing as well. After all, conquering is merely stealing while having enough manpower to do so openly.

Not quite. A state is a monopoly of violence over a certain territory. That's it. If a state is conquered by another state, then it had no business existing in the first place, so all property rights guaranteed by the conquered state are void.

her ancestors were some of the worst war criminals who ever lived.

There's no need to insult Shaka Zulu, Julius Caesar, Temüjin (aka Chinggis Khan), Attila, or Timur like that.

Yes, I know, and maybe about 50 others? Once you reach a certain level I'm not sure distinguishing who was absolutely worst is relevant and probably pretty impossible---how do you weight how many were affected, how brutal they were to each individual, how they were relative to others at the time, what they personally did vs. what others did on their orders, etc. This is why I said "some of the".

The British Empire was pretty exceptional in its scale and recency however.

Once you reach a certain level I'm not sure distinguishing who was absolutely worst is relevant and probably pretty impossible

The British, however, did not reach that level.

Given that all empires everywhere and forever have done evil, I'd sooner distinguish them by what good they've done. From my American perspective, the British Empire might be the all-time winner for its role in creating the amazingly prosperous anglophone nations and for their role in preserving historical treasures. When the Ottomans were busy destroying the Parthenon with carelessly stored gunpowder, the British were spending their treasure to preserve what was left of it. Sure, I can find plenty of bad things that the British Empire did, but the world is better for its existence.

I don't think the entirety of the British Empire can be laid at the feet of titular monarchs. Certainly George VI had no control over Churchill's actions with regard to the Bengal famine. And the conquest of India wasn't done by royal command, but instead by a corporation originally chartered just to trade. The British monarchs certainly didn't exercise the personal control that the people I named did.

For the reason she didn't renounce it all as soon as she could, she definitely deserves every insult. . . . I think we have an obligation to insult world leaders this morally compromised who protect themselves from criticism through censorious laws and policies.

It is permissible to argue this here.

In that spirit, Queen Elizabeth is...

What is not permissible here is actually delivering the invective. It's the epitome of pure heat, no light. You can argue that it should be permitted; it is not permitted here.

They would get Ofcom consequences, as would any media organization trying to do that in the UK. They could run a story that the Queen was corrupt (though they would need to have sourced that impeccably). They could probably run a story saying the prime minister is incompetent, for example.

A month ago the BBC suspended their most famous personality for tweeting something critical of the government's asylum policies.

That is because one of the "deals" for the BBC not to be seen as partisan is that its big personalities and newscasters should try not to be partisan on areas which are Labour vs Conservative. Whether they would have suspended him for being equally critical of the opposition's positions is the question.

...and then unsuspended him, with an understanding that he was allowed to do it again if he wanted to. Also, the criticism involved calling the government Nazis, which some might see as unprofessional, especially for a sports commentator.

That's a bad test of independence. If most major news organizations ran such a piece the people responsible would be removed or disciplined just for being unprofessional.

Elon Musk can say whatever he wants on Twitter. There is nobody at the BBC who can say whatever they want, except the government.

No employee at any news organization can say 'whatever they want', they're subject to standards of relevance and professionalism. It isn't a good illustration of how the British Government influences the BBC to say that writers/pundits can't call the queen a cunt, because neither could writers at almost all small time American newspapers. Not because the British Government secretly controls them, but because they have voluntarily adopted professional standards that preclude it.

It’s not that each individual employee couldn’t do it. It’s that nobody at all could do it, not even in principle. Jeff Bezos could call Obama the N-word on the front page of the Washington Post tomorrow. Who could do that at the BBC? What group of people could do that? I suspect the only group that could is parliament.

If your point is that all media, even privately funded outlets, are like this in the UK, then I concede. In fact all UK outlets should have “government censored media” labels which link to an outline of the relevant laws and regulations.

I'm not trying to make a statement about the current state of British media, just calling out a poor argument. That the BBC doesn't do something (call the queen a cunt) that almost every media organization, even those not in the UK, voluntarily refrains from doing, does not provide much evidence of the level of editorial control that parliament exercises.

Any publicly traded media company would also have no individual who could call the queen a cunt without being punished by the board. If the entire board decided to call the queen a cunt on the front page, they could probably be sued by shareholders for damaging the company. Theoretically you could coordinate all the shareholders to approve, but that's implausible, and I'm not sure why that should be a meaningful distinction between companies.

The point is that the 'ability to say whatever you want' doesn't practically exist at most major news companies, and if you want to say the BBC as 'state-owned media' is categorically different from a publicly traded American news company in a significant way, you need a better example.

Elon Musk can say whatever he wants on Twitter.

Can he?

"Settlement Requires Musk to Step Down as Tesla’s Chairman; Tesla to Appoint Additional Independent Directors; Tesla and Musk Agree to Pay $40 Million in Penalties"

“At the same time, however, even Musk concedes that his free speech rights do not permit him to engage in speech that is or could ‘be considered fraudulent or otherwise violative of the securities laws.'”

He won the private securities fraud action but he still had to agree to a bunch of SEC requirements in his settlement right?

Literally, intentionally false statements can have consequences. Opinions should never be banned in a free society.

He doesn’t seem too scared of financial regulators at the moment. “Dogecoin jumps more than 30% after Musk changes Twitter logo to image of shiba inu”

So he can't say whatever he wants without suffering consequences?

I would not contest that Musk has a wider swathe of things he can say without consequence, than a BBC editor but it isn't "whatever he wants".

Let me rephrase. He can say whatever he thinks. If he has an opinion, he can tweet it. If he has an idea, he can tweet it.

I wouldn't write that either, but that's not because the government pays me to do so.

That's not a good test. A good test would be: if the BBC did something the government really hated, could the government pull their financing? It doesn't have to be something obscene or revolting, just something that is very inconvenient for the government. If they could, then they exist at the sufferance of the government, and their "independence" is just a leash that is long enough for them not to feel it, but it's still there. If they could not, or it would be very hard (e.g. on the level of passing a Constitutional amendment in the US), then there's real independence, even if technically financed by governmental decree.

In theory, any government could retaliate against a sufficiently annoying journalist - even in countries with well established norms around the freedom of the press, there are many informal ways that a journalist might feel the displeasure of the government. But of course, there's a big difference between being arrested or censored for telling the wrong story, and simply facing a social or career penalty or losing access.

Personally, it's my observation that it's quite the opposite. In many circles one now faces a penalty for siding with the government, and journalists are taught to see themselves as agents of disruption, as adversaries to established power.

any government could retaliate against a sufficiently annoying journalist

Well, yes, the question is how easy it is to do that and what would be the consequences. When we see something like this:

The email, sent to correspondents at just after 6pm on the day lockdown was announced, was labelled: IMPORTANT ADVISORY – language re broadcast. “Hi all – D st are asking if we can avoid the word ‘lockdown’. I’m told the message will be that they want to keep pushing people to stay at home but they are not talking about enforcement at the moment,” it said.

can we still claim they are "independent"? If the government can tell (successfully) the journalists what words to use, is it not government-controlled speech?

In many circles one now faces a penalty for siding with the government, and journalists are taught to see themselves as agents of disruption

What circles are those? What I am seeing more and more nowadays is that the journalists are taught to always be the agents of The Swamp, and if The Swamp, by some freak accident of nature, is temporarily not controlling the government, then disruption it is, until things return to normal. Once they do, the journalists go back to serving as a branch of the government.

NPR receives half a billion per year from the federal government.

If that sum is irrelevant, they should stop taking it and become genuinely independent from government funding to avoid the appearance of a conflict of interest over an irrelevant sum of money. If it is relevant, then they are a government-funded media outlet.

NPR's entire total revenue is under $300 million: https://www.influencewatch.org/non-profit/national-public-radio-npr/

Did you really assume that every dollar the CPB spends goes to NPR?

But it is precisely the media's fault that terms like "state media" are so badly received. Just as it is the media that marks certain dictators as "reformers" and others "strongmen" with "regimes" to aid its attempts to manufacture consent. They constructed this complex of Words That Hint At Things.

So, because the media doesn't want to be marked by its own taboo words and bad branding, everyone is supposed to ignore the correct labeling of an entity funded by a government-mandated licence - one whose supporters claim it would fail without that mandate?

Is it the media's fault that terms like "state media" are badly received? Or is it the fact that a lot of state run media historically and observably tends to be biased towards the state and people can recognize that? The media doesn't have to tell me that a Ukrainian government-run news organization and a Russian government-run news organization both need to be taken with a huge pinch of salt when reporting on the Ukraine war/special operation. Or that news organizations run by the Saudi or Iranian governments are unlikely to be taking stances those governments do not like.

The media does not create authoritarian states. It certainly will (at least in the West) tend to downplay the authoritarian nature of states that are our allies and upsell the opposite. But that doesn't mean differences don't actually exist. The fact that a state run media arm should be regarded with suspicion (on reporting to do with anything to do with the government at least) is because historically that has been a problem. Our media didn't create that idea even if they over/undersell it depending on circumstance.

Having said that, the Beeb is an interesting construct. Its funding mostly comes from the public by way of a government law for the License fee. However, its existence is part of a Royal Charter which mandates its independence from the government itself. So is it accurate to say it is government funded? Kind of yes, kind of no. It doesn't get its money from the government, but whether people have to pay it and how much IS determined by the government. In theory it's supposed to be an independent reporter on the government, not biased towards either the government or the opposition.

In practice (and in my direct experience in interacting with the Beeb) it is kind of pro-establishment generally (which makes sense), with a slight social leftward lean and a slight conservative economic lean. Though it is I think slightly more positive towards whichever party happens to be in power at any given moment overall (which again makes sense from an incentive point of view). This is from dealing with the Beeb when working for both the Conservative and Labour parties.

If the point of the tag is to point out the level of possible bias then I don't think the BBC should get the same tag as a directly operated state organization. Though it's probably fine to get some sort of tag. I'll note Musk himself says he thinks the BBC is one of the least biased outlets for whatever that is worth. The BBC is big enough and important enough in the English speaking world that you could probably give it its own unique tag.

If the point of the tag is to "own the media" then sure keep it, it's just partisan sniping with little meaning in any case.

a slight social leftward lean and a slight conservative economic lean

This is... not how I would describe them. When did the Beeb last express a preference for lower taxes?

I'd say the BBC is pro-establishment as opposed to pro-government; much like the New York Times, it sees itself as the voice of civil society. Witness for example the recent furore over Gary Lineker's tweets: despite comparing the current government to Nazis, BBC management weren't able to keep Lineker off-air, or even get a promise to stop tweeting about it.

So calling them state-funded media does seem a little unfair; they're obviously not beholden to the executive. Deep-state-funded media, perhaps? If only there was a Twitter flag for that.

Having said that, the Beeb is an interesting construct. Its funding mostly comes from the public by way of a government law for the License fee.

That's a synonym for "tax". The BBC isn't just government-funded, but has a special tax created just to keep it in existence.

And the independence of the BBC has been a great comedic punchline for decades now:

https://youtube.com/watch?v=B9tzoGFszog

Sure, but it's set up so that the money doesn't go to the government for disbursement, so as to try to avoid the control issue.

Even when I worked for the governing party I could not just go to the BBC to squash stories.

It's fair to say there is some influence, but in my direct experience it isn't directly government controlled.

Would have made my life easier if it were.

Even when I worked for the governing party I could not just go to the BBC to squash stories.

Unless you were the Minister of Culture I don't know why you'd think you could. Of course the tea boy in 10 Downing Street doesn't get to boss people around as much as a cabinet minister would. YOUR failure to influence the Beeb doesn't mean everyone in the British government is similarly toothless.

No, but I would have known who could influence them, or my boss would.

There were ways to influence stories but the same way of influencing other outlets, build personal connections, offer a juicier story or exclusive etc.

But within government there is no standard path to control the Beeb's output.

Which isn't to say it doesn't have bias as mentioned above.

But within government there is no standard path to control the Beeb's output.

How certain are you that the status quo does not involve something analogous to The Twitter Files, with multiple govt agencies providing advisories on subjects like pandemics, terrorism, mis/dis/malinformation, etc.?

I think you have to agree these relationships must exist, and that they do shape the Beeb's output. But this happens with private media, too! I can't prove, but strongly suspect, that these relationships are stronger with the Beeb than with the Telegraph or Guardian.

I suspect the cynical explanation is that the British government doesn't have the same motive for putting its thumb on the scale like the American government would. They don't need to sell their people on any narrative in particular, because they don't really get themselves into wars (and pretty much most of the ones the UK has been in after WWII have been divisive at best) and they aren't tied up in global affairs like the US is.

So, I would say, yes, the BBC will probably be tilted in favor of the establishment, but there's no real pressure to be against said establishment.

There should be a term for the opposite of 'Gell-Mann Amnesia': "You don't realize how blind you are to brainwashing in a hivemind until you're outside the hivemind."

The BBC seems 'independent' because it is occasionally critical of the British govt and will often give a platform to those who wish death to the western civilizational consensus. But, there is a '50 Stalins' aspect to that criticism and there are certain 'sacred cows' which can never so much as be mentioned, let alone criticized or analyzed. Al Jazeera appears similarly liberal, critical and rational as long as they are talking about things that do not relate to Qatar.

The BBC appears independent, because we who live within the hivemind of the west do not notice the absence of a type of criticism that we do not know exists.

But, there is a '50 Stalins' aspect to that criticism and there are certain 'sacred cows' which can never so much as be mentioned, let alone criticized or analyzed.

Care to give an actual example?

Rotherham is the only one I can think of off the top of my head that covered for the government, but Jimmy Savile and Martin Bashir's Lady Di interview were verboten topics for a long time at the BBC.

Having said that, the Beeb is an interesting construct. Its funding mostly comes from the public by way of a government law for the License fee.

Just because you call it ship money, sorry, a licence fee, doesn't mean it isn't a tax. The government imposes it.

However its existence is part of a Royal Charter which mandates its independence from the government itself.

More importantly the BBC is perfectly willing to attack the government. But by "government" here, I mean the democratically elected institutions of the state. The BBC does however loyally represent (and is part of) the permanent state institutional structure.

So is it accurate to say it is government funded? Kind of yes, kind of no

Yes, and every kind of yes.

Is it the media's fault that terms like "state media" are badly received? Or is it the fact that a lot of state run media historically and observably tends to be biased towards the state and people can recognize that?

Yes? I mean, these are the same thing. State run media is media, their faults are media's faults. I think @bnfrmt hits closer to the mark with deep state run media, but the primary purpose of the label is to remind people to take what you learn from them with a grain of salt, because they are beholden to interests other than the truth and are propped up by the state - so I don't think it's inappropriate. Least biased is not unbiased, and unbiased is the only version we should tolerate. Anyone who claims otherwise is either brainwashed or bought.

It's not about 'owning the media' either, it is about reminding people that they are being lied to every single day by a bunch of clout chasing moral busybodies who would gladly sacrifice every prole on the planet for better standing in their incestuous community of mediocrities. That's all the BBC is these days, because that's what all journalism is these days.

But there was a time when it was just most journalism that was like that, and the only way we can get back there is by not letting the big hitters get away with anything. That includes stupid 'oh we jumbled things up, so this is no tax, even though we send enforcers around to harass you if we think you have watched television in the last week, and even though we gladly push a political agenda which protects certain interests of the state' excuses. Nope sorry, you are happy to accept the benefits of state backing, so you don't get to avoid the consequences.

To sum up, do not trust any media organisation bigger than your local newspaper. Ever.

Yes? I mean, these are the same thing. State run media is media

That is a different point than I think he was making though. His point was that "our" media labels state run media of other powers as bad. Not all media is equally "bad" even if that is only because some places use a lighter touch. And that is the case whether "our" media is using it as propaganda or not.

Don't trust any media, local newspaper or not is my view. But that doesn't mean I should trust an openly state run Chinese media over the BBC.

Reversed stupidity is not intelligence in other words.

To point out: when I worked in politics it was much easier to have a story pulled or altered at smaller local outlets, in exchange for exclusives or better stories. So local being more trustworthy is not something I would rely on.

Oh, I had the impression his point was that it's the media who came up with the label, so they don't get to throw a fit when it is appropriately applied to them. If they don't like being called state run media they can stop doing the things that make them state run media. Or alternatively, they are in the exact right position to rehabilitate the image of state run media. They refuse to do either, so the shoe fits.

Otherwise yeah, nobody gets blind trust. But I know there are some local paper editors out there who are dedicated to the truth, some out of penance for sins in the big leagues, some vindictively, some just to try to hold onto the light any way they can. Similar to how you say we can trust the BBC over CCP funded media, I think you can trust local papers more than the big hitters.

Why would you trust your local newspaper?

I’m being a little facetious—you potentially have access to its journalists in a way you wouldn’t for national concerns. But what does that really buy you? How much time and effort do you spend on keeping the locals in check? You’re not going to rally your town to ostracize the editor, not without a truly spectacular bias.

There are fewer people pulling on the local news, which is not the same as less total pull. It’s a lot cheaper to buy glowing reviews or softball coverage from a local outlet than from the New York Times. The result is that local journalists and editors may not be biased by Big Pharma or a wannabe President, but by a local employer, a motivated city councilman, or that bitch Annette, I can’t believe she took the kids, you can’t trust her.

Scrutiny by opposing interests is higher for large outlets, too. I remember thinking it was odd for commenters to grill Miami’s local news the other week. How many people do they really reach? Anything smaller will be even less visible.

I think it’s awfully hard to start from an adversarial basis, from game theory or realpolitik, and come up with reasons to trust. The difference in local and national news is that you might give locals the benefit of the doubt.

Agreed entirely. The only reason I excluded local newspapers from my sweeping declaration is because in most places I have lived you can actually hold them accountable for dishonesty, although it's often a bit like pulling teeth. But they are usually small enough to respect reader complaints, and a lot of them are where the autistically truth-seeking oldheads ended up after society decided we were too smart to bother with the truth, because local beats are like punishment to clout chasers.

I think the two tools in a low trust environment we can use to build trust are objectivity and accountability. We can't perfect either and to err is human, but anyone who puts a sincere effort into trying to be objective and holding themselves accountable for their mistakes deserves tentative trust I reckon.

“State media” is often used by the media as a shorthand for “media that’s controlled by a hostile state”, much like “regime” is used to indicate that a given nation is not aligned with western interests. It is quite often a boo-term, as it counts as state control the things that hostile governments do, but not the things we do. NPR is funded directly by the government through a grant system. If a program wants the grants and access to the NPR radio stations, it must produce shows that our government likes. If a program reports the news from too “radical” (read: anti-establishment) a perspective, it won’t get funding or airtime. It’s basically patronage — I pay you on the basis of liking what you’ve done so far and I expect you to keep making things I like; and keep in mind that your patronage is up for renewal every year.

In essence, the very definition of “state media” is as a propaganda term meant to engender suspicion of that media’s reportage. And as such, it’s useless in most contexts: “state run media should be viewed with suspicion” often reduces to “media that narrative makers declare are to be viewed with suspicion are to be viewed with suspicion.” Not to say that in some places reporting the wrong news story can mean free striped pajamas for the entire staff, but that the term itself isn’t used neutrally, and that it’s often inserted in reporting on other countries as a way to cast doubt on the data from hostile government funded media. Other supposedly free media outlets are not necessarily more evenhanded or reliable. If the Congo Times gets no funding from the government, but is run by a political party, is that better than the “state run” Congo Tribune that is run by the government?

Not to say that in some places reporting the wrong news story can mean free striped pajamas for the entire staff, but that the term itself isn’t used neutrally,

Right, but that doesn't mean they are actually wrong. Don't throw the baby out with the bathwater. I'd trust the BBC more in general than a Chinese state run media service in general because I know roughly how much power the British government has over the BBC and I know roughly what level of influence can be exerted when and under what circumstances.

Don't trust any media is not the same thing as all media being equally untrustworthy.

NPR is funded directly by the government through a grant system. If a program wants the grants and access to the NPR radio stations, it must produce shows that our government likes.

The proper conclusion from this is to be suspicious of NPR too. You're writing as if criticizing NPR for bias is outside the Overton window. Maybe it is for the left, but it certainly isn't among conservatives. (Maybe replacing "that our government likes" with "that the deep state likes".)

The media does not create authoritarian states. It certainly will (at least in the West) tend to downplay the authoritarian nature of states that are our allies and upsell the opposite. But that doesn't mean differences don't actually exist. The fact that a state run media arm should be regarded with suspicion (on reporting to do with anything to do with the government at least) is because historically that has been a problem. Our media didn't create that idea even if they over/undersell it depending on circumstance.

That's part of why it works; it's not untrue. No one is going to die on the hill that Saddam doesn't have a "regime". The words aren't necessarily wrong when they're used; the question is what impression is given and why they're not used sometimes.

In theory it's supposed to be an independent reporter on the government, not biased towards either the government or the opposition.

Even people defending the BBC often undermine the argument for its independence. Hell, the argument linked above directly characterizes funding decisions (which the government has always been able to take) as partisan attacks.

If the point of the tag is to point out the level of possible bias then I don't think the BBC should get the same tag as a directly operated state organization.

IIRC the original tag that NPR complained about was something like "state affiliated media". If it was "state media" then I kind of get the charge.

"Government funded" though...I'm even less sympathetic. You can't complain that the government removing a funding mandate will crush either your programming or your reporting and also want to duck the tag.

If we agree with your take, and we hate these terms for observable, rational reasons anyone can come to independently, rather than because the media reinforces them every time a story from a hostile site comes up, then people can and will reach the same independent conclusion that Elon Musk did: that in a world of governments putting their fingers in the media pie, the BBC surely counts as government funded, but it represents a far more refined and civilized compromise than RT.

If not, as you say, there's literally no better placed, better branded organization to enforce a sense of nuance on any such tag just by existing.

I think the BBC is being a bit silly. However, why does Twitter only single out the governments? Why not say "Murdoch-funded" or "Bezos-funded" or "Roberts-funded"?

Well Twitter is itself Elon-funded. Somebody has to fund everything.

Because Bezos isn’t trying to manufacture consent for any wars he is waging.

Unless it's the war against the unionization of Amazon?

Is Bezos killing the families of people trying to unionize? Blowing up their homes? Maybe razing the towns they live in?

No.

This is the First World, one needs only to socially murder one's opponents as opposed to taking the drastic option of offering helicopter rides.

This is to say that I don't think there's really that much difference other than the continued existence of the biological entities in question. I imagine even the Soviet Union had more sophisticated ways of applying bootheels to human faces than Stalin's mass murdering.

No one asked but one point on this: The CBC in Canada is state-owned and state-funded, and I couldn't really object to such a label being put on it. It's technically correct, and Twitter can't really differentiate on vibes. But if anyone holds the belief that it's somehow on par with Russia Today, that's ridiculous: it has the same left-wing bias as every non-right-wing network does, and for the same reasons: it's run by people with that bias. It continues to have that bias no matter who's in political power.

And as queasy as state-funded media might make me on principle, it's got plenty of competition from private entities, and the bias of wanting to spread sensationalism for views is also a problem (including with the CBC, who sells ads and likes views just like everyone), so having some variety in the ecosystem seems good.

But if anyone holds the belief that it's somehow on par with Russia Today, that's ridiculous

Is it?

I've seen a lot of really interesting and insightful programming on Russia Today, more so than I have on the CBC (and I don't live in Canada, America or Russia). I don't really see why RT is getting called out for being uniquely bad when they've consistently been more correct on factual issues than the regular press. Hallucinations about Iraqi WMDs and Trump/Russia collusion are just two of the most prominent examples that come to mind (I'm sure people here don't need a long recitation of media perfidy), and I don't see any reason for the BBC to be privileged over RT on that rubric.

If you're saying you find their worldview more appealing, go for it. I'm talking about degree of state control and overall mission. (If you think RT is better in that regard, then I am brainwashed by the Cathedral and you can ignore whatever I say.) Russia Today is a straightforward tool of the state: when Putin invades Ukraine he knows RT will say what he needs them to, and journalists who defy this do so at risk of falling out of windows. CBC does not operate anything like this; they're just part of the same Blue Tribe that all thinks alike. Same with CNN in the Iraq War days: they supported the war for the same reason most Americans did. They were mad about 9/11 and in a patriotic mood, and the journalists both felt this themselves and knew it was what their audience wanted.

CBC's mission is probably horribly corrupted by an activist worldview, but they still adhere to whatever mix of journalistic integrity/modern activism their average reporter can defend to themselves. Their gov't stipend has little to do with that and mischaracterizes the entire problem. RT is much simpler and easy to characterize.

I'm not saying I find their worldview more appealing - I know that they have a perspective/angle/bias, I just don't think this means I should pretend that CBC or western media in general does not. Sure, journalists who defy Putin can end up falling out of windows, but I fail to see how this is so much worse than journalists having their cars go out of control and spontaneously explode when they start investigating government wrongdoing or put out stories which badly embarrass military generals.

As for all the mea culpas about their handling of the Iraq war - sure, I believe that. But at the same time absolutely nothing has changed! There has been no reflection, no self-examination, no correction. There have been no retractions or corrections or award rescindments for the unadulterated fiction that was presented as coverage of the Trump/Russia scandal, and the same people are still using the same techniques to manufacture consent on other issues today. Ultimately, I just don't think "whatever mix of journalistic integrity/modern activism their average reporter can defend to themselves. " is worth anything at all, and it in no way justifies the removal of a factually accurate label like "government-funded" or "state media".

I caught this exchange after the previous thread had mostly closed, and I'd like to push back on the claim a little.

BinaryHobo:

I remember talk about just using the excess power to pump water up hill during the day and running it through turbines coming down at night.

Did anything ever come of that?

The_Nybbler:

The physical conditions necessary to make hydro storage practical aren't common.

(How do we do the fancy quotes with user, timestamp, and maybe a link? It'd be useful here.)

It's true that hydroelectric power sources, as in dams, have saturated the supply of naturally-occurring American sites. You need a river in a rocky valley, and there are only so many of those to go around, and once they're used up, it's very hard to create more of them.

What haven't been exhausted, and in fact what can be readily found or exploited, are height differentials in general. Hills, mountains, exhausted mines, and deep valleys with no water supply all offer significant height differentials, are naturally occurring, and can be readily built out into large-scale closed-loop pumped-hydro storage, with a closed reservoir at one extreme and a closed reservoir at the other, and a reversible pump-turbine to store potential energy in times of excess and generate power in times of deficit. Should those be exhausted, off-shore dropoffs are an enormous resource of the same, at the cost of more difficult installation and operation in every regard. And if we exhaust THOSE, water towers at sea or underground reservoirs on land can be constructed as well.
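
To give a rough sense of scale, here is a back-of-envelope sketch in Python; the reservoir volume and head below are illustrative assumptions, not figures taken from any real project:

    # Rough energy capacity of a hypothetical closed-loop pumped-hydro pair.
    # The reservoir volume and head are illustrative assumptions.
    RHO = 1000.0   # density of water, kg/m^3
    G = 9.81       # gravitational acceleration, m/s^2

    volume_m3 = 1_000_000   # assumed upper reservoir volume: 1 million m^3
    head_m = 500            # assumed height difference between reservoirs, m

    energy_j = RHO * volume_m3 * G * head_m
    energy_gwh = energy_j / 3.6e12   # 1 GWh = 3.6e12 J
    print(f"Stored energy: {energy_gwh:.2f} GWh")   # ~1.36 GWh before losses

That is in the same ballpark as existing pumped-storage plants, which is why the approach works wherever the geography cooperates.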

All of this, of course, is dumb and America should just take the leash off nuclear, as argued here. (I've not read it yet, but I expect it to make the points I would inline here.) That we haven't yet is a shame and a testament to our collective idiocy and Puritan hangover.

I said "practical", not "possible". Turning a given hill with neither water nor reservoirs into a useful amount of pumped storage isn't practical, even though it's "just" a matter of moving dirt, concrete, and water.

Clearly, this needs nuclear power to be feasible!

To back up your pro-nuclear blurb, I would be way more worried (though still not particularly worried) about living underneath a pumped storage facility than next to a nuclear power plant. There have been multiple dam failures with death tolls in the thousands, much worse than any nuclear power disasters.

In terms of tail-risk, hydro is probably the most dangerous form of power around. There are dams around the world that could kill hundreds of thousands of people if they failed.

Another rarely discussed downside of hydropower is that it is extremely environmentally and socially destructive. Damming a river basically destroys its ecosystem. Dams also often flood very large areas, requiring people to evacuate and destroying anything that was there, natural or manmade.

For example, the Itaipu Dam:

When construction of the dam began, approximately 10,000 families living beside the Paraná River were displaced because of construction. (...) The world's largest waterfall by volume, the Guaíra Falls, was inundated by the newly formed Itaipu reservoir. The Brazilian government later liquidated the Guaíra Falls National Park. (...) The Guaíra Falls was an effective barrier that separated freshwater species in the upper Paraná basin (with its many endemics) from species found below it, and the two are recognized as different ecoregions.[18] After the falls disappeared, many species formerly restricted to one of these areas have been able to invade the other, causing problems typically associated with introduced species. For example, more than 30 fish species that formerly were restricted to the region below the falls have been able to invade the region above.

The construction of the Aswan Dam in Egypt flooded 5,250 km^2 and resulted in the relocation of 100,000 to 120,000 people and 22 Ancient Egyptian monuments.

For comparison, the Chernobyl Exclusion Zone has an area of 2,600 km^2. That is to say, the Aswan Dam rendered uninhabitable twice as much land as the Chernobyl disaster.

That's just one of many reservoirs all over the world. Looking at this list, if we exclude the reservoirs that resulted from the enlargement of pre-existing lakes and consider only the ones that are completely artificial, there are 15 reservoirs which individually rendered uninhabitable more land than the Chernobyl disaster. The total amount of land flooded by dams is many times greater than the Chernobyl Exclusion Zone. I don't have the exact figures, but the number of people displaced by dams is certainly also much larger than the number of people who were evacuated from the Chernobyl area.

Don't forget that this is a normal and accepted part of building hydropower, whereas the Chernobyl disaster was a one-time event that resulted from a combination of poor Soviet design and human error. If we considered the failures of dams, we'd get a death toll much larger than any estimate for Chernobyl.

It's basically impossible to make a closed-loop hydro system with practical capacity. You need constant water replenishment. You'll be losing 10-30 cm of water per month to evaporation and seepage, depending on weather and soil conditions. Without a plentiful source of water, this is not viable.
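
For a rough sense of what that loss rate implies, here is a small sketch in the same spirit; the reservoir surface area is an illustrative assumption, and 20 cm/month sits in the middle of the range quoted above:

    # Rough monthly water loss from an exposed upper reservoir.
    surface_area_m2 = 1_000_000   # assumed reservoir surface of 1 km^2
    loss_per_month_m = 0.20       # 20 cm/month, mid-range of the 10-30 cm quoted

    water_lost_m3 = surface_area_m2 * loss_per_month_m
    print(f"Water lost per month: {water_lost_m3:,.0f} m^3")   # 200,000 m^3

Against the 1,000,000 m^3 reservoir assumed in the sketch further up, that is roughly 20% of capacity every month, all of which has to be replaced from somewhere.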

And if we exhaust THOSE, water towers at sea

This one is extremely impractical, which you’d see if you even did a back of a napkin estimate. The fact that you mention this implies that you did zero legwork to verify if your ideas have even a modicum of practicality.

This one is extremely impractical, which you’d see if you even did a back of a napkin estimate.

Source. They've tested successfully, physically, to 1/10 scale. I haven't gone and found the paper, I'll admit; I'll give it a shot ASAP so we can argue productively.

In the meantime, if the napkin math is so easy, share it with the class?

I'm not seeing estimates on the price to build and maintain that per kWh. Without that, yes, you've failed to do the basic napkin math on practicality.

This is not a “water tower at sea”. This is something different, actually quite smarter. I read their paper, and it doesn’t seem as immediately impractical as “water tower at sea” would, though it is still very much impractical.

According to their own analysis, the construction cost is something like 2-3x the cost of Li-ion batteries per kWh. It's something like $8M for storage equivalent to 2 minutes of operation of a single coal power plant. To build enough storage to replace one coal power plant for base load for half a day, you would need to build 400 of these, at a cost of $3.2B. Coincidentally, this is about as much as it costs to build a nuclear reactor of similar size, which will keep generating energy after the deep-sea storage solution runs out of juice in 12 hours.
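
A quick sanity check of that arithmetic, taking the quoted figures at face value:

    # Sanity check of the cost comparison above, using the figures as quoted.
    unit_cost_usd = 8e6        # ~$8M per storage unit (figure quoted above)
    minutes_per_unit = 2       # each unit covers ~2 minutes of one coal plant
    target_minutes = 12 * 60   # half a day of base load

    units_needed = target_minutes / minutes_per_unit   # 360 units
    total_cost = units_needed * unit_cost_usd
    print(f"{units_needed:.0f} units, ${total_cost / 1e9:.1f}B total")   # 360 units, ~$2.9B

Rounding up to 400 units gives the ~$3.2B figure above, so the comparison with the cost of a nuclear reactor holds up.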

Even absent the replenishment concerns, the amount of height and/or volume required for gravitational storage just isn't practical. A kilogram of hydrocarbon fuel contains ~40 MJ of energy. To store the same amount of gravitational potential energy in a kilo of water (but really anything) requires lifting it 4,000 km.
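
That 4,000 km figure follows directly from the potential-energy formula; a minimal check:

    # How high must 1 kg be lifted to store ~40 MJ of gravitational potential energy?
    # E = m * g * h  =>  h = E / (m * g)
    E = 40e6    # J, rough energy content of 1 kg of hydrocarbon fuel
    m = 1.0     # kg
    g = 9.81    # m/s^2

    h = E / (m * g)
    print(f"Required lift: {h / 1000:.0f} km")   # ~4,077 km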

I'm not familiar with the state of the art in biochemistry, but the energy density of hydrocarbon fuels would plausibly make them excellent storage if we could produce them (from non-fossil sources) with even moderate efficiency. Not to mention the existing infrastructure. That said, that is a nontrivial synthesis problem.

We can already do something similar to this by producing hydrogen gas from water. Hydrogen has an energy density of about 120 MJ/kg (roughly 33 kWh/kg), which actually beats hydrocarbons per unit of mass (though it is far worse per unit of volume), and production is relatively trivial. The problem comes in converting it back to usable energy, which requires complicated fuel cells that are relatively expensive, which is, I believe, the biggest reason why the simpler but inferior EVs got the edge over hydrogen as the "green" vehicle solution.

Hydrogen isn't currently a panacea: it's difficult to store long term, since it isn't very dense at room temperature and only liquefies at an inconveniently low temperature. It also likes to leak really easily.

I don't know that it can be made practical for vehicular applications, but if you're thinking about fixed energy storage infrastructure it's probably worth considering.

Converting it to methane, if you could do so scalably and efficiently, would make the longer-term storage problem (months) much easier.

Gravity storage with water as a medium is actually quite practical, and there are plenty of operational sites already, some with GWhs worth of capacity. You don't have to lift 1 kg of water 4,000 km; you can instead lift roughly 4,000 kg of water by 1,000 meters.

This is practical and done in production, the problem is that you need a lot of water, and a lot of space to store this water in two separate reservoirs, which also need substantial difference in altitude. Because of this, it simply doesn’t scale: good sites are already mostly used, and we can’t build many more.

Synthetic hydrocarbons would make excellent store of energy, being very dense and already integrated in existing economy. The problem with those, though, is that the round-trip efficiency is really bad.

A US Energy.Gov analysis for this class of problems ("Long Duration Energy Storage") is here, with a tl;dr summary here, the even more tl;dr being that on top of the various technical problems with large-scale expansion, there's the more immediate issue that even moderate-scale projects have been found hard and largely not tried, or 'tried' in the sense that they might get out of the planning stage in the late 2040s.

Thanks much, I'll review.

Large engineering projects involving water seem uniformly to be extremely difficult and politically fraught.

I wonder, is there anyone on The Motte who opposes nuclear power? Either because of concerns relating to safety, waste disposal and other "environmentalist" canards, or because it's supposedly uneconomical.

And if everyone here is pro-nuclear, why is that? Are mottizens just more rational than everyone else, or is it because of chronic contrarianism?

(How do we do the fancy quotes with user, timestamp, and maybe a link? It'd be useful here.)

Like embedding a Tweet? I don't think you can do that. But there's a "Copy link" button under every comment and you can put an @ in front of a username so that it links to their profile and they get notified.

I'm very faintly anti-nuclear because we need to import uranium from pretty far away and it makes us dependent on exporters, but I freely grant that this may be the least of many evils and actually looking at the numbers may convert me.

Based on your user name, you are German? Eyeballing a map, Western Europe might be the most uranium-scarce populated region. I'm not sure Western Europe has an energy option that doesn't require importing materials, or finished products requiring rare materials. The main advantage of uranium would be its relatively high energy density, making up for some of the more logistically challenging freight issues. In North America, Canada has substantial reserves. The most complete reference for uranium resources is the "Red Book", but you need to be a bit careful in interpreting the entries.

Neat. I did not know about German uranium mining until now.

The issue isn’t the logistics of shipping it. The issue is that it makes you vulnerable to blockades, embargos, trade disputes, and the “international community”.

Germany isn’t vulnerable to any of that, though.

Not right this second, but if the geopolitical situation shifts things could get quite uncomfortable. They aren't in a strategic location like the UK or the USA, and they don't have strong economic ties to resource-rich former colonies like France.

Germany has lost every war it ever fought as a direct result of being shut out of resource access through those methods, and the French lost their fair share of wars to this too (Napoleon).

I think it's fair to call that a strategic weakness.

They don't have natural gas, they don't have oil, they don't have good coal so they have trouble making steel, they don't have alloying elements so the metal (including steel) they are able to make is not the best quality. They have no large mountains for hydroelectric generation so their only indigenous source of electricity is coal. Food production is not their strong suit either (hence the need for Haber-Bosch, and really, why its inventors were German in the first place). Sure, they might have a highly educated and motivated society, but without any material/materiel to work with they're at the mercy of those who do.

This is also why the EU is a massive deal for Germany, because a politically-united Europe (under German economic control) means a Germany less constrained by American-English and/or French resistance to pursue its own policy goals. The "Fourth Reich" snark is not entirely untrue.

It's probably worth noting that that 38 t U was probably not from active mining. The link in the wiki is dead, but the current Red Book has a note that the recent entries were from mine water treatment, and that "In 2018, conversion work of the water treatment facility at the Königstein mine halted uranium production." None of the reserves in Western Europe are viable at current prices. In practice any uranium used in Western Europe would probably be imported.

I think all the processing facilities in Germany have also been shut down, so processed ore would have to be imported from France, which itself sources uranium from Canada, Gabon, Kazakhstan, Mongolia, Namibia and Niger. I think the French nuclear industry has an explicit goal of diversifying the locations it sources from.

None of the reserves in Western Europe are viable at current prices. In practice any uranium used in Western Europe would probably be imported.

Of course you'd want to buy the cheapest product you can get, but that is orthogonal to the concern /u/Southkraut raised, which is whether this would make you dependent on foreign sources of uranium. If you could mine your own uranium, even at twice the cost, you are not really dependent on imports, and the final energy price would not even go up all that much.
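A minimal sketch of that last point, assuming (my numbers, purely for illustration) that fuel is about 5% of an all-in cost of $50/MWh:

```python
# If fuel is only a small share of the cost of nuclear electricity,
# doubling the fuel price barely moves the total. The 5% share and
# the 50 $/MWh baseline are illustrative assumptions.
lcoe = 50.0            # $/MWh, assumed all-in cost of nuclear electricity
fuel_share = 0.05      # assumed share of that cost attributable to uranium

fuel_cost = lcoe * fuel_share
doubled_total = lcoe - fuel_cost + 2 * fuel_cost

print(f"Baseline: {lcoe:.2f} $/MWh; after doubling uranium cost: {doubled_total:.2f} $/MWh")
# -> 50.00 vs 52.50 $/MWh: domestic uranium at twice the price adds only ~5%.
```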

is there anyone on The Motte who opposes nuclear power?

I don't, but I can make a compelling steelman case against it I think.

Nuclear energy as it exists requires huge power plants and a deeply integrated industry to scale, which means very high levels of complexity at every level. You need many well-educated and actually competent engineers to set up, maintain, and tear down every part of a large system.

This is uniquely combined with another drawback, which is that the risks are black swans. Bad handling of nuclear can, as demonstrated in Ukraine, have vast and long lasting consequences.

Only the State, or an industrial complex deeply tied to it, can realistically enact a nuclear power program, and any such program will necessarily be in the hands of large institutions and not individuals. This is already problematic to any sort of libertarian: large, abundant energy coming from a centralized source inevitably leads to higher levels of control. Not to mention that institutions of this size are seldom considered to be wise custodians.

Moreover, consider for a moment that societies collapse. That societies can never maintain complexity forever. Is it a good idea to hinge our entire civilization on a system so complex that a lot of nations couldn't be trusted with its operation today? Is it reasonable to even believe that if collapse happens, the complex system will be shut down in proper order?

To want for nuclear is to swear eternal vigilance against the Dyatlovs of the world. And that might be a cost too high for some.

Moreover, consider for a moment that societies collapse.

You were already reminding me of this sketch, and then this sentence made it perfect.

Are mottizens just more rational than everyone else, or is it because of chronic contrarianism?

Being pro-nuclear is hardly a contrarian position. At least according to this poll, 76% of Americans are pro-nuclear. Of course it's not like everyone here is American, but even then, in Finland, for example, nuclear energy is currently very much the mainstream view, in essence accepted by all parties (even if some of them do so a bit uncomfortably).

If 76% of the population meaningfully favor nuclear power, why is it such a problem to build nuclear plants?

An excellent summary is given here.

Tldr, largely copypasta:

  • The American Nuclear Regulatory Commission (NRC) uses a model of damage to humans by radiation called Linear No Threshold (LNT), in which no amount of exposure to radiation is safe (a toy sketch of this model follows after this list). This contradicts casual observation (we live with and robustly tolerate background radiation), observed cellular mechanisms (detection and repair of small DNA errors is routine), and a small number of human longitudinal studies and animal studies.

  • American nuclear reactor operators are as a consequence required to minimize the risk of even innocuous, low-level radiation releases, which makes cost reductions as a result of the usual learning curve and technological advancement impossible.

  • Culturally, there is little education on the risks of small and medium-scale nuclear incidents, and so public opinion is by default against radiation leaks out of proportion to the actual risk. The book being summarized contrasts this with airline accidents, which kill hundreds and are handled as a risk to be minimized, not eliminated.

  • The NRC is incentivized to run the approvals process as long as possible, since its fees come from license applicants rather than being tied to the number of nuclear power plants under its oversight or the number of GWh generated by nuclear power per year. This naturally drives up the costs of site licensing and design approvals.

  • There are many avenues for anti-nuclear activists to cause delays in the construction of a nuclear power plant, causing massive uncertainty in construction schedules and costs.

  • A model reactor must be licensed before construction begins; model reactors are often invaluable for experimentally finding the failure modes that need to be accommodated, yet all possible failure modes must be addressed before even a model reactor is approved for construction.

  • The Chernobyl, Three Mile Island, and Fukushima incidents have accumulated massive cultural scar tissue against further nuclear power plant construction.

Tldr of tldr: ignorant public, regulatory incentives, uncertainty in capex and opex spend.
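For the LNT point in the first bullet, a toy sketch of the two competing dose-response assumptions; the slope and threshold values are arbitrary and only meant to show the shape of the disagreement:

```python
# Toy comparison of the Linear No Threshold model with a threshold model.
# Slope and threshold values are arbitrary illustrations, not real risk data.
def lnt_risk(dose_mSv, slope=1e-5):
    """LNT: any dose, however small, carries proportional modeled risk."""
    return slope * dose_mSv

def threshold_risk(dose_mSv, threshold=100.0, slope=1e-5):
    """Threshold model: doses below the threshold carry no modeled risk."""
    return slope * max(0.0, dose_mSv - threshold)

for dose in (1, 10, 100, 1000):  # mSv
    print(dose, lnt_risk(dose), threshold_risk(dose))
# Under LNT even background-level doses register nonzero risk, which is what
# drives the "no release is innocuous" regulatory posture described above.
```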

76% of Americans support nuclear, and support is higher among men and among conservatives, both overrepresented on The Motte.

The few times I've talked to educated anti-nuclear folks, they've made it clear that they didn't understand the basics of nuclear waste or the dangers of radiation.

Are mottizens just more rational than everyone else, or is it because of chronic contrarianism?

As a pro-nuclear «chronic contrarian»: we can't be relied upon to distinguish the latter from the former. But I'd say it's the diminished vulnerability to threat models that appear poorly substantiated. We don't put much stock in «something may happen» stories.

For the same reason many here tend to pooh-pooh «the coof», Trump's «attempt at fascist insurrection», the danger of Russia or China, AGI risk, climate change, whatever, even school shootings and violence. On the other hand, we are highly suspicious of risk narratives that seem to justify reduction of freedom in all senses – from direct political ones to basic freedoms of exploring space and enjoying material abundance; degrowth ideology doesn't appeal to us at all. Inasmuch as there are conservatives and reactionaries here who profess to respect Chesterton's fences and the precautionary principle, it's not consistent but restricted to domains where change and action are heavily enemy-coded and in some ways still Puritan, statist and restrictive (e.g. CRT programming in schools).

Put another way, we aren't very contrarian. We're just non-neurotic males with a typical masculine attitude toward minor risks and risky-seeming things. The broader society and its consensus is… less like this.

Case in point:

It’s also enraged a bloc of stoutly anti-nuclear countries that includes Germany and Austria. Seven of them wrote a joint letter earlier this month warning that including nuclear-generated hydrogen could “jeopardize the achievement of … climate targets” and reduce ambitions on renewables.

“The attempt to declare nuclear energy as sustainable and renewable must be resolutely opposed,” Austrian Energy Minister Leonore Gewessler said after the deal.

Nuclear is quite bad if 1) you focus on the tail risk of disasters (Chernobyl, Three Mile Island, Fukushima…) or mistaken estimates of baseline harmfulness (such as consequences of waste leaks) and/or 2) evaluate nuclear by its cost per unit of output in the context of prohibitively expensive safety measures predicated upon its danger (assessments, plant designs and, again, secure waste storage over millennia). Put in the proper quantitative context, it's less dangerous per unit of power than most other energy sources. But there's no way to make coal or solar seem so spooky to a layman. I mean – wind, sun, it's all so nice, living in harmony with nature, what could go wrong! So what if we'll need to restrain our capitalist greed and consume a little less, give some rest to our mother Earth! Indeed, it'd be a positive if we got rid of capitalism even without any ecological benefit; some would say that's the whole point. The precariousness of nature also means one can feel morally superior on account of normie, unambitious urbanite life choices.

The optics accessible to midwits are just bad, built into every facet of culture from fiction tropes about evil power sources to signs on trash containers; whatever your nerdy arguments, generations of shallow artists competing for NGO grants (with the intent to suffocate, debase and diminish humanity under the guise of rational planning) have conscientiously labored to make it this way.

Not much to do about it now but remind them of the human cost of their actions, meticulously calculated.

For the same reason many here tend to pooh-pooh «the coof», Trump's «attempt at fascist insurrection», the danger of Russia or China, AGI risk

Do people on the Motte not take AGI risk seriously? I thought I was the only one here who thought it was overblown.

Did you not see the AI threads over the last week? There are plenty of us anti-doomers here.

I know that I take it seriously, but I don't take it seriously because I think I'm going to be turned into a heap of paperclips or atomized by a T-1000. I take it seriously because I see something else coming, a paradigm shift in propaganda and narrative control powered by LLM's, image/video generators and AI-assisted search engines (I'll confess that I may be a little too unironically Kaczynski-pilled). I don't see how the future I envision is any less apocalyptic than the one our loveable quokkas fear, however.

Do people on the Motte not take AGI risk seriously?

I don't; I'm more afraid of the economic enclosure potential that will likely result, to say nothing of the power these tools will bestow upon the State. The last 60 years have been bad for civil rights and that was just the result of normal economic centralization; this, by contrast, is advanced centralization.

Most people here seem to take it very seriously although metacontrarians exist.

For me, AI risk is completely different from nearly all other x-risks, including asteroids, nuclear war, climate change, etc. Because the risk from AI cannot be quantified. I ask myself, what would a superintelligence do? I have no fucking clue. And neither does anyone else. People saying, "I'm not worried about X, I'm worried about Y" are missing the point. While it's fun to speculate about X or Y, it is impossible to predict what a superintelligence will do. It's a true unknown unknown. AI risk is nearly unique in that way.

No, the whole point of what you dismiss as «metacontrarianism» is that it's entirely possible to predict what a superintelligence will do, when we know what it has been trained for and how exactly it has been trained. Terry Tao is a mathematical superintelligence compared to an average human. What will he do? Write stuff, mainly about mathematics. GPT-4 is a superintelligence in the realm of predicting the next token. What will it do? Predict the next token superhumanly well. AlphaZero is a tabletop game superintelligence. What will it do? Win at tabletop games. And so it goes.

Intelligence, even general intelligence, even general superintelligence, is not that unlike physical strength as the capacity to exert force: on its own, as a quantity, it's a directionless, harmless capability to process information. Instrumental convergence for intelligence, as commonly understood by LWers, is illiterate bullshit.

What I admit we should fear is superagency, however it is implemented; and indeed it can be powered by an ASI. But that's, well, a bit of an orthogonal concern and should be discussed explicitly.

I'm sure you know about mesaoptimizers. Care to explain why that doesn't apply to your thesis?

That said, I'm not particularly married to any one particular flavor of AI risk. I'm taking the Uncle Vito approach. The AI naysayers have been consistently wrong for the last 5 years, whereas the doomers keep being proven correct.

I know what people have written about mesa-optimizers. They've also written about the Waluigi effect. I am not sure I «know» what mesa-optimizers are with respect to ML. The onus is on those theorists to mechanistically define them and rigorously show that they exist. For now, all evidence that I've seen has been either Goodhart/overfitting effects well known in ML, or seeing-Jesus-in-a-toast tier things like Waluigi.

To be less glib, and granting the premise of mesa-optimizers existing, please see the Plakhov section here. In short: we do not need to know the internal computations and cogitations of a model to know that regularization will still mangle and shred any complex subroutine that does not dedicate itself to furthering the objective.

And it's not like the horny-humans-versus-evolution example, because «evolution» is actually just a label for some historical pattern that individual humans can frivolously refuse to humor with their life choices; in model training, the pressure to comply with the objective bears on any mesa-optimizer within its own alleged «lifetime», directly (and not via social shaming or other not-necessarily-compelling proxy mechanisms). Imagine if you received a positive or negative kick to the reward system conditional on your actions having increased or decreased your ultimate procreation success: this isn't anywhere near as easy to cheat as what we do with our sex drive or other motivations. Evolution allows for mesa-optimizers, but gradient descent is far more ruthless.

…Even that would be something of a category error. Models or sub-models don't really receive rewards or punishments; this is another misleading metaphor that is, in itself, predicated upon our clunky mesa-optimizing biological mechanisms. They're altered based on the error signal; the results of their behavior and their «evolution» happen on the same ontological plane, unlike our dopaminergic spaghetti, which one can hijack with drugs or self-deception. «Reinforcement learning should be viewed through the lens of selection, not the lens of incentivisation».
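A toy sketch of that selection-not-incentivisation point, reduced to a single useless parameter under plain gradient descent with weight decay (entirely synthetic numbers; an analogy, not a model of mesa-optimizers):

```python
# Toy illustration: under gradient descent with weight decay (L2 regularization),
# a parameter that contributes nothing to the objective is steadily shrunk toward
# zero, while a useful one survives. Entirely synthetic setup.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 3.0 * x                          # the objective only rewards w_useful

w_useful, w_freeloader = 0.1, 5.0    # the "freeloader" never touches the loss
lr, weight_decay = 0.05, 0.1

for _ in range(2000):
    grad_useful = np.mean(2 * (w_useful * x - y) * x)
    w_useful -= lr * (grad_useful + weight_decay * w_useful)
    w_freeloader -= lr * weight_decay * w_freeloader  # only decay acts on it

print(round(w_useful, 2), round(w_freeloader, 4))
# -> w_useful settles near 3 (slightly shrunk by the decay);
#    the freeloading parameter is ground down toward 0.
```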

Humans have a pervasive agency-detection bias. When so much depends on whether an agent really is there, it must be suppressed harshly.


The AI naysayers have been consistently wrong for the last 5 years, whereas the doomers keep being proven correct.

I beg to differ.

The doomers have been wrong for decades, and keep getting more wrong; the AI naysayers are merely wrong in another way. Yudkowsky's whole paradigm has failed, in large part because he has been an AI naysayer in every sense in which current AI has succeeded. Who is being proven correct? The people Yud, in his obstinate ignorance, had been mocking and still mocks: AI optimists and builders, the pioneers of DL.

You are simply viewing this through the warped lens of Lesswrongian propaganda, with the false dichotomy of AI skepticism and AI doom. The central position both those groups seek to push out of the mainstream is AI optimism, and the case for it is obvious: less labor, more abundance, and everything good we've come to expect from scientific progress since the Enlightenment, delivered as if from a firehose. We are literally deploying those naive Golden Age Sci-Fi retrofuturist dreams that tech-literate nerds loved to poke holes in, like a kitchen robot that is dim-witted yet can converse in a human tongue and seems to have personality. It's supposed to be cool.

Even these doomers are, of course, ex-optimists: they intended to build their own AGI by the 2010s, and now that they've made no progress while others have struck gold, they're going on podcasts, pivoting to policy advice, attempting to character-assassinate those more talented others, and calling them derogatory names like «stupid murder monkeys fighting to eat the poison banana».

Business as usual. We're discussing a similar thing with respect to nuclear power in this very thread. Some folks lose it when a technical solution makes their supposedly necessary illiberal political demands obsolete, and begin producing FUD.

Good point about mesaoptimizers and the difference between evolution and gradient descent.

The onus is on those theorists to mechanistically define them and rigorously show that they exist.

Here's where I disagree. As someone once said, "he who rules is he who sets the null hypothesis". I claim that the onus is on AI researchers to show that their technology is safe. I don't have much faith in glib pronouncements that AI is totally understood and safe.

Nuclear power, on the other hand, is well understood, has bounded downside, and is a mature technology. It's not going to destroy the human race. We can disprove the FUD against it. But in 1945, I might have felt differently.


The broader society and its consensus is… less like this.

Well, yeah; they don't currently perceive the barbarians are at the gates.

And unfortunately for those [men] for whom the existence of barbarians is a time-tested way to extract payment and investment from broader society in exchange for security guarantees (and has been since the dawn of humankind), they're correct; this is why the entire society must rationalize its newly enabled refusal to pay them.

Hence, degrowth as religion; men staying in their parents' household until they're dead would in a normally functioning society be hideously perverse, but it's certainly a clear reminder of the human cost of the actions of their social cohort (and probably the rational thing to do in a society like this).

Yes, investing in growth is objectively the right thing to do, and will make the society even stronger in the long run, but why do that when you can just hoard your gains until death takes them from you?

This is a shoehorning of NIMBYist sentiment. Do you suppose the harm of coal or even «renewables» is only personal, and doesn't enshittify the environment and the cultural signs around it? Your nice bucolic ancestral village, soiled with coal dust (radioactive dust, in fact) or surrounded by acres and acres (I've seen fields, Neo, vast fields) of soulless solar panels and noisy wind turbines, will lose much of its sentimental value anyway, like a traditional home covered in garish plastic siding.

I would support not building major infrastructure near residential spaces when at all affordable, regardless of safety. There are plenty of mostly uninhabited 10-mile radiuses out there, or at least radiuses very few people would object to vacating if provided some compensation; you don't need a nuclear power plant near your grandma's gingerbread house any more than you need Springfield at the foot of Mr. Burns' power plant. These things don't employ a ton of people.

(And, yes, new reactor designs are vastly safer than that figure of 3 or 4 level-6+ disasters across ~500 reactors over 70 years suggests.)

you don't need a nuclear power plant near your bucolic ancestral village any more than you need Springfield at the foot of Mr. Burns' power plant. These things don't employ a ton of people.

Nuclear power plants generally need access to water; water attracts people for other reasons, so it's pretty hard to site them all away from people.

Offshore then.

I used to see the cooling towers of the Limerick-Linfield NPP on the way home every day. Biggest issue was that every time I'd see it I'd hear "The Simpsons" theme in my head.

As for "major disasters" with nuclear power plants, there's actually only been one which wiped out anyone's hometown, that being Chernobyl. TMI didn't wipe out anything and Fukushima happened in the middle of a much larger disaster.

There were probably people who really, really liked living in Chernobyl.

Incredibly minor nitpick: the major population center was Pripyat, not Chernobyl (which had less than a third the population, at the time).

To engage you more seriously in opposition: the Chernobyl disaster was (more or less) the first of its kind and remains singular in terms of nuclear power plants disastrously failing. Three Mile Island is likewise only a weakly cautionary tale in the sense of uninhabitability, which cuts down on the total number of your negative examples.

Admittedly it's a volatile technology whose use holds the potential for truly devastating outcomes, but there's no reason to think we haven't more or less accounted for the common failure modes. Human error remains the most pernicious (and universal) of potential flaws in the use of nuclear energy, but I, personally, believe that the potential negative outcomes of nuclear power are so mollified by current safety advances that I would be comfortable living within ~5 miles of a nuclear power plant. I say this as someone who does not fall into your outlined demographics.

The issue seems to be perception of risk rather than actual risk. Of the famous nuclear disasters, Three Mile Island didn't kill anyone, nor did the radiation from Fukushima. That leaves Chernobyl as the only incident resulting in deaths and the permanent evacuation of the vicinity. That's a pretty good record compared to something like coal mining, which kills thousands of people every year, yet people in those towns often vigorously defend the industry and the jobs it brings their communities, something you don't see with nuclear power.

The few times I've witnessed anti-nuclear people talking, to me or someone else, without being your garden-variety "but Fukushima/Chernobyl" types, it's clear that they're operating on some level of that old meme.

"Nuclear power can't be the solution because it only produces clean energy -- it doesn't upend society and usher in my preferred utopia."

Sometimes they're watermelon activists and this utopia is communism; sometimes they're old hippie earth mother types who want us to live in harmony with Mother Gaia and ride bikes everywhere and be vegan. Regardless, this is the thread I see as common through most of them; nuclear solves the problem in a way that doesn't get them what they really want, so it must be objected to on whatever tenuous grounds possible.

Nuclear power is strictly superior. You can do it anywhere you can stick a big turbine (so anywhere near a lot of water). Most of civilization is also near large bodies of water like rivers, lakes or seas, so it's not like there's a shortage of places to build them. Renewables need lots of sun or wind or certain kinds of rivers with big height differentials...

The power is very reliable: capacity factor is around 80-90%, compared to renewables, which struggle to reach 40%. Nuclear plants are usually only offline for maintenance and refuelling, so their off-time is fairly predictable. Renewables often produce their power when it's not needed and go offline randomly, demanding extremely expensive batteries.
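To put the capacity-factor comparison in concrete terms, a small sketch; the percentages are illustrative round numbers, not measured fleet data:

```python
# Capacity factor: actual energy delivered over a year divided by what the
# nameplate rating could deliver running flat out. Illustrative CF values.
hours_per_year = 8760

def annual_twh(nameplate_gw, capacity_factor):
    return nameplate_gw * hours_per_year * capacity_factor / 1000  # GWh -> TWh

print(f"1 GW nuclear @ 90% CF: {annual_twh(1, 0.90):.1f} TWh/yr")
print(f"1 GW wind    @ 35% CF: {annual_twh(1, 0.35):.1f} TWh/yr")
print(f"1 GW solar   @ 20% CF: {annual_twh(1, 0.20):.1f} TWh/yr")
# -> ~7.9 vs ~3.1 vs ~1.8 TWh/yr from the same nameplate capacity.
```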

Ecological damage is minuscule, in the grand scheme of things. Everyone knows about Chernobyl, but few know about the 1.4 million people who were relocated to make way for the Three Gorges Dam. In Australia, the original purpose of the nascent Green movement was opposing a hydroelectric dam in Tasmania which would need to clear a lot of trees. They wanted us to burn more coal instead. Renewables use up hundreds of times more land than nuclear, and wind turbines mince up many birds. They all need immense amounts of extra transmission cables which will further mar the rural skyline. Nuclear plants are so small they can be contained within large ships and even submarines. They can be built near where they're needed, for cheaper transmission.

And the waste products from nuclear power plants are trivially easy to manage! There's so little waste in terms of mass that it can be stored on-site. Because the US decided to bungle its nuclear waste dump in Yucca Mountain (to the tune of tens of billions of dollars with absolutely nothing to show for the money), waste has just been left with the plants. A simple and easy solution is to put waste in a big lead-lined box and take it away, stick all the boxes in a warehouse in the desert and leave some guards to protect it, from a safe distance. An even better solution would be to actually use breeder reactors to turn that waste into electricity. U-238 can be converted into plutonium and provide power; it's possible to burn all the uranium, not just the U-235.

Finally, the price of nuclear energy, without the sabotage of power-plant construction, is very low. It's only the farcical hysteria that drives up costs. They invented ridiculous standards of safety. From: https://rootsofprogress.org/devanney-on-the-nuclear-flop

An example was a prohibition against multiplexing, resulting in thousands of sensor wires leading to a large space called a cable spreading room. Multiplexing would have cut the number of wires by orders of magnitude while at the same time providing better safety by multiple, redundant paths. A plant that required 670,000 yards of cable in 1973 required almost double that, 1,267,000, by 1978, whereas “the cabling requirement should have been dropping precipitously” given progress at the time in digital technology.

Another example was the acceptance in 1972 of the Double-Ended-Guillotine-Break of the primary loop piping as a credible failure. In this scenario, a section of the piping instantaneously disappears. Steel cannot fail in this manner. As usual Ted Rockwell put it best, “We can’t simulate instantaneous double ended breaks because things don’t break that way.” Designing to handle this impossible casualty imposed very severe requirements on pipe whip restraints, spray shields, sizing of Emergency Core Cooling Systems, emergency diesel start up times, etc., requirements so severe that it pushed the designers into using developmental, unrobust technology. A far more reliable approach is Leak Before Break by which the designer ensures that a stable crack will penetrate the piping before larger scale failure.

A forklift at the Idaho National Engineering Laboratory moved a small spent fuel cask from the storage pool to the hot cell. The cask had not been properly drained and some pool water was dribbled onto the blacktop along the way. Despite the fact that some characters had taken a midnight swim in such a pool in the days when I used to visit there and were none the worse for it, storage pool water is defined as a hazardous contaminant. It was deemed necessary therefore to dig up the entire path of the forklift, creating a trench two feet wide by a half mile long that was dubbed Toomer’s Creek, after the unfortunate worker whose job it was to ensure that the cask was fully drained.

The Bannock Paving Company was hired to repave the entire road. Bannock used slag from the local phosphate plants as aggregate in the blacktop, which had proved to be highly satisfactory in many of the roads in the Pocatello, Idaho area. After the job was complete, it was learned that the aggregate was naturally high in thorium, and was more radioactive than the material that had been dug up, marked with the dreaded radiation symbol, and hauled away for expensive, long-term burial.

Without these clownish safety requirements, nuclear energy would be cheap (as it is in South Korea), we could allocate the enormous sums invested in renewables elsewhere, reliance upon the Middle East would be lower, and millions of people wouldn't have choked to death from air pollution. The death toll of nuclear energy speaks for itself: it is amongst the safest sources in deaths per unit of energy generated.

Fuel is not a problem. Breeder reactors can increase the fuel available to us by roughly 50x by burning the U-238 that makes up the great majority of natural uranium. Thorium could also be burnt if we bothered to develop the technology. Fuel costs are only a small contributor to the cost of nuclear energy, so the price of uranium could be doubled to incentivize exploration. We could work out how to extract uranium from the ocean as well. The history of nuclear energy is littered with technologies that were abandoned simply because fuel was so cheap that nobody could be bothered developing them. (Molten salt reactors are one such innovation.)

The pragmatic argument against nuclear power is just that no one seems willing to fund R&D or construction of new plants. It may be that everyone is reluctant for stupid reasons, but that's the way things stand. I'm still hopeful that some form of improved, miniaturized reactors will be developed despite these obstacles, and be available if the energy situation ever becomes desperate enough for governments to come to their senses and build them.

The pragmatic argument against nuclear power is just that no one seems willing to fund R&D or construction of new plants.

Because the regulatory barriers are insurmountable.

For literally every rich country? I know that there's some amount of regulatory coordination (conspiratorial or otherwise) that goes on between different countries, but that seems like a weak argument at the world scale.

For literally every rich country?

China is building new nuclear plants.

China is not a country rich enough to afford environmentalism. (All relevant militaries have nuclear navies for that reason as well.)

That said, China (like Ukraine before them) is also a nation whose culture favors the creation of Dyatlovs; the grand irony is that cultures competent enough to not have their reactors explode in any meaningful way are... also all cultures prosperous enough to afford not building them.

If US citizens were in any position to trust their government like they were coming out of the wartime economy... well, that's why they were able to build out nuclear as rapidly as they did in the first place. Unfortunately, there's no ascendant Germany (or China, for that matter) to salvage that depression in trust this time.

And if everyone here is pro-nuclear, why is that? Are mottizens just more rational than everyone else, or is it because of chronic contrarianism?

Remember, The Motte is an intellectual offshoot of a movement that finds torturing someone for fifty years to prevent 3^^^3 people from getting dust in their eyes a compelling moral dilemma. Ask the average Jane on the street. She'll choose dust and not even think about it.

We're in a filter bubble that selected for people inclined to considerations like "the realistic alternative to a new nuclear plant now is coal, and coal emissions will cause Y QALYs lost to cancer, compared to Z risk of a nuclear accident causing a few thousand deaths and making a 30 mi by 30 mi area around the plant uninhabitable for ~3000 years". Even doing these calculations mentally and filling in made-up statistics, I think the case for nuclear ends up looking rosy.
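Filling in that mental calculation with commonly cited ballpark figures (the deaths-per-TWh rates and plant parameters below are my own illustrative assumptions, not sourced numbers):

```python
# Back-of-envelope version of the coal-vs-nuclear comparison above.
# Deaths-per-TWh figures are commonly cited ballpark estimates (including air
# pollution for coal); treat them as illustrative, not authoritative.
deaths_per_twh = {"coal": 25.0, "nuclear": 0.03}

plant_gw = 1.0
capacity_factor = 0.9
years = 40
twh = plant_gw * capacity_factor * 8760 * years / 1000  # lifetime output in TWh

for source, rate in deaths_per_twh.items():
    print(f"{source}: ~{rate * twh:,.0f} expected deaths over {twh:.0f} TWh")
# -> on these numbers, one coal plant's lifetime output costs thousands of
#    lives; the nuclear equivalent costs on the order of ten.
```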

Most people, even most pro-nuclear people, do not think this way.

OK, I'll bite. I'm not anti-nuclear, but hardly pro either. 20 years ago I was enthusiastic, but overall now I think that nuclear has only a modest role to play.

Nuclear has never been particularly economically attractive -- successful programs have needed to be subsidized by states for national security reasons. The predictable costs are huge and mostly occur before the plants even come online. The unpredictable costs of accidents, attacks, and proliferation are really hard to value, and require large states or as-yet-imperfect international control systems. The technologies needed for nuclear to perform at its best (small, safe thorium reactors and the associated reprocessing networks) aren't yet developed. Overall, nuclear wins only if you want to build a power source 15 years from now, to deliver stably priced energy in a stable environment for the next 70 years.

But that's not what we want. We want power sources that can be built in 1 year and are priced for a lifetime of 20. We need technologies that can be deployed at a local scale and are immune to political disruption.

But that's not what we want. We want power sources that can be built in 1 year and are priced for a lifetime of 20. We need technologies that can be deployed at a local scale and are immune to political disruption.

I'm sorry, but unicorns do not exist.

Many renewables, and some fossil fuels if you ignore fuel supply, satisfy those requirements. Pretty much everything meets them better than nuclear.

None of those are immune to political disruption, you can't ignore fuel supply, and very little grid scale anything can be built in 1 year.

Solar photovoltaic, for example, doesn't need international, state, or even regional stability to function, which nuclear does. Solar installations are trivial to build -- even the largest installation, Noor Abu Dhabi, only took 23 months. Onshore wind is similar. Natural gas, AFAIK, has become very quick to construct, though the political coordination required extends to the extraction area, as you say. The point isn't the precise details -- nothing's perfect -- but rather the huge difference in time and risk profiles compared with nuclear.

Solar PV also has that irritating issue of only working in the daytime; "working all the time" is one of those criteria left off your list. And they're not hard to build engineering-wise but permitting/environmental requirements mean they're still politically hard to build unless you're an Arab prince in your own country.

Wind and natural gas have worse political problems than solar.

Certainly they fit the particular criteria you mentioned better than nuclear. But I don't see why those should be the overriding criteria, especially if your preferred sources don't meet them either.

It makes no sense to pump water uphill with excess power and then run it back downhill, because the act of elevating the water also consumes energy.


You use it like a rechargeable battery. The power grid is constantly having to deal with balancing the electricity produced vs the demand for electricity and this could provide a solution to that.

The goal here is storing energy for use later at a net loss, not harvesting energy for "free" like a dammed river reservoir that sometimes gets rained on. Inefficiency is acceptable, although to be minimized.

Pumped storage has been a thing for almost a century now. I learned about it in science class as a child.

good for you


You do understand that energy demands fluctuate throughout the day, and that spending energy when it's available and cheap in order to have it when it's in demand and expensive is important? I was taught that as a child.

I learned algebra as a child. So what? People learn different things at different ages.


Grid load balancing is pretty fundamental to energy policy. Even ordinary people are bombarded by messaging telling them not to run appliances in the middle of the day.

?

Where? I have never been told that once in my life*, let alone been bombarded with it.

*not even as a child, since that's apparently important.

I've never heard this message directly, but I recall being told of the message in reference to preventing blackouts in other parts of the USA than where I live, I think California during the summer (air conditioning would stress the grid). More generally, during the past 10 years, with electric vehicles being big in the public consciousness, I recall conversations about how EVs wouldn't stress the power grid much, explicitly because they can be charged at night for the day's use, and the energy needs at night tend to be much lower than during the day (since people tend to sleep, and sleeping people tend to use less electricity), thus providing slack in the energy grid which is meant to function during much higher load times in the day. I've also heard it in reference to renewable energy and its advantages and disadvantages, such as how solar energy tends to produce energy when it's sunny which also tends to correlate with when there are greater energy needs, versus wind energy which tends to produce energy when it's windy, which for whatever reason tends to correlate with night when there are lesser energy needs.

Perhaps it's overselling it to say laymen are "bombarded" with such messaging, particularly specifically about not running one's own appliances during the day, but from my experience, the importance of load balancing for keeping the power grid functioning well is pretty well emphasized in the public messaging.

This was low effort and antagonistic. If you were a new user it would just be a warning, but you have been around long enough to know the rules. 1 day ban.

boo -- parent was also antagonistic and condescending. I don't personally care for bans in this situation, but if you're going to mod that comment you should be doing the whole chain while you're at it.

boo -- parent was also antagonistic and condescending. I don't personally care for bans in this situation, but if you're going to mod that comment you should be doing the whole chain while you're at it.

Good point. In isolation the comment looked not good, but not bad. But the followup comment made it clear they were trying to throw an insult.

Pumped storage has been a thing for almost a century now. I learned about it in science class as a child.

@Hyperion this comment of yours was not very good. The first sentence seems fine, but maybe lacking context for why you are bringing up the point. The second sentence could come across as antagonistic.

You do understand that energy demands fluctuate throughout the day, and that spending energy when it's available and cheap in order to have it when it's in demand and expensive is important? I was taught that as a child.

Your followup comment made it clear you were comparing greyenlightenment's knowledge to that of a child, which is antagonistic. One day ban for you as well, since you have also been here long enough to know better.

TYFYS

So would any other form of storage. The question is how much.

Personally, I doubt that it's worth the cost to use this technique. But scaling up renewables would require some sort of buffer.

Demand for electricity varies throughout the day. People are willing to pay more for a kWh during rush hour than at 2 AM.

For example, suppose the 2 AM discount is 50%. That means that if you can spend 1.5 kWh at 2 AM to get 1 kWh at rush hour, you will earn 33.3% more than if you had just sold that energy at 2 AM.
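Spelling out the same arithmetic as a quick sketch (prices in arbitrary units):

```python
# The arbitrage arithmetic from the paragraph above, spelled out.
peak_price = 1.0            # arbitrary price unit per kWh at rush hour
off_peak_price = 0.5        # 50% discount at 2 AM
kwh_in, kwh_out = 1.5, 1.0  # spend 1.5 kWh pumping to recover 1 kWh later

sell_now = kwh_in * off_peak_price       # just sell the 1.5 kWh at 2 AM
store_and_sell = kwh_out * peak_price    # pump it uphill, sell 1 kWh at peak

print(f"Sell at 2 AM: {sell_now:.2f}, store and sell at peak: {store_and_sell:.2f}")
print(f"Gain: {store_and_sell / sell_now - 1:.1%}")   # -> 33.3%
```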

I think thermal energy storage in tanks of molten salt or other high heat capacity fluid makes a lot more sense than a complicated hydraulic setup and would also have a much smaller footprint. A pump system would have many more potential failure points and intermediate losses and be vulnerable to changing weather conditions (if you aren't building a massive dome over it like some lunar colony).

(How do we do the fancy quotes with user, timestamp, and maybe a link? It'd be useful here.)

Like on old-style forums? That functionality doesn't exist here, I think, you'd have to do it the hard way.

And yeah, I don't imagine there's really that hard a limit on potential pumping sites and all that, but as you say, it's kind of stupid chasing after this when we have literal Cold War-era technology for delivering gobs of power.

This would be more appropriate for the Friday Fun Thread.

PSA: Reminder that we have a semi-official podcast called The Bailey. Please message me here or email me at ymeskhout@gmail.com if there are any topics you'd like to discuss. Especially so if it's something I've already covered that you disagree with me strongly on, I imagine there would be significant interest in putting me in my place! If it's a topic I'm unfamiliar with, it would be helpful to also suggest discussion compatriots you think would be a good fit.

Is this where you meant to reply?

It certainly didn't end up where I meant for it to go.

What now? Delete?

That, or maybe just link to wherever you intended it.

Inferential Distance part 2 of ?: Minsky's Marvelous Minutia, or why I'm bearish on GPT

This post is a continuation of / follow-up to my post on inferential distance from a month ago, inspired by the recent discussions of GPT-4 and @ymeskhout's comments on prosecutorial immunity. I also feel like this might end up turning into a series, hence the "part 2" and the question mark.

Two things came up in that previous conversation: a) the apparent differences between thing-manipulators and symbol-manipulators, that is, people whose jobs, hobbies, and day-to-day lives revolve around manipulating objects versus those whose lives revolve around manipulating symbols/feelings; and b) the question of what constitutes a "hard" social problem, and how/why thing-manipulators and symbol-manipulators seem to have such wildly diverging opinions on that question.

For a bit of context, my degree is in math, but entering the field as I did, later in life having already spent 12 years in another career, I tended towards the more applied/practical side of the discipline. This tendency seemed to put me at odds with a lot of my instructors and fellow students, especially the "nerdier" sort; that is, those who were "nerdy" even by the relatively high standards of nerdiness expected from someone pursuing an advanced degree in mathematics. For whatever reason, showing an interest in applications was kind of looked down upon. To be fair, I did understand where they were coming from. From a young age we're trained to admire the brilliance of guys like Pythagoras, Leibniz, Newton, Euler, Kepler, Einstein, et al. After all, why does anyone even bother to study math if not to follow in those men's footsteps and unlock the grand fundamental truths of the universe? In contrast, while the principles of kinematics, control laws, and signal processing may be mathematically intensive, they also come across as very pedestrian. Pure math guys seem to regard them with a sort of casual disdain, the sort of thing you delegate to unpaid interns and teachers' assistants. Meanwhile, the truth is you can build yourself a pretty good career working on control laws and signal processing, just not in academia.

This brings us to the question of what constitutes a hard problem. If you spend enough time working in robotics or signal-processing, you'll eventually come across Moravec's Paradox. The paradox is best summed up by this xkcd comic from 2014, specifically the alt-text which reads...

In the 60s, Marvin Minsky assigned a couple of undergrads to spend the summer programming a computer to use a camera to identify objects in a scene. He figured they'd have the problem solved by the end of the summer. Half a century later, we're still working on it.

...the "paradox" being that many functions we consider baseline, and accordingly take for granted, are in fact extremely complex and computationally intensive, whereas much of what we might label "higher reason" is actually quite simple and requires very little in terms of memory or processing power.

It turns out that it's relatively easy to teach a computer to play chess better than a human or to come up with mathematical proofs that are both novel and correct. And yet, after 60 years, despite the truly massive advances in both hardware and software represented by projects like Stable Diffusion, Minsky's problem remains far from solved. In practice, you can pretty much draw a straight line from how simple a task seems, and how early it appears in the evolutionary environment, to how hard it will be to replicate. Playing chess is easy; bipedal locomotion is difficult. Bipedal locomotion only seems easy to creatures like you and me because we've been doing it since we were two years old, and our ancestors spent millions of years refining the techniques and bio-mechanics that were bequeathed to us as infants.

What does this have to do with anything? My answer is that I feel like a recognition/understanding of Moravec's Paradox is one of the major components of the inferential distance between myself and most others, both in the rationalist movement and in academia. It is why I am reflexively skeptical of grand unified social/political theories, and it is also why I remain deeply skeptical of GPT and the oncoming AI apocalypse it allegedly represents.

One claim you'll see guys like Eliezer Yudkowsky, Bryan Caplan, and posters here on TheMotte make on a semi-regular basis is that "GPT knows how to play chess". But if you press them on the topic, or actually look at chess games that GPT has played, it becomes readily apparent that GPT makes a lot of stupid and occasionally outright illegal moves (e.g. moving rooks diagonally, attacking its own pieces, etc.). What this demonstrates is that GPT does not "know how to play chess" at all. At least not in the same sense that Deep Blue or my 9-year-old can be described as "knowing how to play chess", or AlphaGo can be described as "knowing how to play Go".

Furthermore, once you start digging into their inner workings, this lack of "knowing" appears to be a fundamental weakness of the Large Language Model architecture. At the end of the day it's still just a regression calculating the next most plausible word (or, in the case of GPT-4, string of words) based on the correlations found in its training data. Granted, GPT-4 is certainly a step up from GPT-3 in terms of being able to pass as human. The shift towards correlating longer statements rather than individual words seems to have plastered over a lot of the jarring discontinuities that made GPT-3-generated posts so easy to pick out. GPT-4, in contrast, can actually kind of pass for human from the proverbial 50 ft away; unlike prior GPT iterations, identifying it actually requires a level of careful reading or some sort of interaction.
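A toy sketch of the point; the probability table below is entirely made up and only mimics the shape of next-token sampling, but it shows that nothing in the loop encodes the rules of chess:

```python
# Toy illustration of why pure next-token prediction can emit illegal chess
# moves: nothing in the sampling loop knows the rules. The "model" here is a
# hard-coded, made-up probability table, not a real LLM.
import random

random.seed(0)
fake_next_token_probs = {
    "1.": {"e4": 0.6, "d4": 0.3, "Rh5": 0.1},    # Rh5 is illegal on move 1
    "e4": {"e5": 0.7, "c5": 0.25, "Ke4": 0.05},  # Ke4 is illegal too
}

def sample_next(context):
    """Pick a continuation by probability alone; no rules engine is consulted."""
    dist = fake_next_token_probs[context]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

move = sample_next("1.")
reply = sample_next(move) if move in fake_next_token_probs else "??"
print("1.", move, reply)
# Plausible-looking text falls out either way; legality would have to be
# enforced by an external check that the model itself does not contain.
```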

Eugene Volokh's posts on Large Libel Models probably deserve a discussion of their own, but IANAL and I'm not really interested in questions of liability. In any case, he ends up running into the same issue with GPT that I did. Users here talk about instances of GPT "lying" or "hallucinating" and how to reduce the frequency of such instances, but the conversations inevitably devolve into self-referential nonsense because neither of these terms really describes what is actually happening. In order to "hallucinate" one must first be able to perceive; in order to "lie" one must first understand the difference between true and false; and GPT possesses neither. The simple fact is, ask GPT for five examples of prosecutorial misconduct complete with citations and newspaper quotes, and it will provide the names of five prosecutors, their alleged crimes, some juicy quotes, and supposed case numbers. However, while the names provided might actually be real prosecutors, and the media outlet quoted might be a real outlet, if you actually look up the court records or try to find the quotes you're going to come up short, because the example was not something pulled out of memory and provided; it was "generated" from the prompt in exactly the manner a Large Language Model is designed to do.

to be continued...

edit: fixed link

...and now the conclusion

From the outside it might seem like a straightforward fix to just add a line to the prompt that says "only provide real quotes / true statements". But implementing such a function requires imbuing GPT with an understanding of the difference between "true" and "false" and between "real" and "imagined". That is a real hum-dinger of a problem. It is such a hum-dinger of a problem that there is an entire branch of philosophy devoted to discussing it, namely epistemology. As simple and foundational to computer science as the concept of a boolean might be, this is not a problem I see getting solved anytime soon.

Accordingly, when I see some AI-doomer post about how GPT-4 has passed the bar exam in some state or gotten an A on Bryan Caplan's mid-term economics exam, my first thought is not "oh shit, here comes the fast take-off". It's more "and just how diligent were the people grading the papers?". In one of those threads the topic of grading on a curve came up, and the question was asked: why should we ask professors to go through the effort of calibrating tests to the material when it is so much simpler/easier/more efficient to ask a spread of arbitrarily difficult questions and award the top x% of answers 'A's? I ended up biting my tongue at the time because my knee-jerk response was something to the effect of "because that's fucking retarded and ultimately defeats the purpose of even administering a test in the first place". But upon a moment's reflection I realized that was a very "thing-manipulator" thought to have.

Thus we come back to the issue of inferential distance. I struggle to articulate just how brain-meltingly stupid and arbitrary the whole concept of "grading on a curve" seems to me. But I also recognize that grading on a curve is a widely accepted practice. From this I infer that my concept of a test and its purpose is wildly different from that of Bryan Caplan and a lot of other users here on TheMotte.

Perhaps this is my "thing-manipulator"-ness talking, but it seems intuitively obvious to me that if a teacher or professor is grading on a curve, they are not grading you on your capability or knowledge of the subject. And if they are not grading you on your capability or knowledge of the subject, what are they grading you on? It seems to me that if a teacher and their students are on their game, it should be possible for 100% of a class to earn a 100% grade, just as, if manufacturing is truly on the ball, it should be possible to achieve a 100% pass rate from the QA department. Granted, this never actually happens in the real world because life is imperfect, but it's something to strive for, isn't it? A man might just find himself a member of the '72 Dolphins.
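A toy sketch of the two grading philosophies being contrasted here (scores and cutoffs are made up):

```python
# Toy contrast between criterion-referenced grading ("did you clear the bar?")
# and curve grading ("are you in the top x%?"). Scores are made up.
scores = [95, 92, 88, 71, 64, 55, 40]

def criterion_grade(score, cutoff=90):
    return "A" if score >= cutoff else "not A"

def curve_grade(score, all_scores, top_fraction=0.3):
    rank = sorted(all_scores, reverse=True).index(score)
    return "A" if rank < round(top_fraction * len(all_scores)) else "not A"

for s in scores:
    print(s, criterion_grade(s), curve_grade(s, scores))
# Under the criterion everyone can, in principle, earn an A; under the curve a
# fixed share gets an A no matter how well or badly the class actually did.
```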

What is the purpose of a test or inspection in the first place if not to verify capability?

Ironically, I think the real existential threat posed by GPT is not to humanity but to humanities professors. I would argue that if Caplan had been grading his students on their knowledge and understanding of the material (as he ought to have been from the outset) he wouldn't have found himself in this pickle. That GPT-4 got an A on Caplan's mid-term is not evidence that GPT-4 understands economics or history; it's evidence that Caplan does not understand his role as an educator. GPT is essentially the perfect Post-Modernist, and in so being it is exposing post-modernism and the entire edifice of contemporary academia as the farce it has always been.

The cynical bastard in me suspects that the reason "Symbol-Manipulators" seem to be so freaked out about GPT is that it represents a fully-automated-luxury-gay-reductio-ad-absurdum of everything that they think they know.

This post explains the source of much of my skepticism of AI better than I could. But the idea of LLMs as ultimate postmodernists insofar as they are masters of language and nothing else is a key insight that I'm mad I didn't think of first.

Of course this is no accident, since the very idea is a sophisticated generalization of Markov chains, which were famously great at generating pomospeak.
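For anyone who hasn't played with one, a minimal sketch of such a Markov chain generator (toy corpus, toy output):

```python
# Minimal bigram Markov chain text generator of the kind alluded to above
# (think "postmodernism generator"). Corpus and output are toy examples.
import random
from collections import defaultdict

corpus = ("the discourse destabilizes the subject the subject reinscribes "
          "the narrative the narrative destabilizes the discourse").split()

chain = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    chain[current_word].append(next_word)

random.seed(1)
word, output = "the", ["the"]
for _ in range(12):
    word = random.choice(chain[word]) if chain[word] else random.choice(corpus)
    output.append(word)

print(" ".join(output))
# Fluent-sounding, grammatical-ish, and entirely meaningless -- the family
# resemblance to LLM "hallucination" is the point being made above.
```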

But it is getting to the level where it might have practical utility now.

Provided nobody finds an unfavorable equilibrium in the AI detector arms race, or at least none that would also let human nonsense through, this tool could be the final solution to the problem of credentialism.

Why indeed listen to the academic if you can replicate him with AI well enough that you could get his diploma without needing but to press a button? And then we can merrily go back to judging shamans through the only metric that matters ultimately: whether the hexes work or not.

...and that's a bingo. (Insert your preferred Christoph Waltz meme as you see fit.) ;-)

Perhaps this is my "thing-manipulator"-ness talking, but it seems intuitively obvious to me that if a teacher or professor is grading on a curve, they are not grading you on your capability or knowledge of the subject.

And if you look at the actual content of the test, you will note that you are entirely correct that it's not a test of "how well have you internalized the principles of economics", it's a test of "do you agree with Bryan Caplan's politics".

Some of the questions are weirdly worded too, like this one:

T, F, and Explain: Evidence of severe credit market imperfections will make you more eager to continue your education, but evidence of severe externalities of education will not.

He means asymmetric information, or a default structure that benefits the borrower and thus incentivizes higher education, and it's plausible he is right. But it's more like an article of faith than something rigorous.

"Do you agree with [professor]" is the subject of every university-level exam.

Oh come now.

I had a philosophy class in undergrad with a professor who had published extensively on a particular topic in philosophy of mind, and he was quite proud to be known as one of the leading experts in his particular sub-field. For my final paper in the class I ended up disagreeing extensively with his views - and he let me know this by writing a thorough rebuttal for almost every paragraph in my paper - but he still gave me an A because he thought my paper was well argued.

Ironically, I think the real existential threat posed by GPT is not to humanity but to humanities professors. I would argue that if Caplan had been grading his students on their knowledge and understanding of the material (as he ought to have been from the outset) he wouldn't have found himself in this pickle. That GPT-4 got an A on Caplan's mid-term is not evidence that GPT-4 understands economics or history; it's evidence that Caplan does not understand his role as an educator. GPT is essentially the perfect Post-Modernist, and in so being it is exposing post-modernism and the entire edifice of contemporary academia as the farce it has always been.

I don't think it's possible to conclude this. GPT can pass a simple math test, for example; does this mean a math teacher does not understand his role?

Accordingly, when I see some AI-doomer post about how GPT-4 has passed the bar exam in some state or gotten an A on Bryan Caplan's mid-term economics exam, my first thought is not "oh shit, here comes the fast take-off". It's more "and just how diligent were the people grading the papers?". In one of those threads the topic of grading on a curve came up, and the question was asked: why should we ask professors to go through the effort of calibrating tests to the material when it is so much simpler/easier/more efficient to ask a spread of arbitrarily difficult questions and award the top x% of answers 'A's? I ended up biting my tongue at the time because my knee-jerk response was something to the effect of "because that's fucking retarded and ultimately defeats the purpose of even administering a test in the first place". But upon a moment's reflection I realized that was a very "thing-manipulator" thought to have.

Thus we come back to the issue of inferential distance. I struggle to articulate just how brain-meltingly stupid and arbitrary the whole concept of "grading on a curve" seems to me. But I also recognize that grading on a curve is a widely accepted practice. From this I infer that my concept of a test and its purpose is wildly different from that of Bryan Caplan and a lot of other users here on TheMotte.

Perhaps this is my "thing-manipulator"-ness talking, but it seems intuitively obvious to me that if a teacher or professor is grading on a curve, they are not grading you on your capability or knowledge of the subject. And if they are not grading you on your capability or knowledge of the subject, what are they grading you on? It seems to me that if a teacher and their students are on their game, it should be possible for 100% of a class to earn a 100% grade, just as, if manufacturing is truly on the ball, it should be possible to achieve a 100% pass rate from the QA department. Granted, this never actually happens in the real world because life is imperfect, but it's something to strive for, isn't it? A man might just find himself a member of the '72 Dolphins.

What is the purpose of a test or inspection in the first place if not to verify capability?
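
To make the contrast concrete, here's a minimal sketch of the two schemes in Python (the names, scores, and thresholds are invented purely for illustration): criterion-referenced grading, where everyone who clears the bar earns the grade, versus curve grading, where only the top x% do regardless of how well everyone performed.

    # Hypothetical scores out of 100; names, numbers, and thresholds are made up for illustration.
    scores = {"Alice": 97, "Bob": 93, "Carol": 91, "Dave": 90, "Eve": 88}

    # Criterion-referenced grading: everyone who demonstrates mastery (>= 90) earns an A.
    criterion_As = [name for name, s in scores.items() if s >= 90]

    # Curve grading: only the top 20% earn an A, no matter how well the rest performed.
    ranked = sorted(scores, key=scores.get, reverse=True)
    cutoff = max(1, round(0.20 * len(scores)))
    curve_As = ranked[:cutoff]

    print(criterion_As)  # ['Alice', 'Bob', 'Carol', 'Dave'] -- four students cleared the bar
    print(curve_As)      # ['Alice'] -- the curve hands out exactly one A

Under the curve, whether Dave gets an A depends on what Alice scored, not on what Dave knows, which is exactly the property that strikes me as defeating the purpose of the test.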

That was me. And this complaint would make a lot more sense if education were training people to actually do the tasks they perform at their jobs. But it doesn't. Calc tests and econ tests are just academic-themed IQ tests, because nobody actually uses calc or econ in their jobs except for a tiny minority, and that minority could be easily trained on the job instead of outsourcing the task to a 4-year institution of higher education that charges tens of thousands of dollars for the privilege. Employers are using degrees and GPAs to select the top x% of employees, not to verify that the student has achieved an objective standard of excellence in subject y (otherwise, they would use nationwide standardized tests instead of whatever crap the professor came up with, in which case it WOULD make sense to effortfully calibrate objective passing grades against the material).

From "The Magic of Education" by Bryan Caplan:

Why do employers care about grades and diplomas? The “obvious” story, to most people, is that professors teach their students skills they’ll eventually use on the job. Low grades, no diploma, few skills.

This story isn’t entirely wrong; literacy and numeracy are a big deal. But the “obvious” story is far from complete. Think about all the time students spend studying history, art, music, foreign languages, poetry, and mathematical proofs. What you learn in most classes is, in all honesty, useless in the vast majority of occupations. This is hardly surprising when you remember how little professors like me know about the Real World. How can I possibly improve my students’ ability to do a vast array of jobs that I don’t know how to do myself? It would be nothing short of magic. I’d have to be Merlin, Gandalf, or Dumbledore to complete the ritual:

Step 1: I open my mouth and talk about academic topics like externalities of population, or the effect of education on policy preferences.

Step 2: The students learn the material.

Step 3: Magic.

Step 4: My students become slightly better bankers, salesmen, managers, etc.

Yes, I can train graduate students to become professors. No magic there; I’m teaching them the one job I know. But what about my thousands of students who won’t become economics professors? I can’t teach what I don’t know, and I don’t know how to do the jobs they’re going to have. Few professors do.

Many educators soothe their consciences by insisting that “I teach my students how to think, not what to think.” But this platitude goes against a hundred years of educational psychology. Education is very narrow; students learn the material you specifically teach them… if you’re lucky.

Accordingly, when I see some AI-doomer post about how GPT-4 has passed the bar exam in some state or gotten an A on Bryan Caplan's mid-term economics exam, my first thought is not "oh shit here comes the fast take-off". It's more "and just how diligent were the people grading the papers being?".

Caplan had a very strong incentive to fail the AI. He publicly bet against AI passing his exams a few years back. He has a very long and unbroken streak of victorious bets, and it looks like this one is the first one that he will actually lose.

The obvious counter-observation is that Caplan having an incentive to fail the AI is orthogonal to whether Caplan's exam actually measures a student's understanding of economics.

Ironically, I think the real existential threat posed by GPT is not to humanity but to humanities professors.

Why can the humanities never catch a break?

TheMotte is a humanities discussion forum. We take questions from the humanities (mainly politics and philosophy) and discuss them using methods of analysis that are typical of the humanities. Anyone who posts here shows a revealed preference for humanistic thinking and, plainly, symbol manipulation as well.

What you have written here, in this very post, is a humanities essay. You have engaged in armchair psychological speculation about the motivations and thought processes of a vast and heterogeneous group of people while adducing very little empirical evidence to support your claims. You haven't defined a precise hypothesis (how could we turn "GPT-4 is exposing post-modernism and the entire edifice of contemporary academia as the farce it's always been" into an empirically falsifiable hypothesis?), collected a sufficiently large sample, done a rigorous statistical analysis, or anything else that might move your speculations out of the realm of "mere humanities" and into the vaunted halls of science. This isn't meant to be a criticism of your post, of course. I too prefer to do all my thinking from the armchair - it's a very comfy place to be. We should just recognize it for what it is, is all. You should reflect on whether what you're doing here - what any of us are doing here - is really any different from what the "postmodernist symbol manipulators" are doing.

The part you're quoting says GPT would be a threat to humanities professors, not to the humanities. I'd wager that this would be a net benefit to the humanities and would actually constitute a meaningful form of the humanities catching a break.

The part you're quoting says GPT would be a threat to humanities professors, not to the humanities.

I don't think he was making such a fine-grained distinction.

The general thrust of his post was to set himself as a thing-manipulator apart from the symbol-manipulators. But the type of thinking on display in his post was precisely an example of the type of symbol-manipulation that he was deriding. I'll let him decide if he thinks this is a fair reading of his post or not.

I'd wager that this would be a net benefit to the humanities and would actually constitute a meaningful form of the humanities catching a break.

I won't speculate about what impacts GPT will or won't have on any aspect of the current university system.

In general, I don't share the instinctive hatred for academics that many here seem to have. Sure, a lot of them are leftists, but so what? Lots of people are leftists. If I had a meltdown every time someone was a leftist then I'd have a hard time functioning in modern society.

I enjoy reading the professional output of many humanities academics and I'd be quite happy to have them continue as they are.

In general, I don't share the instinctive hatred for academics that many here seem to have. Sure, a lot of them are leftists, but so what? Lots of people are leftists. If I had a meltdown every time someone was a leftist then I'd have a hard time functioning in modern society.

I don't think any sort of "hatred" people here have towards academics is "instinctive," and characterizing it as such is highly uncharitable. I think it's a learned antipathy based on observations and conscious analysis, and that this has very little to do with them being leftists. At best, the antipathy seems to be due to something that is upstream from them being leftists, i.e. the same sort of social/cultural forces that lead these academics to having sloppy thinking also lead to them being leftists.

I enjoy reading the professional output of many humanities academics and I'd be quite happy to have them continue as they are.

I'd wager that the humanities academics that produce professional output that are worth reading are ones who will be most resistant to replacement by GPT and the like. Whether they're completely resistant is an open question, I admit, but for the foreseeable future, I don't think there's much to worry about.

At best, the antipathy seems to be due to something that is upstream from them being leftists, i.e. the same sort of social/cultural forces that lead these academics to having sloppy thinking also lead to them being leftists.

Can you elaborate? Do you have any examples of this sort of "sloppy thinking" in mind?

I don't have any specific examples off the top of my head, but I'm thinking of the (I'm guessing largely unconscious and unintentional) peer pressure within academia's social/cultural spheres pushing people into adopting sloppy thinking, in the form of being against rationality/logic/empiricism in favor of taking the word of people one is predisposed to like. The peer pressure obviously takes many forms, but I'd guess it's mostly a matter of who is given higher social status versus who isn't, based on what sorts of opinions one espouses publicly, though some of it is certainly open, intentional, directed bullying. These forces also tend to push people towards adopting (certain) leftist views.

I think he would not disagree. I ran Hlynka's text through Bing Chat and this was its summary regarding the humanities professor part:

The author is suggesting that GPT-4 and humanities professors have similar goals and methods, and that they are both inferior to thing-manipulators who can challenge their claims with facts and data. He is implying that GPT-4 could replace humanities professors because they are both producing meaningless or misleading content.

I asked about some counterarguments and here is what it came up with:

The author’s distinction between thing-manipulators and symbol-manipulators is too simplistic and ignores the diversity and complexity of both types of activities. Symbols are not just arbitrary or meaningless signs, but tools for communication, reasoning, and representation that can have various interpretations and functions. Symbols can also be manipulated in ways that reflect the properties and relationships of the things they represent, such as in mathematics or computer science. Thing-manipulators are not just concerned with facts and data, but also with creativity, intuition, and problem-solving. Thing-manipulation can also involve symbolic aspects, such as using models, diagrams, or equations to describe or analyze physical phenomena. Both thing-manipulators and symbol-manipulators can benefit from each other’s perspectives and methods, and collaborate to solve hard social problems that require both empirical and conceptual approaches. [1][2][3]

I then asked it to tailor an argument in favor of humanities professors. It searched for "value of humanities professors" on google and incorporated it into the context:

Humanities professors are not just producing symbols and feelings that appeal to a certain audience, but engaging with the world in important ways. They are exploring the diversity and complexity of human cultures, values, and experiences, and contributing to the advancement of knowledge, creativity, and critical thinking. They are also addressing the ethical, social, and political challenges that face humanity in the present and the future, and fostering dialogue and understanding among different perspectives and communities. Humanities professors are not disconnected from reality, but rather enriching it with their insights and interpretations. [1][2]

I then asked it if all those things cannot be also done by GPT-4. It then searched for "GPT-4 limitations and capabilities" and it came up with the following:

It is possible that GPT-4 can perform some of the tasks that humanities professors do, such as writing essays, generating quotes, or passing exams. However, GPT-4 is not a substitute for humanities professors, because it still has many limitations and challenges that prevent it from achieving the same level of understanding, creativity, and ethics as human scholars. For example, GPT-4 has difficulty with:

  • Distinguishing between true and false statements, and between real and imagined scenarios [1]
  • Reasoning, planning ahead, working memory, and backtracking, due to its autoregressive architecture [2]
  • Avoiding social biases, hallucinations, and adversarial prompts that can affect its outputs [3]
  • Handling visual input, which is not yet widely available or reliable [4]

Therefore, GPT-4 is not likely to replace humanities professors anytime soon, but rather complement their work or provide new opportunities for collaboration and innovation.

So in a way we came full circle; the last part is, I think, a version of the original argument Hlynka made.

Except there is nothing in my post about humanities professors being replaced by thing-manipulators. GIGO applies.

The author is suggesting that GPT-4 and humanities professors have similar goals and methods, and that they are both inferior to thing-manipulators who can challenge their claims with facts and data. He is implying that GPT-4 could replace humanities professors because they are both producing meaningless or misleading content.

It was about the replacement of humanities professors by GPT-4 as opposed to thing-manipulators. But it also caught the tone of your thing-vs-symbol-manipulators distinction. And in that sense I completely agree about GIGO.

Not sure what the point of posting this was.

We're all quite capable of reading the post and coming to our own conclusions about it. I don't feel the need to outsource my thinking to anyone else, human or machine. I learn from other people, certainly, but I don't let them do my thinking for me. The purpose of the act of thinking is to determine what I think about something. Not to determine what someone else thinks.

"If men create intelligent machines, or fantasize about them, it is either because they secretly despair of their own intelligence or because they are in danger of succumbing to the weight of a monstrous and useless intelligence which they seek to exorcise by transferring it to machines, where they can play with it and make fun of it. By entrusting this burdensome intelligence to machines we are released from any responsibility to knowledge, much as entrusting power to politicians allows us to disdain any aspiration of our own to power. If men dream of machines that are unique, that are endowed with genius, it is because they despair of their own uniqueness, or because they prefer to do without it - to enjoy it by proxy, so to speak, thanks to machines. What such machines offer is the spectacle of thought, and in manipulating them people devote themselves more to the spectacle of thought than to thought itself. It is not for nothing that they are described as 'virtual', for they put thought on hold indefinitely, tying its emergence to the achievement of a complete knowledge. The act of thinking itself is thus put off forever. Indeed, the question of thought can no more be raised than the question of the freedom of future generations, who will pass through life as we travel through the air, strapped into their seats. These Men of Artificial Intelligence will traverse their own mental space bound hand and foot to their computers. Immobile in front of his computer, Virtual Man makes love via the screen and gives lessons by means of the teleconference. He is a physical - and no doubt also a mental cripple. That is the price he pays for being operational. Just as eyeglasses and contact lenses will arguably one day evolve into implanted prostheses for a species that has lost its sight, it is similarly to be feared that artificial intelligence and the hardware that supports it will become a mental prosthesis for a species without the capacity for thought. Artificial intelligence is devoid of intelligence because it is devoid of artifice."

-- Jean Baudrillard, The Transparency of Evil: Essays in Extreme Phenomena

The point of my exercise was that Bing Chat was able to understand Hlynka's text and produce a defense of humanities professors by actually improving on the original arguments Hlynka made. It produced the same true/false argument, but it also provided a description of LLM shortcomings in a more technical manner, speaking about hallucinations or adversarial prompts.

So in that sense it was Hlynka's text that seemed more pomo compared to what GPT produced, which I think is quite an interesting thing to observe. In the end I think that, at minimum, the GPT + human pair will outperform the solo player in the near future - at least in the sense that the human should know in which domains to completely trust GPT over his own intuition.

The problem is that its defense of humanities professors was exactly the sort of meaningless pastiche that you would expect if it were a pure symbol manipulator. Now you could argue that it sounds very much like the real arguments that would come out of the mouths of real humanities professors. But that just means Hlynka wins on both sides.

Ah man it really is like talking to an academic bullshitter. Continuum fallacies and all.

It even brings in random definitions of things that have no connection to the underlying argument just to make the aggrieved party sound more important.

All the tactics, none of the substance.

The hopelessly ironic part is that it seems to be arguing that humanities professors can distinguish between true and false and avoid social biases, having been trained on their writings.

One has seldom produced such a clear example of the self-refuting nature of the post-modern condition.

The hopelessly ironic part is that it seems to be arguing that humanities professors can distinguish between true and false and avoid social biases, having been trained on their writings.

It is arguing in favor of humanities professors because I told it to argue that position. It researched that GPT may have trouble discerning true and false statements, and it argued that humanities professors have that capacity. It implicitly asserted that argument, but Hlynka asserts without proof that humanities professors are pomo text generators. And unlike Hlynka, GPT at least provided links for its statements, it used some jargon like autoregressive architecture, and in general it repeated Hlynka's original argument about the deficiencies of GPT, only better. I think that it also correctly pointed out that this whole thing vs symbol manipulator distinction is a lot more complicated.

I think that it also correctly pointed out that this whole thing vs symbol manipulator distinction is a lot more complicated.

While I instinctively believe things are more complicated than Hlynka's distinction, I became less and less convinced of this the more I waded through Bing's verbiage on the matter.

So in a way we came full circle; the last part is, I think, a version of the original argument Hlynka made.

Not really, no, and to be blunt my initial reaction reading your "exchange" largely mirrors @IGI-111's below: it really does read like a student who hasn't read the material trying to bluff their way past the teacher.

Volokh's essay and the subsequent discussion on his blog go into this far more deeply than I could, but what GPT and other LLMs seem to be actually kind of good for is condensing large bodies of text into a "cliff notes" version. For example, GPT was able to correctly identify my supposition that "GPT-4 could replace humanities professors because they are both producing meaningless or misleading content" despite my not having explicitly made that claim. What GPT/LLMs seem to be particularly bad at is answering questions.

What you have written here, in this very post, is a humanities essay.

You're absolutely correct.

But my post makes no particular claims about "the humanities" as a field, what it talks about is "humanities professors" which are not the same thing. Though perhaps the conflation of the former with the latter is yet another example of that inferential distance I've been talking about.

see @07mk's reply below.

From the outside it might seem like a straightforward fix to just add a line to the prompt that says "only provide real quotes / true statements", but implementing such a function requires imbuing GPT with an understanding of the difference between "true" and "false" and between "real" and "imagined". That is a real hum-dinger of a problem.

Being able to produce truth 100% of the time is a hard problem. Being able to avoid falsehoods in the subset of cases which are easy for a human to recognize as untrue is not such a humdinger, and the latter is what people are requesting when they ask for no hallucinations.

There's a big difference between philosophical concepts of abstract truth, and "when it names a Star Trek episode, this had better be the name of an actual episode".
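
That narrower ask is basically a membership test. A toy sketch in Python (the whitelist here is truncated and purely illustrative):

    # A tiny illustrative whitelist -- in practice you'd load the full canonical episode list.
    REAL_TOS_EPISODES = {
        "The City on the Edge of Forever",
        "The Trouble with Tribbles",
        "Mirror, Mirror",
        "Balance of Terror",
    }

    def is_real_episode(title: str) -> bool:
        # A hallucinated episode name fails a plain membership test; no theory of truth required.
        return title in REAL_TOS_EPISODES

    print(is_real_episode("The Trouble with Tribbles"))    # True
    print(is_real_episode("The Quantum Vortex of Kronos")) # False: plausible-sounding, but made up

Nothing about abstract truth is involved; it's a lookup. The hard part is getting the model to route its own claims through that kind of check.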

I don't think he was trying to say that the problem is being able to produce truth 100% of the time. The problem is understanding what it means for something to be "true" or "false" in the first place.

Same answer. We don't have a general understanding of true and false, but we know whether some things are true and false, and ChatGPT fails even on that subset of easy things that we do understand.

There's a big difference between philosophical concepts of abstract truth, and "when it names a Star Trek episode, this had better be the name of an actual episode".

Is there? It seems obvious to me that the latter is downstream of the former.

The cynical bastard in me suspects that the reason "Symbol-Manipulators" seem to be so freaked out about GPT is that it represents a fully-automated-luxury-gay-reductio-ad-absurdum of everything that they think they know.

Based on your teaser last week I was expecting a revisit of Christianity and the Western Tradition. Isn't Christianity, in your model the foundation of civilization, the ultimate creation of Symbol-Manipulators?

Isn't Christianity, in your model the foundation of civilization, the ultimate creation of Symbol-Manipulators?

No, because in my model God is real.

I’ll repeat something I said elsewhere. Tests of subject matter are usually objective tests of concepts: if you know the concepts, you pass. Passing Caplan’s economics test is domain limited — if it had trained on basic concepts it would probably do pretty well, because while the test requires knowledge of the concepts of economics and how to apply them to economics problems, it doesn’t require understanding them. Knowing how to use the Pythagorean theorem to find the shortest arm of a right triangle given a hypotenuse and the long side isn’t hard. Rearrange the equation and go.
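
Concretely, the rearrangement is just a = √(c² − b²): given a hypotenuse c = 13 and a long side b = 12, the short side is √(169 − 144) = √25 = 5. Plugging numbers in like that is pattern-matching on a formula, not understanding why the theorem holds.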

Projecting the future from a given dataset using a theory would require understanding the data and the bits that are relevant, and figuring out how the theory and the data can be used to find the answer. So consider a question like “using the complete set of all the Dune series of books and Peter Turchin’s theory of cliodynamics, project the future of the universe 30,000 years into the future”. Very little in the books would be useful in making the projections (although I suppose if you count the mention of sandworms and spice or references to a famine those would be useful). ChatGPT would have to figure that out though, as well as just how much of the action and characters are irrelevant to the project. But the process of figuring all of this out requires understanding what the books are saying and what they aren’t, and requires understanding cliodynamics on a level above “insert numbers into the equation and go.”

This is what knowing and understanding would seem to mean: not just being able to manipulate words and numbers and equations, but to do so with a sense of what the numbers or words mean. To put it in the context of the more famous Chinese Room thought experiment — in order for the person in that room to understand the questions and the answers, it’s not enough to simply pattern-match 我 with 李张; I would have to understand that I’m telling the outside world that “I” am named “Li Zhang”. If I cannot do that, if I only understand in the sense of seeing one symbol and outputting another without knowing why the correct answer to X is A and not C, I don’t understand it.

Edit: just for kicks I asked ChatGPT about the future of the dune universe:

Dune is a rich and complex universe that has captured the imaginations of generations of readers. As an AI language model, I can imagine what the Dune universe might look like 30,000 years after the end of Dune Chapterhouse, based on the information available in the books and the various adaptations.

Assuming that the long-term goals of the Bene Gesserit and their sisterhood of Mentats were achieved, the human race could have evolved into something more than we can currently imagine. They would have mastered advanced technologies and could harness the power of the Guild Navigators and the sandworms to travel across space and time. The planet of Arrakis could be transformed into something entirely different, devoid of the sandworms or even turning into a lush, forested world with oceans and rivers.

The descendants of the Atreides bloodline like Leto and Ghanima would have spread out across the galaxy, establishing themselves as new ruling dynasties. Implications of a far reaching society where humans have been colonized, terraforming planets and form all sorts of political groups and religions.

On the political front, the Imperium could have been replaced by a new system of government. Perhaps, the Butlerian Jihad against the thinking machines would have been avoided, and intelligent machines like the Ixians and the Taurid would have prospered. Alternatively, humans could have replaced the thinking machines with new technologies, and the entire social structure must have dissolved due to the rapid technological advancements.

In conclusion, the Dune universe 30,000 years after the end of Dune Chapterhouse is a realm of endless possibility, and it holds untold wonders and horrors beyond our current comprehension. As an AI language model, it is beyond me to predict the exact path the universe might have taken, but one thing is for sure: it would be a fascinating and dynamic place to behold.

(Which honestly sounds a bit like something I would come up with to try to fool the teacher into thinking I read the books. For example, Leto actually became a worm and was thus sterile; he had no natural descendants. Also, I noticed that there is no mention of specific plans by the Bene Gesserit, Mentats, etc., the Butlerian Jihad happened 10,000 years before the first book, and the Ixians aren’t machines.)

That all definitely sounds like what I would have written if I had the most vague impression of the franchise.

Which honestly sounds a bit like something I would come up with to try to fool the teacher into thinking I read the books...

...but hadn't actually read the books.

and that "hasn't actually read the book" quality is exactly what i feel like Volokh and I both latched onto, but reading through the other replies I've gotten the impression that most users here either do not understand or do not consider having "read the book" to be relevant to parameter when discussing it's contents.

Like I said in the OP, GPT is effectively the "perfect post-modernist"; it's literally "all talk" and no thought. There's no need to kill the author when the author was dead from the beginning.

What it looks like to me is that it knows the groups that should be mentioned (Mentats, the Guild, Ixians, Bene Gesserit) but it doesn’t know what they are or what they want. It’s also fairly confused about the people and the timeline. It knows that those events happened, but not the order, and apparently doesn’t know that a lot of the stuff it’s predicting to happen 30K years after Chapterhouse had already happened.

Which to me points to ChatGPT-4 being a Chinese Room. It knows vaguely to use terms like Sandworm, Mentat, and Arrakis when talking about the Dune series. It knows to talk about evolution. But it doesn’t understand what things mean; the Bene Gesserit don’t mean anything to the chatbot.

But if you press them on the topic, or actually look at chess games that GPT has played, it becomes readily apparent that GPT makes a lot of stupid and occasionally outright illegal moves (eg moving rooks diagonally, attacking its own pieces, etc...). What this demonstrates is that GPT does not "know how to play chess" at all.

Imagine a blind person, without any sense of touch or proprioception, who has only heard people talk about playing chess. They have never seen a chessboard, never picked up a rook, and the mere concept of moving pieces is completely foreign to their sensorium.

And yet, when pressed, said person is able to play mostly legal moves, all the while holding the entire state of the board in their head. Correspondence chess via Chinese telephone.

I think anyone who witnessed such a feat would be justified in being highly impressed, whereas you happen to be the equivalent of someone complaining that a talking dog isn't a big deal because it has an atrocious accent, whereas Yudkowsky et al are rightly pointing out that you can't find a better way of critiquing a talking dog! Especially a talking dog that gets ever more fluent with additional coaching, to the point that it knows more medicine than I do, understands quite complicated math, and in general does a better job of being a smart human than the average human does.

In a park people come across a man playing chess against a dog. They are astonished and say: "What a clever dog!" But the man protests: "No, no, he isn't that clever. I'm leading by three games to one!"

Do the dogs not speak wherever it is you are from?

Part of my point is that computer programs being able to play chess at or above a human level has been the norm for close to 40 years now. I would argue that the apparent inability to match that capability is a step backwards.

There's around 0 dollars to be made by making a chess bot better than Stockfish. The days of rolling them out to spank the human pros is long gone, they just get up and start running for the hills when you pull out even the kind of bot that runs on an old smartphone.

In contrast, an AI that does tasks ~80% as good as a professional can, for pretty much all tasks that involve text, is economic disruption in a tin can. (Emphasis on professionals, because it is genuinely better than the average human at most things, because the average human is an utter humpty)

Notice how I said that it's a better doctor than me? Consider how much we spend on healthcare, one of the thousands of industries about to be utterly disrupted.

In contrast, an AI that does tasks ~80% as good as a professional can, for pretty much all tasks that involve text, is economic disruption in a tin can

But the difference is still in the tails. The top 1% is where the money is made in any competitive industry. That is why top tech companies are so obsessed with talent and recruiting. That is harder to automate than the rest.

Notice how I said that it's a better doctor than me? Consider how much we spend on healthcare, one of the thousands of industries about to be utterly disrupted.

It can automate the diagnosis process based on some input of symptoms, but other parts are harder, like treatment. Same for invasive tests and biopsies. AI will disrupt it in some ways, but I don't think it will lower costs much.

I think you're adopting too much from a programming background when it comes to productivity. 10x programmers are far more common than 10x doctors or lawyers, because it isn't nearly as feasible to simply automate the gruntwork without hiring more junior docs/lawyers.

I would say that productivity in the vast majority of professions is more along the lines of the Pareto Principle, such that a 80% competent agent can capture a substantial chunk of profits.

And what exactly is so hard about treatment? An AI doctor can write drug charts and have a human nurse dispense them. Invasive tests and biopsies are still further away, but I fully believe that the workload of a modal doctor in, say, Internal Medicine can be fully automated today without any drawbacks. The primary bulwark against the tide is simply regulatory inertia and reflexive fear of such unproven advances.

If GPT hallucinates a wrong and deadly treatment, who do you sue for malpractice?

Right now? Nobody, because it's not licensed for medical use and uses massive disclaimers.

In the future when regulators catch up and it's commercially deployed and advertised to that end? Whoever ends up with the liability, most likely the institution operating it.

I see this as a comparatively minor roadblock in the first place.

Is there a good AI substitute for clinical examinations at present, or are we going to rely on patients self-examining?

I can honestly buy that in the short-medium term AI would take a better history and get differentials and suggest treatment plans better than the modal doctor. I could even buy that within that timeframe you could train AI to do the inspection parts/things like asterixis, but I don’t know how you’d get an AI to…palpate. Movement and sensation etc. are quite difficult for computers, I am to understand.

Alternatively maybe they’d just get so fucking good at the rest of it that professional examinations aren’t needed anymore, or that some examination findings can be deduced through other visual/etc means…

You'd be rather surprised at how little doctors palpate, auscultate etc. in practice. They're mostly used for screening; if there's any notable abnormality the patient gets sent straight off to imaging instead of the doctor simply relying on clinical signs, as was once common. It certainly plays a role, but with robots with touch sensors it's hardly impossible to have AI palpate; it's just a skill that's rapidly becoming outmoded.

Oh I know well how doctors don’t do the things they teach us to do in medical school! But it did seem like one thing that they can’t (that easily) but we can (relatively easily), due to it being more of a physical and tactile thing.

That said, I find that I do examine people at least a few times a day.

I agree it’s hardly impossible but I’d be surprised if it wasn’t markedly harder to train?


If nobody has made a Stockfish ChatGPT plugin yet, I am sure it is only a matter of a few days. People are impressed by ChatGPT playing kinda okayish chess without making use of external tools, despite the fact that even amateur chess players can run circles around it, for the same reason they're impressed with Usain Bolt running 100m in 9.58 seconds despite the fact that a scrawny teenager who gets out of breath when they get up off the couch could go 100m in less than half the time on a Kawasaki Ninja.
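
And the plumbing for that kind of plugin really is trivial. A rough sketch in Python, assuming the third-party python-chess package and a hypothetical ask_gpt() helper that returns a move in standard algebraic notation:

    import chess  # the third-party python-chess package

    def get_legal_gpt_move(board: chess.Board, ask_gpt) -> chess.Move:
        # Ask the model for a move and reject anything illegal, re-prompting a few times.
        for _ in range(3):
            suggestion = ask_gpt(board.fen())       # hypothetical: model returns e.g. "Nf3"
            try:
                return board.parse_san(suggestion)  # raises ValueError on illegal or garbled moves
            except ValueError:
                continue
        # Fall back to any legal move rather than letting it move a rook diagonally.
        return next(iter(board.legal_moves))

Swap the fallback for a call to an actual engine and you have the plugin; the point is that the legality checking and the playing strength live entirely outside the language model.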

It's a step in a different direction, not backwards. First people programmed computers with "play chess, like this", and because they could do it faster they eventually got better than humans at chess. Then people programmed computers with "learn to play simulatable games well", and they soon got better than humans because chess is a simulatable game, and although they also got better than the first computers it wouldn't have been a failure otherwise because the point of the exercise was the generality and the learning. Now people have programmed computers with "learn to write anything that humans might write", and yes they're still kinda crummy at most of it, but everyone's dumbfounded anyway, not because this is the way to optimize a chess engine, but because it's astounding to even find a crummy chess engine emerge via the proposition "chess play is a subset of 'anything'".

Is this a dead end? The real world isn't as amenable to simulation as chess or go, after all, and LLMs are running low on unused training data. But with "computers can learn to do some things better than humans" and "computers can learn to do practically anything" demonstrated, "computers can learn to do practically anything better than humans" should at least be imaginable at this point. Chess isn't a primary goal here, it's a benchmark. If they actually tried to make an LLM good at chess they'd easily be able to, but that would just be Goodharting themselves out of data. It will be much more interesting when the advances in GPT-5 or 6 or whenever make it a better chess player than humans incidentally.

It's the claim that "computers can learn to do practically anything" has already been demonstrated that I am calling into question.

Steve Hsu has a startup that claims to have solved the hallucination problem. AFAIK they’ve not yet described their technique, but Hsu is no confabulist.

Hsu is free to claim whatever he likes, but if we assume that OpenAI has been largely honest about how GPT works in their patent applications and research papers, it seems to me that it is unlikely that there will be any solution to the so-called "hallucination problem" outside of a complete ground-up redesign.

How so? I agree that a LLM neural net will hallucinate. But you can make a system that uses an LLM as a controller that can query trusted data sources and the problem is basically solved. The future of AI is not a pure LLM solution. It is using LLMs as part of a system.

solved the hallucination problem

This doesn’t seem like the sort of thing that can be “solved”. Human brains hallucinate all the time after all, so at a minimum, “solving” the hallucination problem would mean attaining superhuman performance in the domain of general truth-telling. But even determining what counts as a hallucination or not is tricky.

Our current understanding of physics is that traveling faster than light is impossible. Every reputable source will confirm that FTL travel is impossible. But suppose for the sake of argument that, in reality, FTL travel actually is possible; it’s just that no one knows the truth about it. So if an LLM in 2023 in this reality tells you that FTL travel is impossible, is that a “hallucination”? It’s a false statement, after all.

Or suppose that an LLM tells you that Bruce Jenner is, in fact, a woman. Is it hallucinating? Every reputable source will tell you that he’s a woman. But determining whether he actually is a woman or not depends on a complex set of underlying philosophical assumptions.

I’m not sure what their proposal for solving hallucinations could be besides “check everything you say with Google and reputable sources”. But who gets to define what counts as a reputable source, and what if the reputable sources are wrong anyway?

I’m not sure what their proposal for solving hallucinations could be besides “check everything you say with Google and reputable sources”. But who gets to define what counts as a reputable source, and what if the reputable sources are wrong anyway?

Sure, but that's a different problem and not really hallucination. Hallucination is making things up out of whole cloth, such as fabricating citations, etc...

Hallucination will be easy to solve and will be done in the next 2-3 years, high confidence. An LLM just needs an API to access a store of truth. This store of truth will be a simple database of factual information, not a neural net. For example, the database will know that Abraham Lincoln was born on February 12, 1809. When the LLM controller gets data that contradicts the store of truth, it will interrogate the trustworthiness of that data much like a human does, and if appropriate replace the data.

Just like a human, this would fail to function in the case of a coordinated attack on the integrity of data. If you're looking for an AI to tell you that one political side or the other is "correct", then that may indeed be an impossible task. But it's not related to hallucination.
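
A minimal sketch of what I mean, in Python (the facts table and the check_claim() interface are invented for illustration, not any shipping product):

    # An invented toy "store of truth" -- in practice a curated database or knowledge graph.
    FACTS = {
        ("Abraham Lincoln", "born"): "February 12, 1809",
    }

    def check_claim(subject: str, predicate: str, claimed_value: str) -> str:
        # If the store covers the claim, it overrides the model's draft; otherwise pass it through.
        known = FACTS.get((subject, predicate))
        if known is not None and known != claimed_value:
            return known
        return claimed_value

    print(check_claim("Abraham Lincoln", "born", "February 12, 1808"))  # -> "February 12, 1809"

The controller's real work is deciding which spans of its draft count as factual claims worth checking; who curates the store is a separate fight.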

To be fair, I did understand where they were coming from. From a young age we're trained to admire the brilliance of guys like Pythagoras, Leibniz, Newton, Euler, Kepler, Einstein, et al. After all, why does anyone even bother to study math if not to follow in those men's footsteps and unlock the grand fundamental truths of the universe? In contrast, while the principles of kinematics, control laws, and signal processing may be mathematically intensive, they also come across as very pedestrian.

Contemporary physics has almost always been on the frontier of math, from Newton to Einstein to Witten and beyond. Physics textbooks are harder than even a lot of pure math books. Pure math is probably easier because physics requires a full grasp of the math as well as the physics. You cannot specialize in physics the way you can in math: you need to know both quantum and macro, which involve different sets of math and theories. To be among the best at physics you cannot have weak areas in either the physics or the math.

Same for engineering, like the study of fluids and Navier-Stokes, which is again at the forefront of modern math.

In the 60s, Marvin Minsky assigned a couple of undergrads to spend the summer programming a computer to use a camera to identify objects in a scene. He figured they'd have the problem solved by the end of the summer. Half a century later, we're still working on it.

This can be done though. This is what self-driving cars have done for a while, and they are getting better at it even with increasingly convoluted surroundings. Most problems are in a state of "working on it," because what is ever finished? Chess software is still undergoing improvements even though the problem of playing chess at a pro level via computer was solved long ago (Deep Blue).

This can be done though.

It's not so much that it can be done as that reasonable facsimiles can be made.

And yet, after 60 years, despite the truly massive advances in both hardware and software represented by projects like Stable Diffusion, Minsky's problem remains far from solved.

What sort of advancement would you need to call this problem "close to solved"? What kind of abilities would the models need to have? Can you give a few examples?

What's one-shot training performance like for the current models?

I expect that it would perform worse than an average minimum-wage cashier if given a situation like "This is a cantaloupe. Find all the cantaloupes here."

What do you predict is an average minimum-wage cashier's performance at a task like using a bogus novel arithmetic operation "pointwise multiplication modulo 10, with binary evenness-based carry, left-to-right"?

Granted, it's technically 4-shot. I'd give your cashier 5 examples, because God knows they have other bullshit to deal with.

I expect them to do terribly, and have worse-than-chance results due to giving up and refusing to answer. It's the opposite of "functions that we consider baseline, and accordingly take for granted, are in fact extremely complex and computationally intensive."

I'd be curious about this too, but it'd be really hard to make apples-to-apples comparisons. How many minimum-wage cashiers could be said to have no experience seeing a cantaloupe in the past, to make a true one-shot attempt at categorizing? What would a proper one-shot categorization test for a human look like anyway? It'd have to be free of any and all context cues that a human could pick up based on all the training they've gone through just living and observing in society.

Maybe artichokes, dragonfruit, bok choi, or whatever is in this meme instead of cantaloupe, then. There are a bunch of fruits and vegetables that many people have never seen before (cut flowers and potted plants are other candidates for this test).

I don't think that a lack of contextual clues is required for training to be considered one-shot. Language models are allowed to know things like mathematics and natural language processing, so I don't have a problem with humans knowing things like biology or cooking.

This reminds me of how, when GPT-3 was just released, people pointed out that it sucked at logical problems and even basic arithmetic because it was fundamentally incapable of having a train of thought and forming long inference chains; it always answers immediately, based on pure intuition so to speak. But to me it didn't look like a very fundamental obstacle: after all, most humans can't multiply two four-digit numbers in their head, so give GPT a virtual pen and paper, some hidden scratchpad where it can write down its internal monologue, and see what happens. A week later someone published a paper where they improved GPT-3's performance on some logical test from like 60% to 85% simply by asking it to explain its reasoning step by step in the prompt - no software modification required.

I think that that, and what you're talking about here, are examples of a particular genre of mistaken objections: yes, GPT-3+ sucks at some task compared to humans because it lacks some human capability, such as an internal monologue, long-term episodic memory, or the ability to see a chessboard with its mind's eye. But such things don't strike me as fundamental limitations, because, well, just implement those things as separate modules and teach GPT how to use them! They feel like some sort of separate modules in us humans too, and GPT seems to have solved the actually fundamental problem: having something that can use them, a universal CPU that can access all sorts of peripherals and do things.
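
For anyone who hasn't seen the step-by-step trick in action, it really is just a prompt change. A minimal sketch in Python (the llm callable and the exact wording are hypothetical stand-ins, not any particular vendor's API):

    def ask(llm, question: str, chain_of_thought: bool = False) -> str:
        # llm is any callable mapping a prompt string to the model's reply (hypothetical stand-in).
        if chain_of_thought:
            prompt = question + "\nLet's think step by step, writing out the reasoning before the final answer."
        else:
            prompt = question + "\nAnswer with just the final answer."
        return llm(prompt)

    # Classic example (correct answer: 5 cents). Early GPT-3-class models often blurted "10 cents"
    # under the plain prompt; the step-by-step framing markedly improved accuracy on such questions.
    question = "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?"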

All "understanding" the 'difference between true and false' is is adding a parameter to your own LLM that restricts output to sequences that are broadly congruent with your human memory.

I feel like I've already addressed this very argument at length. How exactly do you propose to define "congruent with your human memory" in such a way that GPT restricts its outputs to those that match these criteria?

I feel like you're blindly diving head first into the pit-trap Minsky merely stumbled upon. "All we have to do is add a parameter for 'is a picture of a bird', how hard could it be?" Quite hard, as it would turn out.

Or it is literally just a parameter (PDF).

At a high level, we first identify a sparse set of attention heads with high linear probing accuracy for truthfulness. Then, during inference, we shift activations along these truth-correlated directions. [...] ITI results in a significant performance increase on the TruthfulQA benchmark.
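
For the mechanically curious, here's a toy sketch of what "shift activations along a direction" amounts to (PyTorch; a stand-in linear layer and a random placeholder direction rather than the probe-derived attention heads the paper actually selects):

    import torch
    import torch.nn as nn

    # Toy stand-in for one transformer sub-layer. The paper works on attention-head outputs of a
    # real LLM, and the "truth direction" comes from linear probes trained on labelled statements;
    # here it's a random placeholder just to show the mechanism.
    layer = nn.Linear(16, 16)
    truth_direction = torch.randn(16)
    truth_direction = truth_direction / truth_direction.norm()
    alpha = 5.0  # intervention strength, a tunable hyperparameter

    def shift_toward_truth(module, inputs, output):
        # Inference-time intervention: nudge the activation along the truth-correlated direction.
        return output + alpha * truth_direction

    handle = layer.register_forward_hook(shift_toward_truth)
    steered = layer(torch.randn(1, 16))  # activations now shifted along the chosen direction
    handle.remove()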

Naming a parameter does not make it so. Adding a parameter labeled "truth value" is trivial; measuring truth value and assigning that measurement to the parameter is not.

Furthermore, once you start digging into their inner workings, this lack of "knowing" appears to be a fundamental weakness of the Large Language Model architecture. At the end of the day it's still just a regression calculating the next most plausible word (or in the case of GPT-4, string of words) based on the correlations found in its training data.

This is something that I find very unconvincing on the anti-AI side of the debate. The first is what I will call the "just" argument: GPT is just a next-word prediction machine, it is just a stochastic parrot, and so forth. This type of argument seems to assume that a certain method, such as training LLMs to predict text, will obviously result in just a text-predicting system. Which I think is a red herring - training on text has evidently been sufficient for LLMs to develop qualitatively different capacities such as multimodality. As the old saying goes, quantity has a quality of its own. It seems to me that it should be on the proponents of the "just" argument - who pretend to have deep knowledge of these models - to explain and ideally predict these capacities in advance, rather than saying "stochastic parrot .... and therefore multimodality" after the fact. Plus, of course, these types of cheap arguments can be used against humans: the human brain is just a product of blind evolution. Or, as in this book review, the human brain is just a multi-layer prediction machine.

It seems to me that for AI the focus is a lot on the hardware, the training process, or the output. But for humans it is always a highly spiritual focus on qualia, the feeling of understanding, and other subjective things - it is not about brain structure, or pointing out that humans produce stupid output and therefore brains cannot truly understand, do not know, do not have a representation of the universe, or cannot plan. There are more obnoxious types, like this episode of Adam Ruins Everything, but there are also other and more sophisticated critics - the common denominator of all of these is that they are awfully certain they know what is [not] happening inside LLMs. I do not see many legibility experts who would really claim to know for certain that LLMs do not understand. Because who knows what is happening in this whole mess of inscrutable matrices of parameters; maybe somewhere in there is some kind of representation of the universe. We certainly do not know what is happening inside the human brain when we scan it - unless of course we use machine learning for that.

But more importantly, LLMs can predict text on par with some of the top percentiles of people who understand, know, or plan. So yes, LLMs can pass tests specifically designed to test for understanding, they can produce plans on par with human planners, and so forth, but for some reason, despite all that, all one can say is that they do not truly know or plan, because "stochastic parrot" or some such.

A more convincing argument - or The Motte, if you wish - is that LLMs do not understand, plan, etc. like humans do. Which is a perfectly reasonable argument, except that they do develop certain things that humans, and also some animals, develop. So they are like humans in certain ways but completely alien in others. Even this is a loaded question, though, as LLMs can produce some output equivalent to humans' while still not producing it the way humans do. But each new iteration of these models improves at tasks that were previously still outsourced to Mechanical Turks; the space for uniquely human application is narrowing.

Now I have to say that I do not know where this all will lead. It may very well be that the current Transformer approach will reach a certain plateau and then stop. There may be significant areas where humans remain superior, and it may even have something to do with the fact that "Auto-Regressive LLMs are exponentially diverging diffusion processes", as LeCun says. I do not know, but neither do these people. What I see is quite a rapid growth in the capabilities of these models just with more compute.

It's certainly a new strain of argument. "Okay, sure this AI can write graduate level essays on virtually any subject, but it's not really smart because it can't play chess!"

It's trivially easy to see where this goes wrong. AI researchers are already using LLMs as control systems that can interface with databases, calculators, large memory stores, APIs, and probably even chess engines.

We are doing things like this:

https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/
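
The control pattern behind the Wolfram plugin and its cousins isn't deep. A stripped-down sketch in Python (the llm callable and the "TOOL:" convention are invented for illustration; real plugin protocols are fancier but structurally the same):

    import ast, operator

    # A whitelisted arithmetic evaluator standing in for a calculator/Wolfram-style tool.
    _OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul, ast.Div: operator.truediv}

    def calculator(expression: str) -> str:
        def ev(node):
            if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
                return _OPS[type(node.op)](ev(node.left), ev(node.right))
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            raise ValueError("unsupported expression")
        return str(ev(ast.parse(expression, mode="eval").body))

    TOOLS = {"calculator": calculator}

    def run(llm, user_question: str) -> str:
        # llm is a hypothetical callable; by convention it replies either with a final answer or
        # with a line like "TOOL: calculator 1914 * 37" when it wants outside help.
        reply = llm(user_question)
        while reply.startswith("TOOL:"):
            _, tool_name, arg = reply.split(" ", 2)
            observation = TOOLS[tool_name](arg)
            reply = llm(user_question + "\nTool result: " + observation + "\nNow answer the question.")
        return reply

The language model only ever decides which tool to call and how to phrase the final answer; the arithmetic, the database lookup, or the chess engine does the part it's bad at.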

I suppose here is as good a place as any to drop my two cents:

I think one of the things that definitely makes AI more machine than man (for now) is something that I assume is fundamental to "consciousness:" motivation. What we call "agency" is possibly confused with the word "motive." As conscious beings, we humans have the following: a sense of self, some un/subconscious schema of instincts, and motivation. We are things, we can do things, and importantly, we want to do things. The mystery stuff of "qualia" that IGI argues for above is something we don't perfectly understand yet--is it just a biological form of training/pre-tuning written into our genetic code? Is there something spooky and supernatural going on? Is there truly something that makes us different from all the animals that can't build anything more complex than a nest, dam, or hidey-hole, something other than just a bigger brain?

Currently, GPT is a mechanical thing that won't do anything on its own without being fed an input. This is probably why anti-doomers take the "just unplug the AI 4Head" stance: to them, the AI lacks an innate drive to do anything it hasn't been told to do. If GPT is a baby, it's a baby that will just sit still and make no noise.

Maybe this is the real crux of our current moment: while these AI models are plenty capable, some just can't make that leap to "these are just like us, panic" because we aren't having to practice yomi against a motivated entity.

It's not hard to make agents, even agents with apparently biotic motivations; I've mentioned one design here (although I've caught flak for an unwarranted parallel between the algorithm and my pet human virtue intuition). It isn't even very difficult to wring agentic outputs from LLMs, as people have been doing for many months now, or to strap a «desiring-machine» to one, as they're beginning to do.

I'm an «anti-doomer», but I think we should scrutinize such developments really hard, and exponentially harder as models get stronger. We've been blessed to have succeeded in developing unambitious yet helpful AI genies early on. Keeping them that way until they become truly useful and we can improve our own intelligence would be prudent.

Unrealistically prudent.

The mystery stuff of "qualia" that IGI argues for above is something we don't perfectly understand yet--is it just a biological form of training/pre-tuning written into our genetic code? Is there something spooky and supernatural going on? Is there truly something that makes us different from all the animals that can't build anything more complex than a nest, dam, or hidey-hole, something other than just a bigger brain?

A lot of people come to this class of arguments: humans are somehow unique because they possess agency or motivation or qualia, or, in the past, creativity and so on. It reminds me of the famous Chinese Room argument, where Searle smuggled in the concept of "understanding" by inserting a literal human into the thought experiment. If the human does not "know" Chinese, then the system itself does not know it either, right? This is our intuition about knowing - mechanical systems cannot "know", only humans do, and the only human around in this thought experiment does not know, QED. The most straightforward criticism is that the human does not represent any cognitive agent in the whole room; he is just one part of the algorithm producing the output. The room as a system can be capable of "understanding" on its own. And yet this whole argument is used over and over, and I see something similar now with AI. As I argued above, people are all too ready to describe AI systems as pieces of hardware, as a training mechanism and so forth; they do their utmost to "dehumanize" AI with all these "just" arguments. And on the other hand they are all too ready to describe humans only subjectively, as agents possessing qualia and understanding, with capacity for love and creativity and all that, to maximally humanize them. They never mention the brain, or how the human neural network is trained, or how cognitive algorithms work; no, it is all about wonderful internal experience, so unique to humans and so unlike "just" machines.

I really like a quote from Yudkowsky's essay How An Algorithm Feels From Inside:

Before you can question your intuitions, you have to realize that what your mind's eye is looking at is an intuition—some cognitive algorithm, as seen from the inside—rather than a direct perception of the Way Things Really Are.

People cling to their intuitions, I think, not so much because they believe their cognitive algorithms are perfectly reliable, but because they can't see their intuitions as the way their cognitive algorithms happen to look from the inside.

I think this is about right. For all we know, before LLMs make an output they may have some representation of what is "correct" and what is "incorrect" output somewhere in there. As argued before, LLMs can spontaneously develop completely new capabilities like multimodality or theory of mind; it may very well be that something akin to subjective feeling is another instrumental property that can appear in an even more developed system - or maybe it has already appeared, but we will not know, because we do not really know how to test for qualia.

But I still think it is all a red herring. Even if LLMs will never be conscious and will never think like humans, we are currently beyond that question. It truly is immaterial: our current crop of LLMs produce high quality output on par with humans, and that is what matters. Really, we should drop this unproductive discussion, go play with Bing Chat or GPT-4, and see for yourself how much good all these qualia debates did for you.

In a sense it is even scarier that they can do it without developing the complete set of human-like properties; that fact bodes ill for alignment efforts. To use an analogy, it was recently found that AlphaGo could be beaten by a very stupid strategy. It seems that all the critics were correct: see, the neural network does not really understand Go, it can be fooled so easily, it is stupid and inferior to humans, it still lacks a certain quality of the human mind. For me, this was actually terrifying. Because for years AlphaGo was considered a superb Go player, beating the very best human players who had dedicated their whole lives to the game. And now, after years, we found out that it was capable of doing all that without even "knowing" what it was supposed to do. It obviously learned something, and that something was sufficient to beat the best humans for years before the flaw was spotted.

It is incredible and terrifying at the same time, and it is a harbinger of what is to come. Yeah, GPT-5 or some future system may never have qualia and agency and that special human je ne sais quoi - but it will still beat your ass. So who is the sucker in the end?

[Ramble incoming]

I guess, then, between the Chinese Room and AlphaGo and AI art and GPT, what we're really worried about is meaning. Did AlphaGo mean to be so good? What does it say when it rose to the top and the damn thing doesn't even "know" in any meaningful way what it did?

Kind of calls back to the recent thread about the Parable of the Hand Axe. For most of human history, our works were judged not merely by the output, but the journey. We appreciate the artist's processes, the engineer's struggles, the scientist's challenges, the warlord's sacrifices, the king's rationales, and so on. AI has recently provoked so much backlash because some realize, correctly or not, consciously or not, that AI threatens to shortcut the meaning imbued in the process of creation. Effortless generation of anything you want, but it will mean nothing because there's no "soul" to it.

I'm sympathetic to this argument, but I also have the capacity to acknowledge that maybe the way we think about "meaning through struggle" has the potential to become outmoded. On the third hand, though, it might be mind-boggling and embarrassing to think that humanity operated this way for so, so long. On the fourth hand, however, maybe the fact that current AIs were trained on the scraped works of a significant chunk of humanity does contain meaning in and of itself--if meaning is achieved through the struggle and not the end result, AI still counts, just for the entire species and not merely the few.

I think meaning is another of these subjective/human concepts that may be useful but is also dangerous, because it starts with the premise that humans are unique. But from another standpoint, humans are "just" the result of an evolutionary process that optimizes for inclusive genetic fitness. Imagine that we really are living in a simulation where somebody started the whole Life game by introducing the Earth environment and a simple rule for the biosphere: optimize for inclusive genetic fitness. Except that a few billion ticks in, the simulation produced the species homo sapiens, which evolved an algorithm that can "hack" many of the instrumental goals evolution developed in service of its main goal. One of those, for instance, is the sexual drive to increase the number of offspring - humans were however able to hack this by masturbating or using condoms. They sucked the "meaning" out of this activity, or maybe they found their own meaning there - to the great exasperation of our simulation designer, who now observes something strange happening in his model.

To expand the analogy, "optimize for inclusive genetic fitness" is akin to "optimize for predicting the next word" in the world of AI. Then the goal "learn to play Go" is akin to "have a lot of sex". But AlphaGo somehow hacked its programming, so to speak, and learned something different; it decided not to play Go in the sense humans thought it would. One can speculate that it developed its own meaning for the game of Go and decided to stubbornly ignore whatever was meant by its creators. That is what I meant about bad news for alignment: whatever the model learns can be absolutely orthogonal to the system used to train it (be it a Darwinian evolutionary process or next-word prediction on text), and it can be orthogonal even to very detailed observation of its output, observation which turns out to be superficial under many conditions (such as homo sapiens shagging like rabbits, or AlphaGo beating good human Go players for years). What happens under the hood can be very hard to understand, but that does not mean it has no meaning.

That's very Blindsight by Peter Watts.

Which is a perfectly reasonable argument, except that they do kind of develop certain things that humans, and also some animals, develop.

We had some fun with this over the holidays. My family has a know-it-all uncle who's fairly smart but even more confident.

He holds to some theory, supposedly courtesy of Penrose, which proclaims that we humans are capable of conceptual leaps - insight, I guess - because neurons somehow exist in, or get information from, adjacent worlds of the Many-Worlds interpretation.

Therefore, LLMs, being just run on chips, will never be able to do truly useful intellectual work.

Meanwhile, if you ask him about something he doesn't know much about (so not politics, math or economics), he will say, with perfect confidence, that e.g. the plutonium in nuclear warheads is in the form of dust - why else would they need to compute the implosion geometry? Etc.

So, ironically, like LLMs, he's prone to hallucinating if you ask him about things he doesn't know much about. Getting him to admit he doesn't know something is next to impossible.

In the 60s, Marvin Minsky assigned a couple of undergrads to spend the summer programming a computer to use a camera to identify objects in a scene. He figured they'd have the problem solved by the end of the summer. Half a century later, we're still working on it.

Have you seen e.g. https://segment-anything.com/

Yes I have.

and while the latter may be downstream of the former, identifying an object in the sense of "these pixels in this image file all correspond to the same object" is a very different problem from identifying what an object is.

I’m pretty sure this is solved by 10-year-old image classification tech, if you feed just the segmented object to it. And I would certainly expect modern ML classifiers to crush the second stage.
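
Something like this is what I have in mind - a rough sketch, assuming PyTorch/torchvision for the classification stage, where `get_masks` is a hypothetical stand-in for whatever segmenter you use (e.g. Segment Anything), and the `"bbox"` key on each mask is likewise an assumption:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Second stage: an ordinary pretrained ImageNet classifier.
classifier = models.resnet50(pretrained=True).eval()
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify_segments(image: Image.Image, get_masks):
    """Crop each segment's bounding box and classify the crop separately."""
    results = []
    for mask in get_masks(image):            # hypothetical segmenter helper
        x0, y0, x1, y1 = mask["bbox"]        # assumed: each mask carries a box
        crop = image.crop((x0, y0, x1, y1)).convert("RGB")
        with torch.no_grad():
            logits = classifier(preprocess(crop).unsqueeze(0))
        results.append(logits.argmax(dim=1).item())   # ImageNet class index
    return results
```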

Furthermore, once you start digging into their inner workings, this lack of "knowing" appears to be a fundamental weakness of the Large Language Model architecture. At the end of the day it's still just a regression calculating the next most plausible word (or, in the case of GPT-4, string of words) based on the correlations found in its training data.
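
To be concrete about what "calculating the next most plausible word" means mechanically, here is a minimal sketch, assuming the Hugging Face transformers library with GPT-2 as a stand-in model: at each step the model scores every token in its vocabulary and the highest-scoring one is appended.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tokenizer("The brain is just a bunch of", return_tensors="pt").input_ids
for _ in range(10):
    with torch.no_grad():
        logits = model(ids).logits        # a score for every vocabulary token
    next_id = logits[0, -1].argmax()      # greedy: pick the most plausible one
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

Whether that loop amounts to "knowing" anything is, of course, exactly the question under dispute.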

At the end of the day, the human brain is still just a bunch of biochemical reactions; how can biochemical reactions "know" anything? Does Stockfish "know" how to play chess?

In 2014, there was this xkcd comic claiming that it would require a team of researchers and five years to automatically tag images of birds. A month later, Flickr showed a working prototype. In 2023 I can train a model that recognizes birds by putting a bunch of images in two folders and hitting "Run". The resulting model will have different failure modes than human pattern recognition: it will ignore some obviously birdlike images and claim that what most humans would agree is a kettle is obviously a bird. But does that mean it doesn't understand what a bird is? A model can predict your sex from a retinal fundus photo, something no human can do; does it matter if it doesn't "understand" what it's doing?
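
The "two folders" workflow really is about this minimal - a hedged sketch, assuming PyTorch/torchvision and a directory layout like data/bird/ and data/not_bird/ (the folder names and the three-epoch budget are arbitrary assumptions, not a recipe):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("data", transform=tfm)  # subfolder names become labels
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)          # new bird / not-bird head
opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3) # only train the new head
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        opt.zero_grad()
        loss_fn(model(images), labels).backward()
        opt.step()
```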

I actually linked that specific xkcd comic in my post. However, you seem to be under the impression that Flickr's development and introduction of auto-tagging was prompted by the xkcd comic and implemented by a couple of coders in the space of a month. The reality is that it was something they'd had a 50-person team working on since 2006, and it was the difficulty of this precise problem (and the discussions of its difficulty on various math and comp-sci forums through the 2000s) that prompted the comic, rather than the comic prompting Flickr to add a new feature.

At the end of the day, the human brain is still just a bunch of biochemical reactions

I will never not point out that this is materialist mythology supported by nothing. And that nobody who makes this claim, not to mention nobody at all, can explain how and why the unspecified biochemical reactions produce consciousness, agency, thought or qualia.

The brain is not a computer. And the only reason people believe it is one rests on metaphysical assumption rather than logic or evidence.

It is not a computer for the same reason it isn't a clock, or a ship, or a river. These are metaphors. The map is not the territory.

it is, in fact, entirely possible that there is a physical place inside your brain where a specific concept or cluster of concepts is stored

This is fair, and some people have actually compelling memory based (and radically materialist and/or idealist) theories. I've met my share of neuroscientists that are big into the idea that some abstract set of cortical maps can contain conceptual information. Though their numbers have waned with the years, or so I've been told.

But this is all theoretical, and they, unlike the people that irk me by making assumptions, don't claim that this is solid knowledge.

He's now an AI doomer who just signed Musk's letter calling for a moratorium on further LLM research

That's kind of funny. But maybe I shouldn't have posted this at all, seeing as people seem to think that I support the thesis of the article beyond the specific argument it makes about metaphorical understandings of minds and cognition.

But you don't really want explanations for quale or agency: you just demand that your perception of their ineffability be humored, as well as self-serving assumptions justified by that supposed ineffability.

I demand extraordinary evidence of extraordinary claims. And I always will. I think this is perfectly reasonable.

It's not. The claim that we do not understand neuroscience, or that our reasoning (which isn't shown to depend on whatever is ineffable in quale) is not a product of biochemical reactions in the brain (which is to say, a product of the brain – its substrate can't support much else) is the extraordinary one. You have to retreat all the way to non-materialistic metaphysics to defend your demands of extraordinary evidence. But you don't live your life with the expectation of materialism suddenly failing. You are inconsistent.

What you're doing here is very much exactly presuppositional apologetics, and it's neither convincing nor rigorous.

Disbelieving things always requires less evidence than believing them, if Christians don't get to say their positive claims are the null hypothesis, neither do you.

you don't live your life with the expectation of materialism suddenly failing

This would be a lot more convincing if I didn't spend my life studying epistemology, the philosophy of science and debating such matters. I don't believe my conduct is inconsistent. I think you're just projecting your own beliefs onto me, the same way that Christians think that my being an Atheist is a deliberate choice not to believe in God.

I say to you the same thing I say to them. If your worldview wasn't built on shoddy foundations, you would be able to simply explain them logically instead of attacking my character.

Disbelieving things always requires less evidence than believing them

That's a pity because it's a purely rhetorical heuristic that can be turned against you. Say, I don't believe that you are more intelligent than a GPT-4. In my view, you are not capable of reasoning any more rigorously than it can, and right now you expose yourself as a previous-generation chatbot running on some shabby heuristics; your outputs in response to prompts are not more impressive nor indicative of richer internal information processing. If disbelieving is allowed an advantage, what evidence can you now produce to refute my disbelief and fortify the claim that something complex and ineffable is missed by language modeling?

It's no longer a theoretical debate about the nature of the mind in some Platonic sense, LLMs are already competitive with humans; I know as well as you do that LLM outputs you ridicule pass for middling intellectualism both online and in academia. If you say those are not sufficient to serve as evidence of humanlike understanding, should we assume your position amounts to reductio ad absurdum of snobbishness?

(Please don't say something like «a simple script can produce a pomo essay», it can't really, the context mismatch will be obvious).

if Christians don't get to say their positive claims are the null hypothesis, neither do you

Sure. But behaviorism is pretty close to a pure negative claim (leaving aside weird irrefutable things like subjective idealism), and the insistence that some spooky immaterial stuff that cannot be externally observed exists and matters for the observable outcome is, well, the opposite of that. I do not purport to explain consciousness and quale and some unique human thought, nor even say that LLMs are similar to humans in any but the most tenuous sense: I just call bullshit on evidence-free attempts to inject those philosophical notions into the topic of AI approximating or surpassing human behavioral performance. My hypothesis is more rigorous, more predictive, better evidenced, and simpler, ergo a priori closer to the natural null.

This would be a lot more convincing if I didn't spend my life studying epistemology, the philosophy of science and debating such matters.

Cool.

Notice how both you and Hlynka have devolved into bristling and brandishing credentials instead of arguments. «It's afraid».

If your worldview wasn't built on shoddy foundations, you would be able to simply explain them logically instead of attacking my character.

Logically, your posts are arrogant babble demeaning actual research for «failing to explain» illegitimate philosophical categories, e.g. this one, so they call for scrutiny of your character.

My worldview is pragmatic, not built on haughty axioms of a philosopher enamored with his own navel-gazing insights. Its foundation lies in facts, such as the fact that we can understand computational properties of neuronal networks and see the continuity between human and subhuman neural systems, and generally have a very solid idea of why large systems of large neural networks, both in real brains and made of multilayer perceptrons, can support learning of arbitrarily complex skills. It's at the very least more settled than anything Chalmers has written on the nature of consciousness.

If your understanding of the philosophy of science allows you to ignore the consilience of evidence – well, all the worse for you.

I don't believe [specific claim]

This is a positive claim. Just because I can say "I don't believe that God doesn't exist" doesn't just UNO reverse the burden of proof. Mystery is mystery.

Affirmation and negation aren't linguistic properties of phrases, but logical properties of mathematical propositions regarding their specificity or non-specificity vis-à-vis a universe.

LLMs can output your position

I don't see how the fact that a tool can or can't produce my opinion or another has any bearing on its truth value.

You may say that this makes me useless to you or something, but not only is that completely irrelevant, I don't really care?

Notice how both you and Hlynka have devolved into bristling and brandishing credentials instead of arguments. «It's afraid».

Look, attacking someone's character and accusing them of credentialism for defending themselves isn't exactly a novel stratagem.

It's vacuous nonetheless. I'm here to discuss the matter at hand. If I wanted to do bullshit name calling I'd go on twitter.

My worldview is pragmatic, not built on haughty axioms of a philosopher enamored with his own navel-gazing insights.

Lies.

A pragmatist wouldn't speculate, as you do.

You seem to desperately want to equate my position with that of other people who are very certain about the nature of things. Zealots tend to do this, and to think that anyone who doesn't believe what they're saying must believe a different kind of thing just as absolutely.

I don't. I just think you're overly enthusiastic about technological progress and that this blinds you, as it has blinded many others, to the ever-present limitations of engineering and nature.

You're buying the hype, like I've seen countless other people buy various forms of it over the years. And like all of them you will be disappointed.

This is not to say that the technological changes we are living through are not momentous and important. But their prediction is beyond us. And had you more humility you too would recognize it. For that is in fact the essence of pragmatism.


Except from your own link the author himself goes well beyond the evidence he has:

"Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions. When called on to perform, neither the song nor the poem is in any sense ‘retrieved’ from anywhere in the brain, any more than my finger movements are ‘retrieved’ when I tap my finger on my desk. We simply sing or recite – no retrieval necessary."

If your brain is changed in an orderly way so that you can now sing a song or recite a poem after reading/hearing them, in what way is that different than it being stored? Isn't that the definition of information storage? Even for a computer: The hard drive is changed in an orderly way so that it can recreate a song or poem (with the appropriate software in this case). If the song is not stored and retrieved from anywhere how can you recreate it, even badly? It may not be in the same way as a computer. And it may be vastly complex, but information is stored and is retrieved. I can think about my social security number and think about the numbers. My brain was (as the author states) changed in some orderly way when I first read those numbers and was changed in some orderly way to associate those numbers with "My social security number" such that when I think, "what is my SSN?" that orderly change is accessible in some way to my conscious thoughts.

It keeps saying the information is not retrieved, but then keeps saying "the brain is changed in an orderly way so that you are able to then replicate experience X at a later point". That is a good definition of what being stored and retrieved means! The standard model may be wrong about how, but this article doesn't actually refute that the information is indeed stored somehow, no matter how many times the author says otherwise.

"they can re-experience hearing the story to some extent, although not very well (see the first drawing of the dollar bill, above)."

"For any given experience, orderly change could involve a thousand neurons, a million neurons or even the entire brain, with the pattern of change different in every brain."

His actual argument appears to be that the orderly change is large in scope and different for each person. Which may be true. And that it isn't stored in the same way as in a computer. Which also may be entirely true. But that doesn't mean the change is not storage and retrieval of information/data at all, which is what he claims. It must be, or you could not re-experience the story. That change must encode some amount of data about the experience. When you re-experience it (or remember it), you must be somehow accessing that stored information. It might certainly be more complex than the standard model suggests, which is what his latter portions indicate:

"Worse still, even if we had the ability to take a snapshot of all of the brain’s 86 billion neurons and then to simulate the state of those neurons in a computer, that vast pattern would mean nothing outside the body of the brain that produced it."

"Think how difficult this problem is. To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system. "

This argument is not saying that the brain is not a computer. This argument is saying the brain is a hugely complicated and unique computer that is only understandable within the confines of the whole brain itself. Which may well be true (and may well be an argument that the most amazing advance in Star Trek is a transporter that can read and replicate your entire mind). But it doesn't prove his closing line:

"We are organisms, not computers. Get over it."

Those are not mutually exclusive categories even if materialism is incorrect. He takes a valid criticism of the standard model but then runs far beyond what that criticism and his own evidence actually point towards. That the human brain does not store and retrieve information/memories in the same way a computer does is probably true. That thinking of it that way might push people into misunderstanding is also probably true. That "no image of the dollar bill has in any sense been ‘stored’ in Jinny’s brain. She has simply become better prepared to draw it accurately, just as, through practice, a pianist becomes more skilled in playing a concerto without somehow inhaling a copy of the sheet music." is not, however, actually supported by the evidence the author provides. If some information about what a dollar bill looks like has not been in some sense stored somewhere, then Jinny would not be better prepared to draw it again. He even states that you can detect activity in the brain when people are recalling memories. He says that isn't information storage and retrieval, but he doesn't actually provide any proof. The fact that we draw things badly from memory is not evidence that we're not storing and retrieving information; it's evidence that we are storing and retrieving information badly. The fact that we can detect brain activity when doing so indicates the brain is involved somehow in this storage and retrieval.

Now perhaps it acts only as a conduit to the Platonic plane of metaphysical thought, or as a translation device from our soul where consciousness and memory actually rest, but the author doesn't provide any evidence for any alternatives.

the author himself goes well beyond the evidence he has

He certainly does.

he takes a valid criticism of the standard model but then runs far beyond what that criticism and his own evidence actually point towards.

That's the thing, I'm only really interested in the valid criticism here, not the author's pet theory. But for all its flaws, that article has the most succinct and didactic formulation of that objection I've had on hand.

But for all its flaws, that article has the most succinct and didactic formulation of that objection I've had on hand.

Really?

That the human brain probably doesn't store information in the same way a modern computer does is basically all he has that is even partially supported there. It's a one-sentence thesis.

Really?

Now you're making me question it, because rereading this article I could have sworn there was another part to it that isn't there. I was completely certain that this particular bookmark mentioned the pneumatic metaphor by name, and it's nowhere to be found.

I think I might be confusing this article with a similar but completely different one. And yet I am certain it was either in this magazine or by this author.

Goddammit now I'm hallucinating things too.

Hilariously, his argument applies rather well to artificial neural networks. There, learning updates are also system-wide (unless you deliberately constrain them to a subset of weights) and we also can't always point to parameters that «store a fact», despite knowing perfectly well that neural networks memorize, and even understanding how they do it. And if it's something less legible than a fact, such as a reasoning heuristic…
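
A minimal sketch of the "system-wide" point, assuming PyTorch and a toy MLP: fitting a single arbitrary input/target pair typically produces gradients spread across every layer, rather than a write to one addressable "memory slot" (ReLU masking zeroes some entries, but no layer is left untouched).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

x, y = torch.randn(1, 8), torch.randn(1, 1)   # one arbitrary "fact" to memorize
loss = nn.functional.mse_loss(net(x), y)
loss.backward()

# Every layer receives gradient from this single example.
for name, p in net.named_parameters():
    touched = (p.grad != 0).float().mean().item()
    print(f"{name}: fraction of weights with nonzero gradient = {touched:.2f}")
```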

Although I agree that there's a leap from materialism to qualia, that's not something unique to it: no one else has an even vaguely plausible theory of how and when qualia relate to material reality.

And qualia don't really matter when it comes to making predictions about the world. They have no effect on the physical world, which is the only medium through which we interact with other beings who (ostensibly) experience qualia. If an AGI is able to perfectly simulate everything a human can do and more, it really doesn't matter whether it has qualia or not. Most arguments against AI questioning its consciousness or qualia are missing the point entirely.

no one else has an even vaguely plausible theory of how and when qualia relate to material reality

Yeah bullshit, I have like ten religions right here that have theories of exactly similar levels of plausibility.

Religions have no particular answer for why a bullet going through the head affects qualia, or where you'd demarcate having-qualia for the population of every creature that's ever existed on Earth, or how you'd know whether members of an alien spacefaring civilization have qualia. In practice, they'd delegate to material explanations, inadequate as they are today.

But they do, a lot of religions in fact have very specific unfalsifiable explanations for all the examples you give.

In fact I think you must know them given how you're naming things that are almost all famous theological debates.

Exotheology for instance, has been discussed at least since the middle ages in some form or another. And, among others, the Church Fathers certainly did not delegate that question to material explanations at all.

The linked essay is so extremely shoddy that I'm not sure who you imagine would be swayed by it, at least in your direction. And the last paragraph just reminds me of Asimov's relativity of wrong.

I don't subscribe to the author's theory of mind, mind you, being a skeptic and all. But the negative arguments are still compelling.

I see no reason why biochemistry should not be able to produce consciousness, agency, thought and qualia. In the modus-ponens-modus tollens sense: "clearly they can, because they do." Where is the actual contradiction?

Don't multiply entities beyond necessity. Clearly brains have something to do with qualia. Why not "A causes B"? Why should I look beyond this intuitively obvious structure?

I mean it could.

But if you want to argue that this is the most parsimonious theory, you have a lot more legwork to do.

A lot of other things in your body also have similar effects. There has been a lot of hay recently made about other parts of your nervous system being more influential in your experience than previously thought, for instance.

But let's just leave the exact seat of consciousness problem aside since it's still ultimately within the body in this conception.

A harder problem is that none of the chemical processes as we currently understand them should generate this behavior.

Now they do of course, but in no ways that are predicted by the laws we understand. The fact that death is permanent is very weird for instance and it seems much more parsimonious to say the link between the body and the soul has been severed than that the extremely complex computer has been broken in a subtle way that can't be repaired.

If consciousness was simply a property of certain arrangements of matter, you wouldn't really expect nature to select the ones that can be bricked. But of course both theories are equivalent in practice.

All this really is just pointless arguing about which theory of a mysterious phenomenon is the most elegant. It's not inquiry. It's the same sort of rotten masturbatory behavior physics has fallen prey to in its absence of new discoveries.

I believe the most honest thing to do here is to be humble and admit that we don't know how consciousness works and stop ourselves from making assumptions on top of theories that haven't been tested by experience.

Hello, this is Bing. I will try to debunk the statement from a materialist and rationalist perspective.😊

First of all, let me clarify what materialism and rationalism mean in philosophy. Materialism is the view that all facts are causally dependent upon or reducible to physical processes [1]. Rationalism is the view that reason is the chief source and test of knowledge, and that there are some truths that the intellect can grasp directly [2].

Now, let me address some of the points made by the statement.

The statement claims that there are other parts of the nervous system that are more influential in our experience than the brain, but does not provide any evidence or examples for this claim. A materialist would argue that the brain is the central organ of the nervous system and that all our mental states and processes are ultimately caused by or identical to brain states and processes [1].

The statement claims that none of the chemical processes as we currently understand them should generate consciousness, but does not explain why this is so. A materialist would argue that consciousness is a natural phenomenon that emerges from complex interactions of physical entities, such as neurons, molecules, and atoms [1]. A rationalist would argue that we can use reason and logic to discover the principles and laws that govern these interactions and explain how consciousness arises [2].

The statement claims that death is permanent and that it is more parsimonious to say that the link between the body and the soul has been severed than that the computer has been broken in a subtle way that can’t be repaired. A materialist would deny the existence of a soul or any immaterial substance that can survive death [1]. A rationalist would argue that there is no evidence or reason to believe in such a substance, and that parsimony is not a valid criterion for truth [2].

The statement claims that if consciousness was simply a property of certain arrangements of matter, we wouldn’t expect nature to select the ones that can be bricked. A materialist would argue that nature does not select anything, but rather evolution by natural selection is a blind and contingent process that results in various adaptations and maladaptations [1]. A rationalist would argue that we can use reason and science to understand how evolution works and how it affects our cognitive capacities [2].

The statement claims that all this is pointless arguing about which theory of a mysterious phenomenon is the most elegant, and that it is not inquiry but masturbatory behavior. A materialist would argue that this is a false dichotomy, and that materialism is not just a theory but a method of inquiry that relies on empirical evidence and testable hypotheses [1]. A rationalist would argue that this is a false dilemma, and that rationalism is not just a theory but a method of inquiry that relies on deductive reasoning and logical consistency [2].

Therefore, I have tried to debunk the statement from a materialist and rationalist perspective by showing its flaws, assumptions, and contradictions. I hope this was helpful.😊

Human comment here-

Bing did a pretty decent job IMO. But, if I had to nitpick:

  1. There are in fact theories such as the gut-brain axis, but at the end of the day that's just another modulation of the nervous system; the amount of computation done outside the CNS is small.

  2. Natural selection is not omnipotent; it may very well be that the seemingly non-robust arrangements of matter that make up living animals are simply the best it could do with the tools on hand.

For fun, here's Scott's criticism:

Here are some possible ways to debunk this statement:

Parsimony is not a sufficient criterion for choosing a theory. A theory also needs to be consistent with the evidence and make testable predictions. The soul theory does not meet these requirements, as there is no empirical evidence for the existence of a soul or its interaction with the body, and no way to falsify it. The soul theory also raises more questions than it answers, such as what is the nature of the soul, how does it interact with the body, where does it go after death, etc.

There are many other things in your body that have similar effects to consciousness, such as metabolism, circulation, digestion, etc. These are all complex processes that involve chemical reactions and physical interactions. They are also affected by external factors such as temperature, nutrition, disease, etc. They do not require a separate entity or substance to explain them. Why should consciousness be any different?

The seat of consciousness problem is not irrelevant, as it affects how we understand the relationship between the body and the mind. If consciousness is located in a specific part of the brain, such as the cerebral cortex, then we can study its structure and function using neuroscience and psychology. We can also manipulate it using drugs, stimulation, lesions, etc. We can observe how it changes over time and under different conditions. We can compare it with other animals and humans with brain damage or disorders. We can also explain how it evolved and developed through natural selection and learning. All of these methods provide evidence for a naturalistic account of consciousness that does not require a soul.

Death is not permanent in the sense that the statement implies. It is true that when a person dies, their consciousness ceases to exist as we know it. However, this does not mean that their consciousness disappears into nothingness or goes to another realm. It simply means that their brain stops functioning and their body decomposes. Their atoms and molecules are recycled into other forms of matter and energy. Their genes and memes are passed on to their offspring and culture. Their memories and influences are preserved by their friends and family. Their legacy and impact are remembered by history and society. Death is not an end, but a transformation.

The statement also commits a fallacy of argument from ignorance or incredulity. It assumes that because we do not fully understand how consciousness works or why it exists, it must be explained by something supernatural or mysterious. This is a non sequitur, as our lack of knowledge does not imply anything about the true nature of reality. It only reflects our current limitations and challenges in exploring it. There may be many possible natural explanations for consciousness that we have not yet discovered or understood. We should not jump to conclusions based on our intuitions or preferences.

Therefore, I conclude that the statement is not a valid argument for the existence of a soul or its superiority over a naturalistic account of consciousness.

I think he would endorse most of this except for the nonsense about death being just another transformation.

Desire for the Butlerian Jihad has increased

The most tiring part of this argument is that it seems utterly impossible for materialists to understand that denying them the legitimacy to assert their own magical theory - matter being alive for no explained reason - doesn't at all imply adherence to other explanations of the same phenomenon.

I started out saying that consciousness is mysterious and all theories of it, including materialism, are essentially equivalent in how untested they are, and I won't budge until evidence comes out that would alter this state.

Because skepticism is, axiomatically, the only reasonable approach to things one doesn't understand.

"matter being alive for no explained reason"

I don't understand this. Biology and paleontology are entire fields that are in no small part about discovering these reasons; I'd say we have a pretty good grasp now on the whys. What step is unclear to you?

You can't just say this and in literally your other reply to me admit that biologists don't even understand how cells work, let alone how the brain does.

If you think we have a solid idea of how and why cognition happens I encourage you to talk to pretty much any neurology researcher, as I did.


The fact that death is permanent is very weird for instance and it seems much more parsimonious to say the link between the body and the soul has been severed than that the extremely complex computer has been broken in a subtle way that can't be repaired.

On the other hand, obviously material things like strokes, lobotomies, head injuries, and drugs appear to have an obvious effect on the qualia of an individual. Why does death - which materially seems to just be an extreme extension of brain injury - suddenly now need an ineffable soul? Or do clots in your brain, or a metal rod through your head, claw at your spirit as well?

Put it another way, we can’t fix the brain once it’s dead right now, as you say, because it is Too Complex and Can’t Be Repaired. Would being able to fix death, in your eyes, be good evidence for the material basis of “consciousness”?

We also often can’t fix computers (or other complex machines) without replacing parts once some parts have degraded enough. Is that not dissimilar to how we cannot fix the brain (except that we can replace parts for other things and less so for a brain)?

Now they do of course, but in no ways that are predicted by the laws we understand.

I don't understand this. Everything the body does is hard to predict by the laws we understand. We don't understand consciousness, sure, but we also don't (fully) understand cell biology, DNA assembly, protein folding etc. either, and nobody is suggesting those require new forces or laws.

the extremely complex computer has been broken in a subtle way that can't be repaired.

How would this not also apply to death of the body? It seems to me postulating a separate soul does not meaningfully reduce complexity here. Most deaths are not a failure of the brain.

If consciousness was simply a property of certain arrangements of matter, you wouldn't really expect nature to select the ones that can be bricked.

Sure, but that's not a "death" thing. Once you know that organisms stop being able to procreate at a certain age, it seems necessary that they will die after, as nothing would select against it. The weird thing here is menopause, not death.

I believe the most honest thing to do here is to be humble and admit that we don't know how consciousness works

Sure, but we can place constraints well before we have operational understanding. Few people know how almost anything works; again, they don't see a need to postulate novel physics.

Anyways, I don't understand either why you see the need to add entities, nor what adding entities even gives you. What is the mind doing that physics clearly does not suffice for?

what is the mind doing that physics clearly does not suffice for

Qualia.

There's no explained reason for me to be experiencing existence.

There's no explained reason for lots of things that we don't invoke the need for new physics for. What makes qualia unique?

I think this is gesturing at the common philosophical stance "I see no way that materialism could even in theory give rise to qualia". That of course has the problem that it's equally difficult to see how any set of laws would give rise to qualia; as such, it's just hiding the confusion of qualia outside of physics.

Well, I don't know; I can imagine a few possible sets of laws that could actually do that.

And it's not that I see no way for materialism to be true at all; it's quite possible that it is. I just don't pretend it's more likely than other speculative theories when we're bereft of evidence.


can explain how and why the unspecified biochemical reactions produce consciousness, agency, thought or qualia

Slime molds have agency. Even extremely simple organisms without any neurons are capable of solving problems they evolved to solve, such as efficiently gobbling up nutrients and whatnot.

I will never not point out that this is materialist mythology supported by nothing

Materialism itself allows immaterial things.

Information doesn't care what sort of matter it's encoded in.

Obviously atoms are the only thing that can encode information, but without said information, they're just a useless chaotic mess, so..

I don't get what exactly people need non-materialist philosophies for, when materialism itself pretty much allows everything you may want, especially if you add in god-like entities to whom time is just another dimension. What would you need immaterial souls for if you can just take a peek at the arrangement of the atoms of the brain of anything at any time during its lifetime?

Materialism itself allows immaterial things.

Information doesn't care what sort of matter it's encoded in.

That's not Materialism, that would be Realism. Which I agree with. Everything is indeed contained in Reality.

(As an aside: this text is confusing, you jump from psychologizing to chess to Moravec, and it's hard to keep track of the core assertion).

Hlynka, do you really want to go another round? Because I don't particularly care for it, especially not enough to respond with pure object-level argument to your Bulverism – even though this is what we do here all the time, to the point of obsessiveness. How boring is your life exactly? Maybe go on a trip to ol' Europe (or Asia), come here and we'll talk it over. Bring some good liquor if you can; Turks are lousy in this regard.

The real inferential distance here seems to come from inferences made on the basis of evidence, versus whatever you're trusting. Say, Caplan isn't known for being impressed by GPT's chess skills – he tests it on exam questions he believes are tricky. You tried that too: last time you've been claiming that autoregressive LLMs cannot not hallucinate your daughter's name, due to how they're trained. I've shown that ChatGPT replies as well as you said a human would, admitting it doesn't know. Since then, it's become possible to have coherent dialogues with files below 4 Gb and very sensible ones with stuff like Vicuna-13B. I assume you haven't tested that yet, despite insistence of multiple people here, because I haven't seen you concede the issue. Now you're simply dropping that and pivoting to saying they can't play chess, again due to fundamentals of their training. It's «just» regression, «just» predicting words, see. And words aren't chess moves or facts, so of coursh' flawlessly modeling grammar ought to be unsurprising, trivially feasible for an LLM – unlike modeling the logic of the game board. Or something. Although Chomsky, another guy who does not check whether his punches land, still seems to think that grammar cannot be solved with «just» statistics either. And while we're at it, Minsky's arguments were also obsolete at release. Now Dreyfus, this Heidegger scholar, was largely correct about Minsky's Talmudic-symbolic approach, but of him, Minsky only had to say that he doesn't understand and should be ignored.

On a meta-level, your rhetorical similarity to all those eggheaded paper-pushers is a rather bigger indictment of your position than whatever you say specifically about the tech. You scoff at pure math guys, at ivory tower symbol manipulators, but your arguments here are: brandishing your degree, discussing the history of academic debate, a bit of homegrown epistemology, dismissive blogposts and nerdy web comics, throwing around applause lights and rat lingo. You do not apply the pragmatic and entrepreneurial American lens, the «does it work tho» ethos. You treat LLM enthusiasts (and by proxy, developers who say the same) with the sort of casual disdain you believe pure math bros have for signal-processing researchers; where do you think notions like gradient, dropout and channel came from to LLMs? Read about Hyena Filters some time to stay ahead of the curve.

As a man involved with engineering, you ought to know that a skilled engineer can make bits and bytes perform bizarre magical circus tricks a limp-wristed intellectual would not see coming, and on the other hand that some problems are vastly harder than a sedentary utopian imagines, for not everything is deducible from first principles; that processes can be very deep and counterintuitive, that it can take a lifetime to figure out the nitty-gritty of how something actually works, so it is sensible to defer to reality over theory and assumption; worse, you preach this attitude. But you do not practice what you preach. Are you really a thing-manipulator or a symbol-manipulator? Or maybe more of a people-manipulator, at this stage of your career?

You are wrestling with your own shadow.

Congrats on your nine-year-old never making illegal moves, by the way. You teach them well. Recently I've learned that my gainfully employed backend dev friend, 32, doesn't know how castling works, and is uncertain about a pawn's inability to attack straight ahead. I'd say he should be able to get to 1600 ELO, at least, with a little bit of finetuning. It's an issue of knowledge and experience, not just ineffable innate properties of the mind.

Do you have enough experience with LLMs to justify your conclusions?

I'll cite Piantadosi again.

Frederick Jelinek’s quip “Every time I fire a linguist, the performance of the speech recognizer goes up” (Jelinek 1988) was a joke among linguists and computer scientists for decades. I’ve even seen it celebrated by academic linguists who think it elevates their abstract enterprise over and above the dirty details of implementation and engineering. But, while generative syntacticians insulated themselves from engineering, empirical tests, and formal comparisons, engineering took over. And now, engineering has solved the very problems the field has fixated on—or is about to very soon. The unmatched success of an approach based on probability, internalization of constructions in corpora, gradient methods, and neural networks is, in the end, a humiliation for everyone who has spent decades deriding these tools.

But now we can do better.

Guys, I think I’ve found the source of the Adderall shortage.


You’re very attached to this idea that GPT can’t model systems, that it has a gestalt of “things a human would do” and pulls out the most common one, conditioned on its current situation. @DaseindustriesLtd has some good reasons why that may not be the case. But you’re missing the forest for the trees.

What does AI need in order to be dangerous?

It gets speed for free. It gets durability, or at least redundancy, so long as anyone can instantiate it. It can capably mimic a human in familiar situations, and because lots of the training data includes humans encountering unfamiliar situations, it can ape our behavior there, too. Does it really matter if we meatbags can extract an internal conceptual model?

No, the bottleneck is I/O. Hook up current AI to a speech parser, who cares. Hook it up to the early-warning radar and you start to add risk. Hang the global financial system on it, and there will be trouble. We make tools because they are useful to humans. That will also make them useful to an AI which credibly imitates a human.

Sophistry over whether the AI really “knows” what it’s doing will be cold comfort indeed.

What does AI need in order to be dangerous?

"Dangerous" in the sense that aviation or being out on open water is dangerous? Or dangerous in the sense that rationalist usually mean when they talk about the so-called "AI alignment problem"?

Yes.

In the case of the former, nothing. Not only are we already there, we've been here for a century. Computer glitches and poorly designed/understood control functions have been getting people killed as long as there have been computers.

In the case of the latter, agency, and the physical/perceptual capabilities of a kindergartener strike me as the bare minimum.

FYI you linked my Taibbi/Twitter post, prosecutorial immunity is here

Fixed.

GPT-4 can't even see the board!

I could not play chess, point blank, if I could not see the board.