Agreed that initially he does not start out like that. However, as you say, the Death Note starts taking over after a fairly short time and turns him into someone who is pretty straightforwardly evil. That makes him a less interesting character, in my opinion. I felt the whole corruption arc was handled far better in Breaking Bad: Walt is less of a cartoon villain, and even at the end, once he's been fully Heisenberged, he's still willing, for all his faults, to give up his wealth to save Hank. Light, on the other hand, quickly becomes rather irredeemable, in my view.
L never came off that well in the story for me. He was just a guy who loved the mystery and found the whole thing to be a fascinating game. He had no moral reason to want to stop Light. He just wanted to catch Kira because it was a difficult case to solve.
I mean, correct; L does not have a strong moral inclination. Maybe I worded that poorly. It's just that I would have found their game of cat and mouse far more interesting and multilayered had they had some deeper reason to participate beyond "I want to play god" / "I find solving mysteries fun". You could have given the audience an impression of their differing outlooks, shown how that informs their behaviour in real life and with other people, and then, once the show actually puts Light and L in the same room together, there could have been an interesting exploration of what happens when each one's ideals are challenged by the other's. That's something I really would have wanted to see.
The starvation claimed by the linked URLs and a starvation where 'Israel starves all Gazans to death' are not the same thing. My contention is with the slippery-slope framing of it. I don't believe the OP was implying mass famine either.
The standoff between Israel, the UN, and Hamas is technically causing starvation, but there is a big difference between undernourishment and deadly famine. I am uniquely heartless, having grown up in the third world. Stunting and wasting are commonplace there. Deadly famines killed millions until the 1980s. I could have more sympathy. I'll try.
That being said, the article I linked is worth reading. The author seems legitimate enough. Biased, yes, but not an activist. He also posts on Substack, but the article was paywalled there.
This Substack is about Defense, the Middle East, and the psychology of disinformation, from a former soldier. I served for 16 years in the British Army (2005-21), leaving the Parachute Regiment with the rank of Major. I completed three tours in Afghanistan including one attached to US Army Special Forces, and further tours of Bosnia, Northern Ireland and the Middle East.
I was a senior lecturer at the Royal Military Academy Sandhurst, where I taught military theory and leadership to officer cadets in training in the War Studies and Behavioral Science departments. I am currently a research fellow at the Henry Jackson Society.
In 2024, I visited Gaza twice and captured Hezbollah tunnels in Lebanon. I am a regular Middle East commentator on national media.
a general shift against Judaism among the public
Antisemitism isn't a monolith. Thinking of it as a monolith is unproductive and misleading. There are at least four distinct groups that plausibly hate Jews: Muslims, Leftists, Incels, and Bandwagoners.
Muslims' hatred of Jews runs deep. This is proper bigotry. Proper antisemitism. Modern Muslims may articulate a rationale for their hatred of Israel, and there are many good reasons, but the hatred precedes those reasons.
Leftists hate Jews for being perceived as right-wing (economically and socially) oppressors.
Incels hate Jews because they are smart and rich. It's hatred rooted in jealousy and resentment. Here, "incel" is a stand-in term for a chronically online man who believes in a binary alpha-male/beta-male characterization of the world. They aren't necessarily sexless. Many black men (famously Kanye) and poor whites fit this bill.
Bandwagoners only care about optics. Optics tell them that Israel is bad and worth hating, so they hate it. Bandwagoners are most vulnerable to visible displays of cruelty. This is the largest group.
In Europe, rising antisemitism has to do with a rising Muslim population. Similarly, in NYC, it has to do with the rise of a Muslim-coded leftist as mayoral candidate. On college campuses, the rise in antisemitism is because of bandwagoners who can't afford to be seen as uncool in university. University leftists were always antisemitic, so there isn't much scope for rise there. On the internet and especially X, it is fueled by incel tears.
The reason I make this distinction is that leftists and bandwagoners channel their hatred through Netanyahu. If he goes, Israel may get a period of relief from those two groups. As Jews continue to lose face in public, incels are already losing motivation. If the new Israeli leader lacks big-dick energy, the incels will mark him as effeminate and move on to their next source of resentment.
That leaves us with the Muslims. I don't have an answer here. Muslims seem to genuinely hate Jews and Israel, and I don't know if anything can be done about it. As the population of devout Muslims rises across the first world, antisemitism will rise in lockstep. Maybe they'll become irreligious as they integrate, but the results in Europe aren't encouraging.
Do you really believe that if Israël fell the entire citizen population wouldn’t be welcomed into the west with open arms? The west needs young taxpayers, which Israël has.
The Mizrahim, the religious slackers, the ultranationalists - probably not.
As I said, the 16-year-old had already seen the full series.
Agriculture generates hundreds of billions in revenue and is far more essential to continuing civilisation than orangutans or LLMs are. Does that make grain, or the tools used to sow and harvest it, "intelligent" in your eyes? If not, please explain.
That is not a serious objection.
You’re comparing a resource (grain) and a tool of physical labor (a tractor) to a tool of intellectual labor. This is a false equivalence. We don't ask a field of wheat for its opinion on a legal contract. We don't ask a John Deere tractor to write a Python script to automate a business process. The billions of dollars generated by LLMs come from them performing tasks that, until very recently, could only be done by educated human minds. That is the fundamental difference. The value is derived from the processing and generation of complex information, not from being a physical commodity.
I'm just going to quote myself again:
ChatGPT 3.5 played chess at about 1800 elo. GPT 4 was a regression in that regard, most likely because OAI researchers realized that ~nobody needs their chatbot to play chess. That's better than Stockfish 4 but not 5. Stockfish 4 came out in 2013, though it certainly could have run on much older hardware.
If you really need to have your AI play chess, then you can trivially hook up an agentic model that makes API calls or directly operates Stockfish or Leela. Asking it to play chess "unaided" is like asking a human CEO to calculate the company's quarterly earnings on an abacus. They're intelligent not because they can do that, but because they know to delegate the task to a calculator (or an accountant).
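To make the delegation point concrete, here's a minimal sketch, assuming the python-chess library and a Stockfish binary reachable on the system PATH (both are my assumptions for illustration, not anything from the original posts): the model's only job is to turn "what's the best move here?" into a tool call, and the engine does the actual chess.

```python
# Minimal tool-delegation sketch (assumes python-chess is installed and a
# Stockfish binary is available as "stockfish" on PATH). An agentic model
# would emit a call like best_move(fen) instead of calculating the move itself.
import chess
import chess.engine

def best_move(fen: str, think_time: float = 0.1) -> str:
    """Ask Stockfish for the best move in the position given as a FEN string."""
    board = chess.Board(fen)
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")
    try:
        result = engine.play(board, chess.engine.Limit(time=think_time))
        return board.san(result.move)  # e.g. "e4"
    finally:
        engine.quit()

if __name__ == "__main__":
    print(best_move(chess.STARTING_FEN))
```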
Training LLMs to be good at chess is a waste of time. Compute doesn't grow on trees, and the researchers and engineers at these companies clearly made a (sensible) decision to spend it elsewhere.
The fact that an LLM can even play chess, understand the request, try to follow the rules, and then also write you a sonnet about the game, summarize the history of chess, and translate the rules into Swahili demonstrates a generality of intelligence that the Atari program completely lacks. The old program hasn't "devolved" into the new one; the new one is an entirely different class of entity that simply doesn't need to be optimized for that one (practically) solved game.
The market isn't paying billions for a good chess player. There is about $0 to be gained by releasing a new, better model of chess bot. It's paying billions for a generalist intellect that can be applied to a near-infinite range of text-based problems. That's the point.
I came into this thread with every expectation of having a good-faith discussion/debate on the topic. My hopes seem dashed, mainly because you seem entirely unable to admit error.
Rae, SnapDragon, I (and probably several others) have pointed out glaring, fundamental errors in your modeling of how LLMs work. That would merit, at the very least, some kind of acknowledgement or correction. At the time of writing, I see none.
The closest you came to acknowledging fault was in a reply to @Amadan, where you said that your explanation is "part" of why LLMs struggle with counting. That's eliding the point. Tokenization issues are the overwhelming majority of why they used to struggle, and your purported explanation has no bearing on reality.
You came into this swinging around your credentials, proceeded to make elementary errors, and seem to be closer to "Lorem Epsom", in that your primary concern appears to be prioritizing the appearance of correctness over actual substance.
I can't argue with @rae when he correctly says:
I hope you realise you are more on the side of the Star Trek fan-forum user than the aerospace engineering enthusiast. Your post was basically the equivalent of saying a Soyuz rocket is propelled by gunpowder and then calling the correction a nitpick.
One silver lining to the rising cost of things is that I'm seeing more and more shops explicitly showing their payment processing fees and offering discounts for cash again.
Thank you. I really appreciate the kind words. I hope you don't mind if you get added to my mental rolodex of useful experts to summon, it's getting lonely with just faul_sname in there (I've already pinged him enough).
You don’t want a non-masculine mid to ever be professing “fascism” in public.
And yet, Goebbels.
No, the major ones in the public imagination (Spain, Italy, Germany) were as much or more in reaction to powerful, organized, and street-level-thuggish communist parties in their countries than they were a backlash against old aristocracy. In fact, a major reason the fascists beat the communists was that the old aristocracy lined up behind the fascists, on the theory that anything was better than getting expropriated and lined up against a wall by bolsheviks.
Is there an example of a near-fascist state with significant ethnic diversity that's succeeded?
Depends on what you mean by "succeeded", but Getulio Vargas in Brazil comes to mind as a potential example here. And Salazar in Portugal wasn't ultimately successful - his regime didn't outlive him - but lusotropicalism was the opposite of ethnically-exclusive; Salazar envisaged Angola, Mozambique, Goa, Timor, etc. as integral parts of Portugal itself.
This isn't a unique system, though. Maybe the degree of adversarial-ness is, but there are plenty of sub-state level actors with differing degrees of autonomy. American Samoa issues its own passports, but isn't an independent state or full protectorate ("nationals, not citizens"). New Caledonia has a somewhat similar arrangement. And it's not all obviously-colonial arrangements either: the Crown Dependencies of the UK don't seem to have active independence movements that I've heard of, but seem about as sovereign (perhaps with fewer border checkpoints) as the PA in the West Bank is on paper.
There's an important kind of intelligence that apes lack but LLMs possess.
There are even kinds of intelligence apes possess that humans lack. In particular, short-term spatial memory: sequentially flash the numbers 1 through 9 on a touchscreen monitor at random positions, then have the subject press the monitor at those positions in order. Chimpanzees, even young chimpanzees, consistently and substantially outperform adult undergraduate humans, even when you try to incentivize the human. Does that mean chimps are smarter than humans?
Intelligence is very spiky. It's weird, but different substrates of intelligence lend themselves best to different tasks.
We're talking past each other, and I'm at fault.
When I say starvation, I imagine a famine where people are dying in droves. Deadly famines were a part of life in the Indian Subcontinent until the 1980s. Today, chronic wasting and stunting remain commonplace.
On further reflection, I'm being plain heartless. Years of walking past beggars under the bridge have stripped me of humanity. Just because starvation is common in the subcontinent doesn't mean I should withhold my sympathy for the Gazans. It's true that the world only cares when Europeans(ish) are dying. I'm sour about it, no doubt. But sympathies aren't zero-sum.
Are Gazans starving? Not yet, at least.
I'm still right going by my definition of starvation, but it's a moot definition. We shouldn't have to wait for the situation to turn into a biblical locust plague before it can be called starvation.
Bruh, that … correctly points out what I’m saying.
A publication is ironically using disinformation about the forum.
I'm saying that purely based on in-text information (how long a fiction book says it takes to drive from LA to San Francisco, LA being stated to be within California, etc.), you could probably approximate the geography of the US just fine from the training data, let alone from the more subtle or latent geographic distinctions embedded within otherwise regular text (like who says pop vs. soda, or whatever). Both of these the training process actually does attempt to do. In other words, memorization. This has no bearing on understanding spatial mappings as a concept, and absolutely no bearing on whether an LLM can understand cause and effect. Obviously, by world state we're not talking about the literal world/planet; that's like calling earth science the science of dirt only. YoungAchamian has a decent definition upthread. We're talking about laws-based understanding that goes beyond facts-based memorization.
(Please let's not get into a religion rabbit hole, but I know this is possible to some extent even for humans because there are a few "maps" floating around of cities and their relative relationships based purely on sparse in-text references of the Book of Mormon! And the training corpus for LLMs is many orders of magnitude more than a few hundred pages)
Perhaps an example/analogy would be helpful. Consider a spatial mapping as a network with nodes and strings between nodes. If the strings are only of moderate to low stretchiness, there is only one configuration in (let's say 2D) space that the network can manifest (i.e. correct placement of the nodes), based purely on the nodes and string length information, assuming a sufficiently large number of nodes and even a moderately non-sparse set of strings. That's what the AI learns, so to speak. However, if I now take a new node, disconnected, but still on the same plane, and ask the AI to do some basic reasoning about it, it will get confused. There's no point of reference, no string to lead to another node! Because it can only follow the strings, maybe even stop partway along a string, but it cannot "see" the space as an actual 2D map, generalized outside the bounds of the nodes. A proper world state understanding would have no problem with the same reasoning.
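To make the node-and-string picture concrete, here's a small sketch using classical multidimensional scaling in plain numpy (my own illustration, not anything from the linked material; the node coordinates are made up). Given only the pairwise string lengths, the layout is pinned down up to rotation and reflection, but a new node with no strings attached simply has nowhere to go:

```python
# Classical MDS: recover a 2D layout from pairwise distances alone.
import numpy as np

def classical_mds(dist: np.ndarray, dims: int = 2) -> np.ndarray:
    """Recover coordinates (up to rotation/reflection/translation) from a distance matrix."""
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (dist ** 2) @ J             # double-centered squared distances
    eigvals, eigvecs = np.linalg.eigh(B)
    top = np.argsort(eigvals)[::-1][:dims]     # largest eigenvalues first
    return eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))

# Hypothetical nodes on a plane; the distance matrix plays the role of the strings.
true_points = np.array([[0.0, 0.0], [3.0, 0.0], [3.0, 4.0], [0.0, 4.0], [1.5, 2.0]])
dist = np.linalg.norm(true_points[:, None] - true_points[None, :], axis=-1)
recovered = classical_mds(dist)
# `recovered` reproduces the original layout up to a rigid transform,
# but a sixth node with no distances to the others cannot be placed at all.
```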
So on all those notes, your example does not match your claim at all.
Now, I get what you're saying about how the semantic clouds might be the actual way brains work, and that might be true for some more abstract subjects or concepts, but as a general rule, spatial reasoning in humans is obviously way, way more advanced than vague concept mapping, and LLMs definitively do not have that maturity. (Spatial reasoning in humans is pretty solid, but time reasoning is actually kind of bad for humans, e.g. people are bad at remembering historical dates and putting them in a larger framework, personal memory is fallible, and so on, but that's worth its own thought separate from our discussion.) I should also say that artificial neural networks are not brain neural networks in super important ways, so let's not get too carried away there. Ultimately, humans learn not only via factual association but also via experimentation, and LLMs have literally zero method of learning from experimentation. At the moment, at least, they aren't auto-corrective by their very structure. Yes, I think there's a significant difference between that and the RLHF family. And again, this is why I harp on "memory" so much as perhaps a necessary piece of a more adaptable kind of intelligence, because memory does a huge amount of heavy lifting: quite a variety of things, both conscious and unconscious, manage to make it into "long-term memory" from working memory, with shortcuts and caches along the way.
And again these are basics for most living things. I know it's a vision model, but did you at least glance at the video I linked above? The understanding is brittle. Now, you could argue that the models have a true understanding, but are held back by statistical associations that interfere with the emergent accurate reasoning (models commonly do things like flip left and right which IRL would never happen and is completely illogical, or in the video shapes change from circle to square), but to me that's a distinctly less likely scenario than the more obvious one, which also lines up with the machine learning field more broadly: generalization is hard, and it sucks, and the AI can't actually do it when the rubber hits the road with the kind of accuracy you'd expect if it actually generalized.
Of course, it's admittedly a little difficult to tease out whether a model is doing badly for technical reasons or for general reasons, and also difficult to tease out good out-of-sample generalization cases because the memorization is so good, but I think there is good reason to be skeptical of world-model claims about LLMs. I'm open to this changing in the future, and I'm definitely not closing the door, but where frontier models are right now? Ehhhh, I don't think so. To be clear, as I said upthread, both experts and reasonable people disagree about whether we're seeing glimmers of true understanding/world models or just really great statistical deduction. And to be even more clear, it's my opinion that the body of evidence is against it, but it's closer to a fact that your example of geospatial learning is not a good piece of evidence in favor, which is what I wanted to emphasize here.
Edit: Because I don't want to oversell the evidence against, there are some weird findings that cut both ways. Here's an interesting summary that covers some of them without meaning to: for example, when adding two two-digit numbers, Claude will say it follows the standard algorithm; I initially assumed it would just memorize the answer; but it turns out that, while both were probably factors, it's more likely Claude figured out the last digit and then combined that thought-chain after the fact with an estimate of the approximate answer. Weird! Claude "plans ahead" for rhymes, too, but I find this a little weak. At any rate, you'd be well served by checking the Limitations sections, where it's clear that even a few seemingly slam-dunk examples have more uncertainty than you might think, for a wider array of reasons than you might expect.
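A toy illustration of that two-path story, as I understand it (my own sketch, not Anthropic's actual circuit): one path nails the last digit, another only gets the rough magnitude, and gluing them together is enough to recover the exact sum.

```python
import random

def exact_ones_digit(a: int, b: int) -> int:
    """The sharp path: the last digit of the sum, computable from the operands' last digits."""
    return (a % 10 + b % 10) % 10

def rough_magnitude(a: int, b: int) -> int:
    """The fuzzy path: an estimate of the sum, assumed accurate to within +/-4."""
    return a + b + random.randint(-4, 4)

def combine(ones: int, approx: int) -> int:
    """The only integer within +/-4 of the estimate that ends in the right digit."""
    for candidate in range(approx - 4, approx + 5):
        if candidate % 10 == ones:
            return candidate
    raise ValueError("estimate was off by more than the assumed +/-4")

a, b = 36, 59
print(combine(exact_ones_digit(a, b), rough_magnitude(a, b)))  # always prints 95
```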
It’s disproportionate, there’s no viable objective, Israel’s intention is to ethnically cleanse the land, and there’s no legitimate reason to be punishing the civilian populations by withholding aid or firing on civilians attempting to obtain aid.
It’s defensive.
It’s to no longer be attacked, to get back hostage bodies, and to be safe in the future.
It isn’t - if those Palestinians were instead Germans or Chinese, this wouldn’t be happening. It is happening because instead they have a death cult attacking them for decades at their doorstep.
The aid is constantly going to Hamas. Outside of very specific incidents, Israel is not firing on civilians, and certainly not on purpose or wholesale.
The Gazan population has grown.
Nothing you’re saying is accurate or true.
“hospitals have admitted people in a state of severe exhaustion caused by a lack of food.”
This is a warzone and you’re sending out anti Israeli propaganda. That the place even has hospitals is amazing.
If you're going to lean so heavily on your credentials in robotics, then I agree with @rae or @SnapDragon that it's shameful to come in and be wrong, confidently and blatantly wrong, about such elementary things as the reasons LLMs struggle with arithmetic. I lack any formal qualifications in ML, but even a dummy like me can see that. The fact that you can't, well, let's just say it raises eyebrows.
False humility. :) I have ML-related credentials (and I could tell that @rae does too), but I think you know more than me about the practicalities of LLMs, from all your eager experimentation and perusing the literature. And after all, argument from authority is generally unwelcome on this forum, but this topic is one where it's particularly ill-suited.
What "expertise" can anybody really claim on questions like:
- What is intelligence? (Or "general intelligence", if you prefer.)
- How does intelligence emerge from a clump of neurons?
- Why do humans have it and animals (mostly) don't?
- Are LLMs "minds that don't fit the pattern", or are we just anthropomorphizing and getting fooled by ELIZA 2.0?
- If yes, how does intelligence emerge from a bunch of floating-point computations?
- If no, what practical limitations does that put on their capability?
- What will the future upper limit of LLM capability be?
- Can AI experience qualia? (Do you experience qualia?)
- Does AI have moral worth? (Can it suffer?)
With a decent layman's understanding of the topic, non-programmers can debate these things just as well as I can. Modern AI has caused philosophical and technical questions to collide in a wholly unprecedented way. Exciting!
Cambodia and Thailand are fighting over disputed borders. Apparently, the International Court of Justice already ruled sixty years ago, and again ten years ago, that Cambodia is in the right, but Thailand has ignored those rulings.
It's not all bullshit.
Which half though?
I dug into Girard just a little bit because of his recent influence on important people, and I came away with a strong condemnation of his entire process as incredibly moronic. I can't understand why he's given the time of day by otherwise intelligent people. "People's desires are influenced by their perception of what is desired by others" is not exactly a novel contribution to human psychology.
I can, in contrast, understand why Marx has had the influence he has had, in terms of his writings and in terms of the mechanics of the rise of the USSR.
I read Russell's A History of Western Philosophy in my early 20s and that did not help me here. Continentalists seem to get very mad at Analyticals misrepresenting them, without themselves having a consensus about what was "really" meant by any given thinker.
Maybe I missed something, but Light was not motivated by just a desire for power, and especially at first the idea seems to be that he only wants the Death Note to kill criminals; he really doesn't go after anyone else unless they're trying to catch him or he needs to confuse L. It seems more that the Death Note sort of takes over after a while, in the sense that power goes to his head. I read Light mostly as a tragic story of hubris, in which the power to destroy human life becomes the power to play God and remake everything into your vision of Justice.
L never came off that well in the story for me. He was just a guy who loved the mystery and found the whole thing to be a fascinating game. He had no moral reason to want to stop Light. He just wanted to catch Kira because it was a difficult case to solve.
If you exclude civilian ship crews, the total number of US civilian deaths in WWII is around 100, and single-digits if you only count state territories at the time (not Alaska or Hawaii). British civilian deaths, despite the Blitz, were still pretty small compared to Germany and Japan. Civilian casualty ratios are a terrible metric unless you want to be an Axis (or Soviet) apologist.
Or perhaps your enemy is good at hiding amongst civilians, but bad at killing their opponents.
Keep in mind how many rockets were launched by Hamas from Gaza against Israel with the intent to kill civilians. Just looking at the deaths without considering the causation of the numbers leads to poor judgements. Context matters.
You can't assign immorality to the side with greater competence against the side with demonstrated malicious intent with a low success rates.
Let's put it another way. How many Israeli combatants died in the recent war with Iran? How many Iranian civilians?
Good luck dividing by zero.
I mean more the long game. I suppose my view of Palestine is colored by my run-ins with its propaganda and activist-industrial complex.
Yeah, but the hacker Anonymous runs them all.
No offense, but this is insane moon-logic to me, and I need help grokking it. It's completely alien to the traditional logic of international law - “it is impossible to visualize the conduct of hostilities in which one side would be bound by rules of warfare without benefitting from them, and the other side would benefit from rules of warfare without being bound by them.” (H. Lauterpacht, “The Limits of Operation of the Law of War” (1953) 30 British Year Book of Int’l Law 206, 212).