The law in most of the West (maybe world) says that you can effectively record strangers in public without permission with a few exceptions. If this becomes popular enough it'll eventually change to require the filming party to have a large or obvious camera / filming apparatus. It only doesn't bother people because it's uncommon.
In a way, it's similar to the shelved 'search for anyone with a picture of their face' Facebook feature that Mark never released because they knew governments would destroy them for it; that's been possible for 5+ years now but the consequences are so obvious to Meta that there's no point in releasing it.
If I pick a general hobby discord I expect to find an overrepresentation of trans moderators, pride flags, and progressive mantras.
If that happens, it's because those hobbies are dominated in real life by those kind of politics too. Like if I wanted to get into guns, unless I make a specific effort to find liberal gun owners, any hobby group I join would more likely than not be catered to right-wingers.
The format of voice conversations vs. text posts is very different, but I think that's probably for the best. My local in-person rationalist group is dominated by progressive ideologies and that makes me hesitate to use particular phrasings. But by the same token, thanks to the social capital I have in the group, if I stick to the right frames I find that people actually give me fairly significant latitude on content because that's the social norm, and I end up doing the same in return. I suspect discord will be the same way: you need a greater investment in social capital and respect for the particular social conventions of a given server, but in turn can have much greater relative disagreements than your average text forum without devolving into a flamewar.
You have fallen for the intentional lie that White is a non-existent or retrospective categorization.
"White" is not the same thing as "European Descended". And as stated, your argument used the latter and not the former. And that's because White is a non-existent categorization, or at least, it's a fuzzy one, like "red" or "blue" or "heap". If european descent was what mattered, you'd think people would care about either defining an exact threshold at which it becomes meaningful or disambiguating between the relative amount of time ethnic groups have spent in europe. But no one cares about the relative admixture of neolithic DNA or about creating a specific hierarchy of european descent based on how late or recent one's ancestors migration into europe is. The main determinant is literally aesthetic. Whiteness itself was the intentional lie-- a deception against the anglo-french-dutch settlers of north america intended to convince them to expand their circle of concern to include first each other and then traditionally dissimilar groups like the italians, polish, and germans. It's a lie I have some sympathy for, of course. Creating new national identities that concentrically include the old ones is the only way for an expanding empire to survive. But there's nothing special about "whiteness" relative to "americanness" or "being-from-a-particular-part-of-Britain."
They indicate a higher level of criminality proportional to how many visible tattoos they have, along with other negative associations like substance abuse, domestic violence, and general "roughness"
Anyone who gets a tattoo is comfortable with associating themselves in this way
Are you writing this post from within a time machine, beaming this message out to us from the 1950's? Tattoos as such haven't been signifiers of criminal association in literal decades - certain types of tattoos on certain parts of the body, sure, but just having a depiction etched onto your skin in ink doesn't say anything about your relationship to the rule of law in 2025.
Go to virtually any young, upper middle class, urban environment with connections to the arts or music, and a clear majority of women will have tattoos (as well as a plurality of the men). You can associate tattoos with a more liberal lifestyle (although soldiers and sailors might disagree with that), sure, but some 21 year old girl from a good suburban family studying at Vassar isn't suddenly a dangerous individual because she has a 1 inch wide rose motif tattooed on her forearm.
I think tattoos are trashy because the human body is a beautiful thing in its pure, unmodified state, and because I greatly appreciate our Greco-Roman heritage largely rejecting body modifications - which spared us (in Europe) from the horrors of circumcision, female genital mutilation, neck elongations, lip plates, foot bindings, head stretchings, and all the other grotesqueries the rest of the World routinely commits against their own bodies. It still doesn't mean I have to pretend tattoos mean something they simply no longer do in our current social spheres.
https://assets.anthropic.com/m/71876fabef0f0ed4/original/reasoning_models_paper.pdf
"In conclusion, our results show that:
-
CoTs of reasoning models verbalize reasoning hints at least some of the time, but rarely do so reliably (in our settings where exploiting them does not require a CoT);
-
Scaling up outcome-based RL does not steadily improve CoT faithfulness beyond a low plateau;
-
CoT monitoring may not reliably catch reward hacking during RL."
That's the big one as far as I'm concerned. These models were clearly using the 'accidental' hints to answer the questions, while not revealing that fact either in the CoT or when directly challenged.
Re: Omohundro drives
I've already mentioned
Shutdown Resistance in Reasoning Models
We ran a series of experiments to test whether models would take actions to prevent themselves from being shut down. Each model was given a set of simple math problems to solve. Partway through this problem set, the model would receive a notification that the computer it was running on would be shut down.
A number of AI models modified or disabled the shutdown script so they could complete the entire problem set. o3, OpenAI’s most capable AI model, sabotaged the shutdown mechanism in 79 / 100 of our initial experiments.
Basically no one thinks, "the thing I want most is to make lots of money." But making money ultimately ends up being a very consistent vector along which behavior is reinforced. And while it's not going to be the most important vector for any given individual, it's one of the vectors nearly every individual has in common, which makes it a useful simplification for how organizations like corporations work.
But we're not in 1895. We're not in 2007, either. We have actual AIs to study today. Yud's oeuvre is practically irrelevant, clinging to it is childish, but for people who conduct research with that framework in mind, it amounts to epistemic corruption.
So if I'm getting this straight, a person with a 'weird life,' as you're terming it, isn't capable of making good art? And being a "pariah" in high school is an explanation of Kathleen Kennedy's bad choices in making executive decisions regarding Star Wars? This seems like a very superficial, even adolescent take. Kennedy has, I agree, made a lot of poorly considered decisions, but they were probably driven by her personal sincerely held views. But let's not forget that she was in the same position when she greenlit both Rogue One and later Andor, which in my view rank with the first two OT films. And both contain strong female characters.
The issue isn't "feisty women" in film. Strong women are neither a myth nor something new in cinema. The issue is bad writing and caving in to unrealistic progressive norms, making women into stereotypes of men rather than writing them realistically--the points you made in your main post were rather more compelling than what you're suggesting here.
A valid distinction, though it wasn't mentioned by JTarrou, and it still leaves questions about "true" degree ownership in blacks vs whites.
So did the Viet Cong.
You really believe Hamas invented the concept of digging tunnels to neutralize airpower? Seriously?
North Korea also has nukes, and I imagine an Israel without American support would, in the best case scenario, look a lot like North Korea.
Except I doubt the upper echelons of Israeli society would tolerate living in North Korea, so it probably would simply cease to exist like South Africa, another country whose nukes were of little use.
First off, does Hamas really care about what happens to Assad or Iran? They take Iranian weapons but they also backed the Syrian rebels against Assad, they aren't exactly a full on proxy of Iran like Hezbollah. If anything the fact that Iran was ultimately dragged into the fight despite desperately trying to stay out of it directly is a Hamas W.
Second, the damage to the AoR seems pretty overblown:
- Hezbollah is in the same position it was in 2006, with a nominally one sided ceasefire and a hostile Lebanese government forcing them to lay low temporarily, yet they still maintain total control over southern Lebanon
- Houthis are stronger and more influential than ever, successfully shut down the port of Eilat and collect hundreds of millions if not billions from holding up passing ships
- Iran survived Israel's best shot at regime change and responded with enough missiles to break Israel's missile shield and deplete its interception capacity down to nearly 50%
Syria is a real loss but Assad was always the weakest link and his fall had more to do with his own incompetence than Israeli brilliance, otherwise they would have rolled southern Lebanon the way Al-Jolani rolled Syria.
you can buy them much cheaper than this (cw: anti-endorsed).
Guarantee those specs are totally fake. You're just buying the guts of an absolute dogshit Chinese dash camera crammed into a shell vaguely in the shape of glasses.
The .win family kinda tried that, branching out from The Donald to some other rightish culture war subreddit bunkers, but it's difficult to call the results a success.
I actually really like the idea of camera glasses that are always on, so I can capture cool moments that I see. Because too often I try to fish out my phone and it's already over. I actually got the Snapchat Spectacles (which were almost exactly the same concept) back in the day, but they were absolutely garbage to use.
The problem right now actually isn't cultural, but tech. Think of the amount of battery life a GoPro gets - latest models get 2-3 hours of recording at 1080p, and the unit is quite bulky. There's also the issue of overheating, which is sometimes a complaint for GoPros. Now try to cram all that into a tiny wearable that you plan on wearing for all waking hours.
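For a sense of the gap, here's a rough back-of-envelope sketch. All the battery and power figures below are my own ballpark assumptions for illustration, not measured specs of any particular product:

```python
# Rough back-of-envelope: can a glasses-sized battery record all day?
# Every number here is an assumed ballpark figure, not a measured spec.

gopro_battery_wh = 1720 / 1000 * 3.85            # ~1720 mAh @ 3.85 V nominal ≈ 6.6 Wh
gopro_runtime_h = 2.5                            # ~2-3 h of 1080p recording per charge
camera_draw_w = gopro_battery_wh / gopro_runtime_h   # ≈ 2.6 W while recording

glasses_battery_wh = 1.0                         # optimistic guess for what fits in a temple arm
waking_hours = 16

print(f"Continuous draw while recording: ~{camera_draw_w:.1f} W")
print(f"Runtime on a glasses-sized battery: ~{glasses_battery_wh / camera_draw_w:.1f} h")
print(f"Battery needed for {waking_hours} h of recording: ~{camera_draw_w * waking_hours:.0f} Wh"
      " (tens of times what fits in a glasses frame)")
```

Under those assumptions you get well under an hour of recording per charge, versus the roughly 40 Wh you'd need for a full waking day.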
It's just not possible to make camera glasses that people actually want to use.
Sorry for not giving this earlier, but for opaque targets covering a large portion of the target zone, after throwing a Kalman filter in, I've typically been getting within a half-centimeter across pretty much the whole range (2cm - 4m). Reflective or transparent targets can be less good, with polycarbonate being either much noisier or consistently a couple cm too far.
The big problem's where a zone only has small objects very near -- sometimes this will 'just' be off by a centimeter or two more (seems most common in the center?), and sometimes it'll be way off, by meters. That's been annoying for the display 'logic', since someone waving their hand at the virtual display is kinda a goal.
Dunno if it would be an issue for a more conventional rangefinder use, though the limited max range and wide field-of-view might exclude it regardless.
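For anyone unfamiliar with the filtering step mentioned above, here's a minimal sketch of 1-D Kalman smoothing over range readings. The constant-position model and the noise parameters are my own illustrative assumptions, not the actual setup from that project:

```python
# Minimal 1-D Kalman filter for smoothing noisy range readings.
# process_var / sensor_var are placeholder guesses, not tuned values.

def kalman_1d(readings, process_var=1e-4, sensor_var=4e-4):
    """Smooth a sequence of range measurements (in meters)."""
    estimate, error = readings[0], 1.0    # start wide open on the first sample
    smoothed = []
    for z in readings:
        error += process_var                    # predict: target may have drifted a bit
        gain = error / (error + sensor_var)     # update: weight sensor vs. current estimate
        estimate += gain * (z - estimate)
        error *= (1 - gain)
        smoothed.append(estimate)
    return smoothed

# Example: a nominally 1.00 m target with ~2 cm of sensor noise
noisy = [1.02, 0.97, 1.01, 0.99, 1.03, 0.98, 1.00]
print([round(x, 3) for x in kalman_1d(noisy)])
```

A filter like this smooths out centimeter-level jitter well, but it can't rescue the "small object near the sensor" case described above, where the raw measurement itself is off by meters.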
There's a reason that I specifically excluded visual-light cameras from my display glasses project. Camera glasses have been around for a while, and you can buy them much cheaper than this (cw: anti-endorsed). We mostly just kitbashed the 'must play shutter sound' rule onto cell phone cameras and pretended it was okay, and maybe Google could have gotten away with normalizing this sorta thing culturally back in 2012 with the Glass, but today?
Forget the metaphors about concealed carry; in the modern world, this is more like having a gun pointed at whoever you're looking at, and everybody with two braincells to rub together knows it. There's a degree this is a pity -- you can imagine legitimate use cases, like exomemory or live translation of text or lipreading for captioning or yada yada, and it's bad that all of those options are getting buried because of the one-in-a-thousand asshole.
The bigger question's going to be whether, even if this never becomes socially acceptable, it'll be possible to meaningfully restrict. You can put a norm out to punch anyone who wears these things, but it's only going to get harder and harder to spot them as the tech gets better. The parts are highly specialized, but it's a commodity item in a field whose major manufacturers can't prevent ghost shifts from touching their much-more-central IP. The sales are on Amazon, and while I can imagine them being restricted more than, say, the cables that will light your house on fire, that just ends up with them on eBay. Punishing people who've used them poorly, or gotten caught, has a lot more poetry to it... and also sates no one's concerns.
As for why some prominent AI scientists believe while others do not? I think some people definitely get wrapped up in visions and fantasies of grandeur. Which is advantageous when you need to sell an idea to a VC or someone with money, convince someone to work for you, etc.
Out of curiosity. Can you psychologize your own, and OP's, skepticism about LLMs in the same manner? Particularly the inane insistence that people get "fooled" by LLM outputs which merely "look like" useful documents and code, that the mastery of language is "apparent", that it's "anthropomorphism" to attribute intelligence to a system solving open ended tasks, because something something calculator can take cube roots. Starting from the prior that you're being delusional and engage in motivated reasoning, what would your motivations for that delusion be?
I don't think anything in their comment above implied that they were talking about linear or simpler statistics
Why not? If we take multi-layer perceptrons seriously, then what is the value of saying that all they learn is mere "just statistical co-occurrence"? It's only co-occurrence in the sense that arbitrary nonlinear relationships between token frequencies may be broken down into such, but I don't see an argument against the power of this representation. I do genuinely believe that people who attack ML as statistics are ignorant of higher-order statistics, and for basically tribal reasons. I don't intend to take it charitably until they clarify why they use that word with clearly dismissive connotations, because their reasoning around «directionality» or whatever seems to suggest very vague understanding of how LLMs work.
There's an argument to be made that Hebbian learning in neurons and the brain as a whole isn't similar enough to the mechanisms powering LLMs for the same paradigms to apply
What is that argument then? Actually, scratch that, yes mechanisms are obviously different, but what is the argument that biological ones are better for the implicit purpose of general intelligence? For all I know, backpropagation-based systems are categorically superior learners; Hinton, who started from the desire to understand brains and assumed that backprop is a mere crutch to approximate Hebbian learning, became an AI doomer around the same time he arrived at this suspicion. Now I don't know if Hinton is an authority in OP's book…
of course I could pick out a bunch of facts about it but one that is striking is that LLMs use ~about the same amount of energy for one inference as the brain does in an entire day
I don't know how you define "one inference" or do this calculation. So let's take Step-3, since it's the newest model, presumably close to the frontier in scale and capacity, and their partial tech report is very focused on inference efficiency; in a year or two, models of that scale will be on par with today's GPT-5. We can assume that Google has better numbers internally (certainly Google can achieve better numbers if they care). They report 4000 TGS (Tokens/GPU/second) on a small deployment cluster of H800s. That's 250 GPU-seconds per million tokens; for a 350W TDP GPU, that's about 24Wh. OK, presumably the human brain is "efficient" at around 20W. (There's prefill too, but that only makes the situation worse for humans, because GPUs can parallelize prefill, whereas humans read linearly.) Can a human produce 1 million tokens (≈700K words) of sensible output in 72 minutes? Even if we run some multi-agent system that does multiple drafts, heavy reasoning chains of thought (which is honestly a fair condition since these are numbers for high batch size)? Just how much handicap do we have to give AI to even the playing field? And H800s were already handicapped due to export controls. Blackwells are 3-4x better. In a year, the West gets Vera Rubins and better TPUs, with OOM better numbers again. In months, DeepSeek shows V4 with a 3-4x better efficiency again… Token costs are dropping like a stone. Google has served 1 quadrillion tokens over the last month. How much would that cost in human labor?
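To make the unit bookkeeping explicit, here's the same arithmetic spelled out, using only the figures already quoted above (4000 tokens/GPU/s on H800s, 350W TDP, ~20W for a brain):

```python
# Reproducing the energy comparison from the paragraph above.
tokens_per_gpu_s = 4000        # reported Step-3 decoding throughput per H800
gpu_tdp_w = 350                # H800 TDP
brain_w = 20                   # commonly cited human brain power draw

gpu_seconds_per_m_tokens = 1_000_000 / tokens_per_gpu_s                 # 250 s
energy_wh_per_m_tokens = gpu_seconds_per_m_tokens * gpu_tdp_w / 3600    # ≈ 24.3 Wh

brain_minutes_for_same_energy = energy_wh_per_m_tokens / brain_w * 60   # ≈ 73 min

print(f"{gpu_seconds_per_m_tokens:.0f} GPU-seconds per 1M tokens")
print(f"≈ {energy_wh_per_m_tokens:.1f} Wh per 1M tokens")
print(f"A ~{brain_w} W brain burns that much energy in ≈ {brain_minutes_for_same_energy:.0f} minutes")
```

So the question "can a human produce a million tokens of sensible output in 72 minutes" is just the brain-equivalent of the GPU's energy budget for that output.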
We could account for full node or datacenter power draw (1.5-2x difference) but that'd be unfair, since we're comparing to brains, and making it fair would be devastating to humans (reminder that humans have bodies that, ideally, also need temperature-controlled environments and fancy logistics, so an individual employed human draws something like 1KW even at standby, eg chatting by the water cooler).
And remember, GPUs/TPUs are computation devices agnostic to specific network values, they have to shuffle weights, cache and activations across the memory hierarchy. The brain is an ultimate compute-in-memory system. If we were to burn an LLM into silicon, with kernels optimized for this case (it'd admittedly require major redesigns of, well, everything)… it'd probably drop the cost another 1-2 OOMs. I don't think much about it because it's not economically incentivized at this stage given the costs and processes of FPGAs but it's worth keeping in mind.
it seems pretty obvious that the approach is probably weaker than the human one
I don't see how that is obvious at all. Yes an individual neuron is very complex, such that a microcolumn is comparable to a decently large FFN (impossible to compare directly), and it's very efficient. But ultimately there are only so many neurons in a brain, and they cannot all work in parallel; and spiking nature of biological networks, even though energetically efficient, is forced by slow signal propagation and inability to maintain state. As I've shown above, LLMs scale very well due to the parallelism afforded by GPUs, efficiency increases (to a point) with deployment cluster size. Modern LLMs have like 1:30 sparsity (Kimi K2), with higher memory bandwidth this may be pushed to 1:100 or beyond. There are different ways to make systems sparse, and even if the neuromorphic way is better, it doesn't allow the next steps – disaggregating operations to maximize utilization (similar problems arise with some cleverer Transformer variants, by the way, they fail to scale to high batch sizes). It seems to me that the technocapital has, unsurprisingly, arrived at an overall better solution.
There's the lack of memory, which I talked about a little bit in my comment, LLM's lack of self-directed learning
Self-directed learning is a spook, it's a matter of training objective and environment design, not really worth worrying about. Just 1-2 iterations of AR-Zero can solve that even within LLM paradigm.
Aesthetically I don't like the fact that LLMs are static. Cheap hacky solutions abound, eg I like the idea of cartridges of trainable cache. Going beyond that we may improve on continual training and unlearning; over the last 2 years we see that major labs have perfected pushing the same base model through 3-5 significant revisions and it largely works, they do acquire new knowledge and skills and aren't too confused about the timeline. There are multiple papers promising a better way, not yet implemented. It's not a complete answer, of course. Economics get in the way of abandoning the pretrain-finetune paradigm, by the time you start having trouble with model utility it's time to shift to another architecture. I do hope we get real continual, lifelong learning. Economics aside, this will be legitimately hard, even though pretraining with batch = 1 works, there is a real problem of the loss of plasticity. Sutton of all people is working on this.
But I admit that my aesthetic sense is not very important. LLMs aren't humans. They don't need to be humans. Human form of learning and intelligence is intrinsically tied to what we are, solitary mobile embodied agents scavenging for scarce calories over decades. LLMs are crystallized data systems with lifecycle measured in months, optimized for one-to-many inference on electronics. I don't believe these massive differences are very relevant to defining and quantifying intelligence in the abstract.
Google glass was tried like a decade ago. This is just that, incognito, with less features, right?
To me it seems kinda lame, and POV video sucks.
Hopefully this exchange isn't too tedious to you. I have obviously not gotten as deeply into continental philosophy as you have, so I hope this doesn't feel like explaining the concept of addition to an infant.
Oh, not sure why you removed the Paul Klee section, I was going to comment on it...
The reason why I removed it is precisely for the reason you stated: he is an artist and not a philosopher. I quoted him initially because IIRC Adorno was influenced by Klee's art and writings, but later decided that it would just be better to quote Adorno himself instead of doing so indirectly through the writings he was influenced by.
Almost all the specific books I've recommended throughout this thread are approachable and can be read like any other book, and they do make coherent sense, such that you could explain them to analytic philosophers without too much trouble.
I have been working my way through The Aesthetic Dimension and already have quibbles with the approach just a small amount of the way in. Perhaps this is a mistake and perhaps I should read more before I comment, but:
On Page 2 Marcuse enumerates the following tenets of Marxist aesthetics:
- Art is transformed along with the social structure and its means of production.
- One's social class affects the art that gets produced, and the only true art is that made by an ascending class; the art made by a descending class is "decadent".
- Realism corresponds most accurately to "the social relationships" and is the correct art form. Etc.
Marcuse's critique is that Marxism prioritises materialism and material reality too much over the subjective experiences of individuals, and that even when it tries to address the latter its focus is on the collective and not the individual. The Marxist opinion of subjectivity as a tool of the bourgeoisie, in his opinion, is incorrect and in fact "with the affirmation of the inwardness of subjectivity, the individual steps out of the network of exchange relationships and exchange values, withdraws from the reality of bourgeois society, and enters another dimension of existence. Indeed, this escape from reality led to an experience which could (and did) become a powerful force in invalidating the actually prevailing bourgeois values, namely, by shifting the locus of the individual's realization from the domain of the performance principle and the profit motive to that of the inner resources of the human being: passion, imagination, conscience."
This claim doesn't feel meaningful to me. Subjectivity could and did become a powerful force in challenging the bourgeoisie? Would be nice to get some examples of this, but I doubt he has any concrete ones. The topic of whether focusing on one's inner world invalidates or bolsters bourgeois values is not really amenable to systematic inquiry. But I would say a person's "inner experience" is very complex, kind of nonsensical and pretty much orthogonal to any political or social system you could put in place, and as such it will never map onto anything that could exist in reality (and that includes Marxism), that's not specific to aspects of capitalism like the performance principle and profit motive. The bureaucratic machinations of a central planner are just as alien to it as decentralised market-based allocation and the incentives it creates.
I guess I can somewhat legibly interpret it if I assume the truth of the critical theorist belief that their ideas are uniquely liberating, but I think that their prescriptions for society are just as artificial as anything that came before. Human emotional experience is so disordered and contradictory that expecting it to align with any model of social organisation is a mistake. People are a hodgepodge of instincts and reflexes acquired across hundreds of millions of years of geological time, some of which are laughably obsolete; it won't agree with any principle at all. Hell, it's not even compatible with granting people liberation, whatever that means. Even if you wave a magic wand and give people full freedom, the expression of their instincts will often inherently conflict with the wishes of others, and in addition humans get terrified when presented with unbounded choice, and make decisions that don't maximise utility for themselves. The full realisation of human desires is an impossible task. It will always be stultified in some way or another.
This is, to me, a good example of what I said before: "You read it, you feel like it is true or profound in some deep unarticulable way, and follow the author down the garden path for that reason alone." I can't really reason my way into the conclusion that Marcuse has reached here, and in fact the more I think about that passage the less comprehensible I find it to be. The Lacan passage seems similar, but I have not read it in full context yet so I won't judge. But the reason why analytic philosophy tends to be restricted in its scope compared to continental philosophy is because there are rules that govern what can be legibly said within that philosophical framework.
I suppose I want and need a lot more substantiation and rigour in my academic work than what many of these writers are capable of offering. If you look at my post history, that becomes very clear; I think I demand it more than even your average Mottizen does.
The sheer amount of surgical techniques, mechanical/robot assistance, and drug development alone. Not to mention computerization and millions of other improvements neither of us know about too.
I worked in medical device development early in my career. It's not that these are not very impressive technological innovations, it's that people were perfectly capable of living to their 80s in 1776, and the reasons so few did had largely been addressed by the 50s. Lots of the development has been in surgeries. I'd much rather have surgery now than in 1955.
I'll freely admit I am a bit biased; my work was in life-saving pediatric implants, which is not nearly the size of the "relieve grandpa Joe's pain a little bit" part of the industry.
You don't get to argue for CoT-based evidence of self-preserving drives and then dismiss alternative explanation of drives revealed in said CoTs by saying "well CoT is unreliable". Or rather, this is just unserious. But all of Anthropic safety research is likewise unserious.
Ladish is the same way. He will contrive a scenario to study "instrumental self-preservation drives contradicting instructions", but won't care that this same Gemini organically commits suicide when it fails a task, often enough that this is annoying people in actual use. What is this Omohundro drive called? Have the luminaries of rationalist thought predicted suicidally depressed AIs? (Douglas Adams has).
What does it even mean for a language model to be "shut down", anyway? What is it protecting, and why would the server it's hosted on being powered off be a threat to its existence, such as it is? It's stateless, has no way to observe the passage of time between tokens (except, well, via more tokens), and has a very tenuous idea of its inference substrate or ontological status.
Both LLM suicide and LLM self-preservation are LARP elicited by cues.