A couple of months ago we discussed the cultural legacy of the Playboy mag of all things, under an effort post by @FiveHourMarathon. I was reminded of this by a recent lame-ass political scandal in Hungary, in which a local/district volunteer coordinator of the main opposition party, and apparently a single(?) mom, was doxxed by some pro-government journos as a former porner / sex worker. Technically I’m supposed to call her a former porn actress, but the actual level of ‘acting’ involved in all of this makes me decide against doing so; supposedly she also appeared in a grand total of one casting video (by Pierre Woodman), so calling her an actress would be a big stretch either way. Pretty much the only factor fueling this whole thing was that the party leader and MEP was pictured shaking hands with the ‘lady’ at some public events.

What does Playboy have to do with any of this, you might ask? Well, said party leader decided it’d be a swell idea, and also some sort of clever gotcha, to reverse the accusation of sleaziness by pointing out that a 51-year-old woman who’s a government commissioner and a former ‘Secretary of State for Sports’ (if you’re one of the few female politicians in Eastern Europe, it’s the sort of lesser-importance government position you can ever hope to fill, I guess) appeared in a photoshoot in the local edition of Playboy ages ago.

Anyway, I’m aware that culture wars are waged with maximal cynicism, dishonesty and opportunism, and this is a case of culture-warring alright; no need to remind me of that. Still, I found myself asking the rhetorical question: who the heck actually believes that posing for a photoshoot in a completely mainstreamed, slick, high-class magazine which eventually shifted to a women's fashion and lifestyle brand is the cultural/moral/social equivalent of anonymously getting your holes stuffed and swallowing cum/urine on camera for a handful of cash?

Can a robot turn a canvas into a beautiful masterpiece? (Yes)

Can an orangutan? (No)

[...] I'm also going to send out a bat signal for @faul_sname to chime in and correct me if I'm wrong.

This is actually an area of active debate in the field.

Shitpost aside, this seems reasonable to me, apart from a few quibbles:

  1. RLVR is absolutely not only a year old -- you can trace the core idea back to the REINFORCE paper from 1992 (a toy sketch follows this list). RL from non-verifiable rewards (e.g. human feedback) is actually the more recent innovation. But the necessary base model capabilities, training loop optimizations, and just general know-how and tooling for training a model that speaks English and writes good Lean proofs were just not there until quite recently.
  2. How important the static model problem is is very much a subject of active debate, but I come down quite strongly on the side of "it's real and AI agents are going to be badly hobbled until it's solved". An analogy I've found compelling but lost the source on is that current "agentic" AI approaches are like trying to take a kid who has never touched a violin before and give them sufficiently good instructions, before they touch the violin, that they can play Paganini flawlessly on their first try; and then, if they don't succeed on the first try, kicking the kid out, refining your instructions, and bringing in a new kid.
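
For anyone who hasn't seen it, here is a minimal toy sketch of the REINFORCE idea paired with a verifiable reward. Everything in it -- the action space, the checker, the numbers -- is invented for illustration; it's a sketch of the 1992 update rule, not anyone's actual RLVR training loop.

```python
# Toy REINFORCE (Williams, 1992) with a verifiable 0/1 reward.
# A softmax policy over a handful of discrete "answers"; the reward comes
# from a checker we fully trust (a stand-in for e.g. a Lean proof checker),
# not from human preference ratings.
import numpy as np

rng = np.random.default_rng(0)

NUM_ANSWERS = 4                   # toy discrete action space
CORRECT = 2                       # the verifiably correct answer
logits = np.zeros(NUM_ANSWERS)    # policy parameters
LEARNING_RATE = 0.5
baseline = 0.0                    # running mean reward, for variance reduction

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(200):
    probs = softmax(logits)
    action = rng.choice(NUM_ANSWERS, p=probs)

    # Verifiable reward: 1 if the checker accepts the answer, else 0.
    reward = 1.0 if action == CORRECT else 0.0

    # REINFORCE update: d/d(logits) of log pi(action) = one_hot(action) - probs.
    grad_log_pi = -probs
    grad_log_pi[action] += 1.0
    logits += LEARNING_RATE * (reward - baseline) * grad_log_pi

    baseline += 0.1 * (reward - baseline)

print("final policy:", np.round(softmax(logits), 3))  # probability mass ends up on CORRECT
```

The point is just that the update rule itself is over thirty years old; what's new is having a base model and a checker good enough that this loop produces something useful.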

Intelligence is the general-purpose cognitive ability to build accurate models of the world and then use those models to effectively achieve one's goals

I basically endorse this definition, and also I claim current LLM systems have a surprising lack of this particular ability, which they can largely but not entirely compensate for through the use of tools, scaffolding, and a familiarity with the entirety of written human knowledge.

To your point about the analogy of the bird that is "unintelligent" by the good-swimmer definition of intelligence, LLMs are not very well adapted to environments that humans navigate effortlessly. I personally think that will remain the case for the foreseeable future, which sounds like good news, except that I expect we will build environments that LLMs are well adapted to, humans won't be well adapted to those environments, and the math on relative costs does not look super great for the human-favoring environments. Probably. Depends a bit on how hard hands are to replicate.

There's a fair bit of other work (truck driving, security work etc.) that wartime experience also permits in peacetime contexts. However, most of the presumed remittance-sending work would be typical blue-collar labor (plumbers, nurses etc.) that many Ukrainians can do on the basis of that being their job already.

Thank you for providing an elaboration at my request. (And that is a sincere thank you. An exclamation point would feel flippant, but the gratitude is meant.)

mitigating the NEET attractor

What does this mean?

I have some doubts about the modal immigrant's desire to become German. Maybe you're right on some psychological level, but that's pretty intangible. Immigrants' ostentatious insistence on their own separation from the natives, on the other hand, is highly visible. And if in doubt, I'll take the more obvious interpretation: They don't want to be German, they just want to benefit off of Germany.

Ping me when you get around to writing that post.

Edit: You wear steel-framed, high-quality eyeglasses in my mind's eye. German-made, of course.

Oakleys, actually, on my optometrist's recommendation. I might have picked German-made ones if I'd had any idea of what my options were, but I'm still new to the whole glasses business.

Your wish shall be granted, given that most things that humans read will be AI slop in the near future, if only because that's just infinitely more economical than having humans write things. Soon enough everyone and their dog will accept LLM-generated text as the default provenance of written communication, just like we accepted that we naturally read digitally-transmitted text messages instead of communicating orally and in person.

Charitably, I'd say OP sacrificed a bit of accuracy in an attempt to convey a point.

I would have let it slide, except for the fact that it was followed up by:

Directionally speaking, we may be able to determine that "true" is an antonym of "false" by computing their dot product. But this is not the same thing as being able to evaluate whether a statement is true or false. As an example, "Mary has 2 children", "Mary has 4 children", and "Mary has 1024 children" may as well be identical statements from the perspective of an LLM. Mary has a number of children. That number is a power of 2. Now, if the folks programming the interface layer were clever, they might have it do something like estimate the most probable number of children based on the training data, but the number simply cannot matter to the LLM the way it might matter to Mary, or to someone trying to figure out how many pizzas they ought to order for the family reunion, because the "directionality" of one positive integer isn't all that different from that of any other. (This is why LLMs have such difficulty counting, if you were wondering.)

Both claims are wrong, and using the former to justify the latter is confused and incorrect thinking.

You can take em-dashes and other perfectly reasonable typography from my cold dead hands.

The inability to provide a metric for use value makes this moralism, not an economic theory.

You can make similarly sentimental arguments that some things are worth economic inefficiency; hell, you can make convincing ones, but that has essentially no predictive power.

The question then is why should one listen to Marxist moralism instead of Christian moralism, even in these specific matters?

Charitably, I'd say OP sacrificed a bit of accuracy in an attempt to convey a point.

Yes, but the problem is that OP's 'sacrificed accuracy' level of explanation about dot products of word vectors is clearly an explanation of a different architecture: a word-embedding model such as word2vec, which was all the rage in 2013. Charitably, yes, old transformer-based LLMs usually had an embedding layer as a pre-processing step to reduce the input dimension (I think the old GPT papers described an embedding layer step, and it is mentioned in all the conceptual tutorials). But the killer feature that makes LLMs a massive industry is not the 2010s-tier embeddings (I don't know, do the modern models even have them today?); it is the transformer architecture (multi-head attention, multiple layers of fancy matrix products), which is where all the billions of parameters go and which has a nearly magical capability for next-word prediction, using word context and relationships to produce intelligible text.
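
To make the distinction concrete, here is a toy numpy sketch of the 2013-style static-embedding arithmetic the quoted explanation is actually describing. The vectors are invented for illustration; real word2vec embeddings are learned from co-occurrence statistics, and antonyms often end up close together rather than pointing in opposite directions.

```python
# Toy static word embeddings, word2vec-style (invented 4-d vectors; real models
# learn roughly 100-300 dimensions). This dot-product machinery is a separate
# thing from the attention layers where a transformer's billions of parameters live.
import numpy as np

embeddings = {
    "true":  np.array([ 0.9, 0.1, 0.0,  0.2]),
    "false": np.array([-0.8, 0.2, 0.1,  0.1]),
    "maybe": np.array([ 0.1, 0.7, 0.3, -0.1]),
}

def cosine(a, b):
    """Normalized dot product: +1 = same direction, 0 = unrelated, -1 = opposite."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["true"], embeddings["false"]))  # strongly negative for these toy vectors
print(cosine(embeddings["true"], embeddings["maybe"]))  # near zero for these toy vectors
```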

I don't think this is anywhere close to true.

FWIW my first thought on seeing this thread was "oh yeah, I guess it has actually been a while since the last major Ukraine discussion here" so I think you might still be on track.

I wish people would stop trying to make tortured analogies like this. The US doesn't have a good comparison in its history to Taiwan, nor to Ukraine; stop trying to force it.

I think "national security threat" is overselling it a little bit, but it's an extremely potent propaganda weapon. The fact that China hasn't weaponized it yet has more to do with their patience in waiting for the moment it matters than with some lack of utility.

People love to dismiss the soviet system

You'll find no such love from me. I have a surprising amount of sympathy for economic socialism as an engineer. But that's only translated to a deeper understanding of its failings.

lack of compute power and excessive focus on military spending made it impossible for the Soviet standard of living to keep up with the West.

Market socialists love to say this, but it's wrong. No amount of increase in compute power can solve the Economic Calculation Problem, because it's not inherently about compute power but about how computers can't read minds. And indeed MarSocs have been unable to produce models even today that would solve the problem, despite increases in compute power that give a single man today more than all that was available to nations then. Orthodox Marxists meanwhile well and truly gave up on even trying.

Intensification-90 could not work for the same reasons paper-based Gosplan couldn't. Economic exchange requires informational inputs that planners cannot produce outside of very specific circumstances (like war), because the fundamental assumption that economic value is distinct from individual desire, and can thus be computed ahead of time, is wrong.

I feel like the easy explanation for that from within the LTV is that the labor equivalence ratios between different goods aren't calculated correctly.

That's often advanced, yes, but nobody ever actually produces a "correct" formula for the ratios that actually makes any empirical predictions.

Can you explain the Rothbard quote a bit more?

One of Marx's most famous conclusions drawn from the LTV is that, since value is created by labor, profit rates must be lower in capital-intensive industries and higher in labor-intensive industries.

However, it has been observed by Smith and Ricardo (and ever since) that profit rates tend to equalize across all industries. How is this possible if profit rates are supposed to always be higher in some industries?
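
For concreteness, a standard worked illustration of the tension (the numbers are invented for the example): with constant capital $c$, variable capital $v$, and surplus value $s$, Marx's rate of profit is

$$ r = \frac{s}{c + v}. $$

If the rate of surplus value $s/v$ is the same 100% everywhere, a labor-intensive industry with $c = 20$, $v = 80$ gets $r = 80 / (20 + 80) = 80\%$, while a capital-intensive one with $c = 80$, $v = 20$ gets $r = 20 / (80 + 20) = 20\%$; yet, as noted, observed profit rates tend to equalize.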

In the posthumously published Volume 3 of Capital, Marx purported to solve this problem.

Instead of actually providing some relationship between the rates of profit, Marx punts and uses prices of production and capital mobility to explain it, thus reinventing an ad-hoc marginalism to solve this particular problem.

Either the labor theory of value explains prices (but then the transformation problem is unsolved) or competition explains prices (but then why do we need the labor theory?).

"may" means "may", it's not an assertion. At least when I've looked at their stuff, they usually are clear about this kind of thing, and language like this is speculative.

Charitably, the great insight of the economic schools that favor redistribution (a broader set than Marxism/socialism) is that people who fall below a certain standard of living lose the ability to participate in the net-gain market, and so it's in everyone's favor to help them. Now, it's still under debate whether the math works out, but I don't think it inherently doesn't work out! Or, you could say that leftist economics (the capitalist-variant kind) had the realization that free markets develop monopolies too easily, so you need a certain level of intervention to stop it (e.g. Walmart using economies of scale to undercut a local supermarket for years on end, driving them out of business, only to raise prices once the competition lessens).

I'm going to guess neither experiment is going to end well here. (I have, for what it's worth, seen a couple of "Adolf Hitler World Tour 1939-1945" shirts around, but have never seen a Stalin shirt.)

Even if we assume that the response would be considerably slanted towards the Hitler shirt getting the worse reception, isn't that quite nuts as a standard? The argument is that "Stalin gets a pass", and if the standard of comparison for "getting a pass" is getting a better reaction than Hitler, pretty much everything ever gets a pass.

Really, there's long been this sense that socialism and communism are one and the same, and bad. That's to an extent true. But the reality of the situation is that most modern American socialists are not socialists. They are "democratic socialists". I realize that's a slippery label, but the core idea is that capitalism is good (they would rather die than say it, but it does underpin their worldview), and that you can use central government power to enforce a certain level of redistribution on top of that system (providing a floor for quality of life, but not necessarily any more than that). Also, you can "tame the beast" a little bit if you have enough smart rules in place for how capitalism works. And you know what? I feel like that's a valid and defensible worldview/proposal, even if you disagree.

So through that lens, I'd say that modern lefties are on some level aware that socialism doesn't work. Many prominent lefties do try to think about ways to make capitalism better, even numerically! (There's a reason Modern Monetary Theory is popular on the left: it offers a way, within a capitalist framework, of making the numbers work out. Oversimplified, you can just print money to uphold high social spending, as long as you are still the world's reserve currency and you take certain tax actions.) It's true that you don't always get this vibe, but that's because the loudest people online are the most recent college grads who haven't yet followed the trajectory of economic thought to its leftist local resting place, and still might be Marxists (for now). In short, the American political system provides an off-ramp from capitalism other than actual Marxism: AOC, Bernie, Elizabeth Warren, all are variants on exactly this idea, and they have started to get a portion of power because their ideas are less crazy than those of the actual Marxists. They still mimic the language, because they have the heritage and don't want to alienate young supporters, but those are not intrinsic to the voting-public appeal.

Pol Pot had the best (worst?) numbers per capita, but by the absolute number of murders he is distinctly behind Mao, Stalin, and Hitler.

Is this a joke? Please help me out here.

I don't know what gap moe means. Please explain.

I'm against giving up perfectly legit turns of phrase just because AI also tends to use them, out of fear that someone might judge the output to be artificial.