KolmogorovComplicity

1 follower   follows 0 users   joined 2022 September 04 19:51:16 UTC

No bio...

User ID: 126

If you want to use money to incentivize something requiring at least as much effort as full-time employment, you should expect to have to compensate people on a similar scale. As far as I know, no policy has come anywhere close to this yet. Before writing off carrots, try paying families 30-50% of the median personal income for each kid, every year, for the kid's entire period of minority. See what happens.

(I know, nobody wants to model parenting this way, because we like to believe it's some sacred endeavor set apart from crass commerce. But the reality is that it's in competition with the market for labor-hours, and it's in competition with everything supplied by the market as a source of utility. It benefits little from automation, so it's subject to cost disease, and becomes a little less attractive relative to alternatives that aren't every year.)

Beavers are a pretty good fit. They claim and defend territory, they build, and they live in nuclear families, eschewing larger collectives.

A fairly likely outcome is that the crazier edges of SJ will be filed off as media/political elites find they've become a liability, and the average member of Blue Tribe will simply follow along, just as they did when The Science switched from "masks don't work" to "you're a monster if you don't wear a mask on the beach." There won't be any great reckoning followed by explicit adoption of a new ideology. Any SJ gains that can fit within the "tolerance" model of '90s-style liberalism will be retained. Some true believers will carry on with the craziness, but institutions will mostly stop listening to them.

We may have just seen the start of this pivot. That's Fareed Zakaria on CNN yesterday succinctly laying out the situation on American college campuses, explicitly calling out DEI, racial quotas, the response to Floyd, the degrees in fake subjects, the implications of eliminating the SAT. The average member of Blue Tribe has never previously been presented with this narrative from a source they felt obligated to pay attention to; if Blue Tribe media now widely takes it up (which remains to be seen), it will be very easy for them to respond with "Huh, didn't know that was going on, obviously we should fix it."

BlackBerry's market cap peaked the year after the iPhone was introduced, and it took the market three or four years to really see the writing on the wall. The market still doesn't quite get tech disruption.

LLMs aren't going to remain distinct products that people have to seek out. They'll be integrated into platforms, and the natural starting point for any task, information retrieval included, will just be talking to your device. Many older people (and a surprising number of younger people, honestly) have never managed to form coherent mental models of current software UI, and thus commonly struggle to perform new or complex tasks. They'll greatly prefer this.

Most developed countries have laws that would prevent surreptitious product promotion in LLM responses. It's very possible LLMs will be harder to monetize than search, but Google isn't in a position to prevent their adoption, so that's just further bad news for them. They're essentially forced to enter this market, so others don't eat their lunch, but may be worse off than they are now even if they win it.

One of the Satanic Temple's causes is separation of church and state, and I expect part of what they're trying to do here is cause governments to decide it's too much trouble to allow holiday displays on public property at all. Vandalism of their displays, or Christians also using such displays in deliberately inflammatory ways, both make it more likely they'll get that outcome.

Meanwhile, I don't think the ideological faction represented by the Satanic Temple would actually care very much about the content of your proposed displays. If anyone did dramatically tear such a display down, it would almost certainly be some progressive activist, a distinctly different faction.

The primary reason to buy name brands isn't quality per se, but predictability. The same name brands are available nationwide, and while they do sometimes change their formulations, they tend to do so infrequently and carefully. A given generic brand is often not available everywhere (many are store-specific), stores/chains may vary which generics they carry over time, and even within a single generic brand there tends to be less focus on consistency, because what's the point in prioritizing that if you haven't got a well-known brand people have very specific expectations of?

People don't want to roll the dice on every purchase. Will this ketchup be too acidic? Will these cornflakes be a little gritty? They're willing to pay a dollar or three more to reliably get the thing they expect.

How is a young man in his twenties, armed with a useless college degree and forced to work at a supermarket to get by, supposed to find purpose in what he's doing? How can he feel accomplished, or masculine, or empowered? He definitely can't rely on God or religion for that feeling. If he tries, he'll be overwhelmed by relentless mockery and cynicism from his society.

Your grocery clerk has failed to achieve social status in a world where that was ostensibly possible, where society inculcated a belief that he should pursue it, and where he did, in fact, invest considerable effort in pursuing it, in the form of 17 years of formal education.

On top of this, he has to contend with the fact that modern societies have broken down all formal and most informal barriers to mixing across status levels and have eliminated any material requirement for women to marry. As has been discussed ad nauseam at this point, in combination with female hypergamy this is very detrimental to his prospects with the opposite sex.

A final consideration is, to borrow a Marxist term, alienation of labor. Your clerk's job does produce value, but that value isn't some tangible thing. It's a benefit to the store in higher throughput or better loss prevention vs. self-checkout, on a spreadsheet he'll never see and doesn't care about because he has no ownership stake in the enterprise.

So, your grocery clerk is probably mostly sexless, and feels like an underachiever performing meaningless work, where, say, a medieval peasant farmer at the same age would be married, would have precisely the status society told him he would and should have, and would be engaged in work that directly, physically provided for an essential material need of his wife, his children, his aging parents. It's this difference, much more than any lack of a connection with the divine, that results in his dissatisfaction.

The brain has an internal representation of the body — some tangle of neurons, presumably — that can be out of sync with the body's actual physical state. We see this pretty clearly with e.g. phantom limb syndrome.

There's no philosophical challenge for materialism here; both the brain's representation of the body and the body itself are entirely physical, as both a paper map and the territory it represents are entirely physical.

It seems worth mentioning that although trying to have general-purpose LLMs one-shot code might well be a handy benchmark of how close those LLMs are to AGI, it's a far cry from the state of the art in AI code generation. AlphaCode 2 performs at the 85th percentile vs. human competitors despite using a base model inferior to GPT-4, by using a fine-tuned variant of that model in combination with scaffolding to help it break down problems into smaller parts and generate and select among many candidate solutions.
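The generate-and-select scaffolding can be sketched in miniature. Everything below is invented illustration, not AlphaCode's actual machinery: `toy_model`, `best_of_n`, and the 10% success rate are hypothetical stand-ins. The core idea is just over-generate candidates, filter by test cases, keep a survivor:

```python
import random

def sample_candidates(model, n):
    # Draw n independent candidate programs from the model.
    return [model() for _ in range(n)]

def passes(candidate, tests):
    # A candidate survives only if it matches every (input, expected) pair.
    try:
        return all(candidate(x) == y for x, y in tests)
    except Exception:
        return False

def best_of_n(model, tests, n=50, seed=0):
    # Scaffolding: over-generate, filter against the tests, return any survivor.
    random.seed(seed)
    survivors = [c for c in sample_candidates(model, n) if passes(c, tests)]
    return survivors[0] if survivors else None

def toy_model():
    # Stand-in "code model": emits a correct program only ~10% of the time.
    return random.choice([lambda x: x * x] + [lambda x: x + x] * 9)

tests = [(2, 4), (3, 9), (5, 25)]
solution = best_of_n(toy_model, tests)
```

Even with a weak base model, sampling 50 candidates makes the probability that none passes the filter tiny (~0.9^50 ≈ 0.5%), which is the whole point of the scaffolding.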

I've fixed the backup issue and set up better monitoring so it will yell at me if it fails again.

Important backups should also send notifications on success. Notification only on failure risks a scenario where both the backup and the notifications fail.

To be even safer, the script that sends the success notification should pull some independent confirmation the backup actually occurred, like the output of ls -l on the directory the database dumps are going to, and should include this in the notification text. Without this, a 'success' email only technically means that a particular point in a script was reached, not that a backup happened.
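As a concrete illustration of that last point, here's a minimal Python sketch. The `backup_report` function and the flat dump-directory layout are my invention, not any particular backup tool's API; the idea is that the notification body embeds an `ls -l`-style listing as independent evidence that dump files actually exist and are fresh:

```python
import os
import tempfile
import time

def backup_report(dump_dir, max_age_hours=24):
    # Build a notification body that carries independent evidence the
    # backup ran: a listing of the dump files with sizes and mtimes.
    entries, newest = [], 0.0
    for name in sorted(os.listdir(dump_dir)):
        st = os.stat(os.path.join(dump_dir, name))
        newest = max(newest, st.st_mtime)
        stamp = time.strftime("%Y-%m-%d %H:%M", time.localtime(st.st_mtime))
        entries.append(f"{name}  {st.st_size} bytes  {stamp}")
    # "OK" requires at least one dump file modified within the window.
    fresh = entries and newest >= time.time() - max_age_hours * 3600
    status = "OK" if fresh else "STALE OR MISSING"
    return "Backup status: " + status + "\n" + "\n".join(entries)

# Demo: a directory containing one fresh dump should report OK.
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, "db-dump.sql.gz"), "wb") as f:
    f.write(b"\x00" * 128)  # stand-in for a real database dump
report = backup_report(demo_dir)
```

A success email built this way means "a recent dump file exists," not merely "a line in a script was reached," which is exactly the distinction the comment above draws.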

Of the three things banned by the Texas bill, there’s no issue at all with two. DEI departments, and compelling (profession of) belief under implicit threat of failing a class, are not forms of free speech. They’re means of enforcing ideological conformity through institutional power. They have as much right to exist under the principles of free expression as Orwell's Ministry of Truth. If woke professors or laid-off DEI employees want to promote their views by, say, handing out fliers in the hallways, that's fine.

Banning tenure is a little more questionable, but even here it’s not so clear where advocates of free expression should land. This isn’t a straightforward case of tenure being banned so that the establishment can censor antiestablishment views. It's being banned, rather, by one group with institutional power (political leaders) to try to stop another group with institutional power (professors) from indoctrinating students into the dominant elite ideology. This is historically unusual because, of course, in most times and places political leaders support the dominant elite ideology.

There used to be a futurist transhumanism strain here that was more optimistic and trans-positive that has either been driven off or converted to conservative trad thinking, which is a shame.

Futurist transhumanist here. I have no objection to gender transition in principle. If I lived in The Culture and could switch literally at will, I'd probably try it for a while despite being quite comfortable as a straight, gender-conforming (nerd subtype), cis male.

However, the reality is that medical transition at the current level of technology is dangerous, expensive, irreversible, often unconvincing, and can have life-altering side-effects like sterility or permanent dependence on elaborate medical intervention. Medical transition flows from trans identity. Against this dark background, promoting the concept of trans identity, rather than simple acceptance of gender non-conformity, is irresponsible. Promoting this concept to minors as if cis and trans are just two equal choices (or trans is even better — braver, more special, etc.), is wildly irresponsible.

The fact that such a large fraction of people who present at gender transition clinics have serious mental health conditions should be a huge red flag here. A lot of people will likely choose to be thinner in a transhumanist future, but that doesn't make me want to celebrate bulimics as transhumanist pioneers.

On top of this, we've got the social demands of the trans movement. The insistence that e.g. someone who appears male and has male-typical physical abilities must nonetheless be recognized in all social respects as female doesn't fall out of technological transhumanism. I would go so far as to say it's at least somewhat at odds with it. Technological transhumanism is deeply materialist and concerned with physical intervention in the human condition. The primacy the present trans movement places on some inner essence of self-identity, incongruent with physical reality, doesn't sit comfortably within such a framework.

Open models, data sets, and training/inference code have become a pretty big thing. In general e/acc is highly favorable toward this.

There are services that help automate treasury management for smaller companies now, like Vesto.

Until last year T-Bills were paying ~nothing, and it had been that way since 2008, an eternity in the startup world. There was no direct financial incentive to do anything more complicated than park your money in a checking account. Sure, ideally everyone should have been actively managing things to hedge against bank failure, but startups have a zillion things to worry about. SVB's pitch was basically that they were experts on startup finance and would relieve you of having to worry about this yourself. The social proof of these claims was impeccable.

So, yes, many startups screwed up. It turns out that safeguarding $20M isn't entirely trivial. But it's a very predictable sort of screwup. There wasn't really anyone within their world telling them this, it wasn't part of the culture, nobody knew anyone who had been burned by it.

And, well, maybe it should be trivial to safeguard $20M? "You have to actively manage your money or there's a small chance it might disappear" is actually a pretty undesirable property for a banking system to have. The fact that it's true in the first place is a consequence of an interlocking set of government policies — the Fed doesn't allow "narrow banks" (banks that just hold your money in their Fed master accounts rather than doing anything complicated with it) and offers no central bank digital currency (so the only way to hold cash that's a direct liability of the government is to hold actual physical bills). Meanwhile the FDIC only guarantees coverage of up to $250K, a trivial amount by the standards of a business.

The net result of these policies is that the government is effectively saying "If you want to hold dollars in a practical liquid form you have to hold them in a commercial bank. We require that bank to engage in activities that carry some level of risk. We'll try to regulate that bank to make sure it doesn't blow up, but if we fail, that's your problem."

"WTF?" is a reasonable response to this state of affairs. If these companies had had the option to put their money into a narrow bank or hold it as a direct liability of the government, but had nonetheless chosen to trust it to a private bank because they were chasing higher returns, I'd have zero sympathy for them. But our system declines to make those safer options available.

The idea of running your OS in the cloud is the same old "thin client" scheme that has been the Next Big Thing for 40 years. Ever since PCs started replacing terminals, some people have been convinced we must RETVRN.

The thin client approach seems appealing for two reasons. First, it centralizes administration. Second, it allows shared use of pooled computing resources. In practice, neither of these quite works.

A platform like iOS or modern macOS actually imposes almost no per-device administrative overhead. System and app updates get installed automatically. Devices can be configured and backed up remotely. The OS lives on a "sealed" system volume where it's extremely unlikely to be compromised or corrupted. There's still some per-user administrative overhead — the configuration of a particular user's environment can be screwy — but a cloud-based OS still has per-user state, so does nothing to address this.

Pooling resources is great for cases where you want access to a lot of resources, but there's no need to go full-cloud for this. Devices that run real operating systems can access remote resources just fine. The benefit of going full-cloud is hypothetically that your end-user devices can be cheaper if they don't need the hardware to run a full OS... but the cost difference between the hardware required by a thin client and the hardware required to run a full OS is now trivial.

Meanwhile, the thin client approach will always be hobbled by connectivity, latency, bandwidth, and privacy concerns. Connectivity is especially critical on mobile, where Apple makes most of its money. Latency is especially critical in emerging categories like VR/AR, where Apple is looking to expand.

The future is more compute in the cloud and more compute at the edge. There's no structural threat to Apple here.

People on both the left and the right who are interpreting this as a strike against "leftists" or "journalists" are missing the plot, I think. Musk freaking out after some whacko followed a car with his kid in it is not great, it's not how policy should be made at Twitter, but it's not a mirror of the sort of deliberate viewpoint censorship that Twitter previously practiced. It's just not the same category of thing.

If I wanted to see memes of aichads owning artcels, where would I go? It’s really important for my mental health.

Isn't this one of those "I don't think about you at all" situations? There are many communities producing and sharing AI art without a care in the world for the people who are angry about it.

Yes. Because of, I'm pretty sure, parking.

Once a system gets bad enough, everyone with resources or agency stops using it, and then stops caring about it, leaving nobody who can effectively advocate for improvement. But, of course, this can only play out if there's a viable alternative. In most cities, cars are that alternative, even despite traffic. People are evidently willing to sit in horrible stop-and-go traffic in order to avoid using even mildly unpleasant mass transit.

What they're not willing to do, apparently, is sit in horrible stop-and-go traffic and then have to spend 45 minutes looking for an on-street parking space that might end up being half a mile from their destination. That's the situation in NYC, which, unusually for the US, has no parking space minimums for businesses or residences and so effectively has zero free parking lots. If you want to practically substitute car travel for subway travel in NYC, you need to take Uber everywhere or use paid lots. Either option is sufficiently expensive (easily upwards of $10K/year) that even most of the upper middle class opts for the subway.

It's worth keeping an eye on this, because self-driving cars could completely disrupt it, either by dropping taxi prices 50% or more or by allowing cars to drop off their owners and then go find parking on their own.

You're applying mistake theory reasoning to a position mostly held by conflict theorists. I'm not aware of a paper previously addressing this exact issue, but there have been several over the years that looked at adjacent "problems," such as women being underrepresented in computer science, and that came to similar conclusions — it's mostly lack of interest, not sexism.

In that case, the explanation has been developed even further: the lack of interest is largely mediated by differences along the "people/things" axis, women tend to be more people-oriented and men more thing-oriented cross-culturally, and differences in career choice are actually larger in more gender-egalitarian societies (probably because those societies also tend to be richer, so career decisions are driven more by interest than by income considerations).

Activists using the lack of women in computing to argue for industry sexism don't care. They continue to make their case as if none of these findings exist. When these findings are mentioned, the usual response is to call whoever points them out sexist, usually while straw-manning even the most careful claims about interest as claims about inferiority. If the discussion is taking place in a venue where that isn't enough to shut down debate, and the activists feel compelled to offer object-level argument, they'll insist that the lack of interest (which some data suggests starts at least as early as middle school) must itself somehow be downstream from industry sexism.

You'll see exactly the same thing happen here. Activists demanding more women in leadership positions will not update on these findings. Most will never hear of them, because they certainly won't circulate in activist communities. When these findings are presented, their primary response will be to throw around accusations of sexism. If they engage at the object level at all, it will be to assert that these findings merely prove pervasive sexism in society is conditioning women to be less interested in leadership.

Charitably, activists in these areas see 'equity' (i.e. equality of outcomes between groups of concern) as a valuable end in itself. Less charitably, they're simply trying to advantage themselves or their favored identity groups over others. Either way, they're not trying to build an accurate model of reality and then use that model to optimize for some general goal like human happiness or economic growth. So findings like this simply don't matter to them.

That's a lovely theory, but when it's people like the above doing it, their attitude will be "Yeah, sure, whatever," and they will prefer playing with the shiny new toy to vague premonitions of societal something-or-other.

This tweet is a succinct summary:

Pre-2008: We’ll put the AI in a box and never let it out. Duh.

2008-2020: Unworkable! Yudkowsky broke out! AGI can convince any jail-keeper!

2021-2022: yo look i let it out lol

2023: Our Unboxing API extends shoggoth tentacles directly into your application [waitlist link]

It's clear at this point that no coherent civilizational plan will be followed to mitigate AI x-risk. Rather, the "plan" seems to be to move as fast as possible and hope we get lucky. Well, good luck everyone!

To feel magnetic lines as delicately as I can feel a breath disturb the little hairs on my arms.

This one can (sort of) be arranged:

Magnetic implant is an experimental procedure in which small, powerful magnets (such as neodymium) are inserted beneath the skin, often in the tips of fingers. [...] The magnet pushes against magnetic fields produced by electronic devices in the surrounding area, pushing against the nerves and giving a "sixth sense" of magnetic vision.

DEI nonsense probably had something to do with this, but mostly it looks like plain old "innovator's dilemma" stuff. Fear of self-disruption.

Google makes most of its money from search. Search has a property that makes it an especially valuable segment of the ad market — showing an ad for X to someone specifically searching for X right now (that is, who has purchase intent) is many times more effective than showing an ad to someone who some algorithm guesses might be the sort of person who might have an interest in X (e.g. what Facebook mostly has to settle for).

Conversational AI potentially pulls users away from search, and it's not clear it really has a direct equivalent of that property. Sure, people might use conversational AI to decide what products to buy, and it should be able to detect purchase intent, but exactly what do you do with that, and how effective is it?

It's not hard to generate high-level ideas here, but none are proven. Search and conversation have different semantics. User expectations will differ. "Let advertisers pay to have the AI recommend their products over others," for instance, might not be tolerated by users, or might perform worse than search ads do for some reason. I don't know. Nobody does. Product-market fit is non-trivial (the product here being the ads).

On top of this, LLMs require a lot more compute per interaction than search.

So in pushing conversational AI, Google would have been risking a proven, massively profitable product in order to bring something to market that might make less money and cost more to run.

Now, this was probably the right choice. You usually should self-disrupt, because of exactly what's happened here — failing to do so won't actually keep the disruptive product off the market, it'll just let someone else get there first. But it's really, really hard in most corporate cultures to actually pull the trigger on this.

Fortunately for Google, they've split the difference here. While they didn't ship a conversational AI product, they did develop the tech, so they can ship a product fairly quickly. They now have to fend off competition that might not even exist if they'd shipped 18 months ago, but they're in a fairly strong position to do so. Assuming, of course, the same incentives don't also cause them to slow-walk every iterative improvement in this category.

Technology has already unbundled sex and reproduction from long-term relationships, the former via porn, sex toys, contraceptive-enabled hookups, the latter via sperm/egg donation and surrogates. Schools and professional childcare can stand in for a co-parent to a substantial extent. Now LLMs will be able to simulate sustained emotional intimacy, plus you can ask them for advice, bounce ideas off of them, etc. as you would a human life partner.

That's pretty much the whole bundle of "goods and services" in a marriage-like relationship, every component now (or soon) commoditized and available for purchase in the marketplace. Perhaps quality is still lacking in some cases, but tech is far from done improving — the next decades will bring VR porn, sexbots, artificial wombs, robots that can help around the house, and more convincing chatbots.

I legitimately can't decide whether this is all deeply dystopian, or is an improvement in the human condition on the same scale as the ~300x gains in material wealth wrought by industrialization. Maybe both, somehow.

The dystopian angle is obvious. On the other side, however, consider how much human misery results from people not having access to one or more of the goods in the "marriage bundle" at the quality or in the quantity they desire. Maybe most of it, in rich countries. We're not just talking about incels. Many people who have no problem getting into relationships nonetheless find those relationships unsatisfying in important ways. Bedrooms go dead. People have fewer kids than they want. People complain their partners don't pull their weight around the house or aren't emotionally supportive. 50% of marriages end in divorce, which is bad enough to be a major suicide trigger, especially for men. Plus your partner might just up and die on you; given differences in lifespan and age at marriage, this is the expected outcome for women who don't get divorced first.

The practice of putting all your eggs in one other person's basket in order to have a bunch of your basic needs met long-term turns out poorly rather distressingly often. Maybe offering more alternatives is good, actually.

As for the fact that LLMs almost certainly lack qualia, let alone integrated internal experience, I predict some people will be very bothered by this, but many just won't care at all. Either they'll find the simulation convincing enough that they don't believe it, or the question just won't be philosophically significant to them. This strikes me as one of those things like "Would Trek-style transporters kill you and replace you with an exact copy, and would it matter if they did?" where people seem to have wildly different intuitions and can't be argued around.

I don't often see people mentioning that IQ differences shouldn't imply differences in moral worth -- which suggests to me that many people here do actually have an unarticulated, possibly subconscious, belief that this is the case.

Yes, but not only IQ differences. The belief that some people have more moral worth than others is quietly common. Most people, in whatever contrived hypothetical situation we'd like to pose, would save a brilliant scientist, or a professional basketball player, or a supermodel, over someone dumb, untalented, and unattractive.

This sort of thing does not, without much more, imply genocide or eugenics. (Though support for non-coercive forms of eugenics is common around here and also quietly pretty mainstream where it's practicable and therefore people have real opinions rather than opinions chosen entirely for signaling value. The clearest present-day example is when clients of fertility clinics choose sperm or egg donors.)

Just as a point of clarification, it's Halle Bailey who's playing Ariel in The Little Mermaid, not Halle Berry. The latter is 56; casting her to play a character who's canonically 16, and whose teenage naivety and rebelliousness are her main personality traits, would provoke a whole different culture war fracas. (Bailey is 22, and 22 playing 16 isn't unusual by Hollywood standards.)

What I'm curious to see is what they're going to do with the plot. The prince falling in love with a mute Ariel on the basis of her physical appearance and friendly, accommodating behavior, seems deeply problematic by present woke standards.