KolmogorovComplicity

1 follower   follows 0 users   joined 2022 September 04 19:51:16 UTC

User ID: 126


If you want to use money to incentivize something requiring at least as much effort as full-time employment, you should expect to have to compensate people on a similar scale. As far as I know, no policy has come anywhere close to this yet. Before writing off carrots, try paying families 30-50% of the median personal income for each kid, every year, for the kid's entire period of minority. See what happens.

(I know, nobody wants to model parenting this way, because we like to believe it's some sacred endeavor set apart from crass commerce. But the reality is that it's in competition with the market for labor-hours, and it's in competition with everything supplied by the market as a source of utility. It benefits little from automation, so it's subject to cost disease, and every year it becomes a little less attractive relative to alternatives that aren't.)
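For a rough sense of what compensating on that scale would mean, here's a back-of-the-envelope sketch; the ~$40K median personal income figure is an assumption, roughly the recent US number, so substitute your own country and year:

```python
# Back-of-the-envelope: what 30-50% of median personal income per child
# actually costs. The $40K median figure is an assumption (roughly the
# recent US number), not a quoted statistic.
median_income = 40_000
years_of_minority = 18

for share in (0.30, 0.50):
    annual = share * median_income
    print(f"{share:.0%}: ${annual:,.0f}/year per child, "
          f"${annual * years_of_minority:,.0f} over {years_of_minority} years")
```

That works out to something like $200K-$360K per child over the whole period, which is the kind of number "compensate people on a similar scale to employment" actually implies.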

BlackBerry's market cap peaked the year after the iPhone was introduced, and it took the market three or four years to really see the writing on the wall. The market still doesn't quite get tech disruption.

LLMs aren't going to remain distinct products that people have to seek out. They'll be integrated into platforms, and the natural starting point for any task, information retrieval included, will just be talking to your device. Many older people (and a surprising number of younger people, honestly) have never managed to form coherent mental models of current software UI, and thus commonly struggle to perform new or complex tasks. They'll greatly prefer this.

Most developed countries have laws that would prevent surreptitious product promotion in LLM responses. It's very possible LLMs will be harder to monetize than search, but Google isn't in a position to prevent their adoption, so that's just further bad news for them. They're essentially forced to enter this market, so others don't eat their lunch, but may be worse off than they are now even if they win it.

Beavers are a pretty good fit. They claim and defend territory, they build, and they live in nuclear families, eschewing larger collectives.

I've fixed the backup issue and set up better monitoring so it will yell at me if it fails again.

Important backups should also send notifications on success. Notification only on failure risks a scenario where both the backup and the notifications fail.

To be even safer, the script that sends the success notification should pull some independent confirmation the backup actually occurred, like the output of ls -l on the directory the database dumps are going to, and should include this in the notification text. Without this, a 'success' email only technically means that a particular point in a script was reached, not that a backup happened.
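A minimal sketch of what that could look like in Python; the paths, recipient address, and local SMTP server are hypothetical placeholders:

```python
# Minimal sketch: send a success notification that includes independent
# evidence the backup actually exists, rather than trusting that "we
# reached this line of the script" means success.
import smtplib
import subprocess
from email.message import EmailMessage

BACKUP_DIR = "/var/backups/db"      # hypothetical dump directory
NOTIFY_ADDR = "admin@example.com"   # hypothetical recipient

def send_backup_report() -> None:
    # Independent confirmation: a directory listing of the dumps,
    # included verbatim in the notification text.
    listing = subprocess.run(
        ["ls", "-l", BACKUP_DIR],
        capture_output=True, text=True, check=True,
    ).stdout

    msg = EmailMessage()
    msg["Subject"] = "Backup report"
    msg["From"] = NOTIFY_ADDR
    msg["To"] = NOTIFY_ADDR
    msg.set_content(
        "Backup script finished. Current contents of the dump directory:\n\n"
        + listing
    )

    with smtplib.SMTP("localhost") as smtp:  # assumes a local MTA is running
        smtp.send_message(msg)

if __name__ == "__main__":
    send_backup_report()
```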

It seems worth mentioning that although trying to have general-purpose LLMs one-shot code might well be a handy benchmark of how close those LLMs are to AGI, it's a far cry from the state of the art in AI code generation. AlphaCode 2 performs at the 85th percentile against human competitors despite using a base model inferior to GPT-4; it does so with a fine-tuned variant of that model in combination with scaffolding that helps it break down problems into smaller parts and generate and select among many candidate solutions.
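A toy sketch of the generate-and-select idea, not AlphaCode 2's actual pipeline; `sample_candidate`, `passes_tests`, and `score` are hypothetical stand-ins for a model call, a test harness, and a ranking heuristic:

```python
# Toy generate-and-select loop: sample many candidate programs, keep the
# ones that pass the available tests, then pick one to submit.
import random
from typing import Callable

def generate_and_select(
    sample_candidate: Callable[[], str],   # stand-in for an LLM call
    passes_tests: Callable[[str], bool],   # stand-in for running example tests
    score: Callable[[str], float],         # stand-in for a ranking heuristic
    n_samples: int = 1000,
) -> str | None:
    candidates = [sample_candidate() for _ in range(n_samples)]
    survivors = [c for c in candidates if passes_tests(c)]
    if not survivors:
        return None
    # Select among surviving candidates; a real system might cluster by
    # behavior or use a learned ranker, here we just take the top score.
    return max(survivors, key=score)

# Usage with dummy stand-ins:
if __name__ == "__main__":
    pool = ["print(1+1)", "print(3)", "print(2)"]
    best = generate_and_select(
        sample_candidate=lambda: random.choice(pool),
        passes_tests=lambda src: "print" in src,
        score=lambda src: -len(src),
        n_samples=10,
    )
    print("selected:", best)
```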

If I wanted to see memes of aichads owning artcels, where would I go? It’s really important for my mental health.

Isn't this one of those "I don't think about you at all" situations? There are many communities producing and sharing AI art without a care in the world for the people who are angry about it.

The primary reason to buy name brands isn't quality per se, but predictability. The same name brands are available nationwide, and while they do sometimes change their formulations, they tend to do so infrequently and carefully. A given generic brand is often not available everywhere (many are store-specific), stores/chains may vary which generics they carry over time, and even within a single generic brand there tends to be less focus on consistency, because what's the point in prioritizing that if you haven't got a well-known brand people have very specific expectations of?

People don't want to roll the dice on every purchase. Will this ketchup be too acidic? Will these cornflakes be a little gritty? They're willing to pay a dollar or three more to reliably get the thing they expect.

One of the Satanic Temple's causes is separation of church and state, and I expect part of what they're trying to do here is cause governments to decide it's too much trouble to allow holiday displays on public property at all. Vandalism of their displays, or Christians also using such displays in deliberately inflammatory ways, both make it more likely they'll get that outcome.

Meanwhile, I don't think the ideological faction represented by the Satanic Temple would actually care very much about the content of your proposed displays. If anyone did dramatically tear such a display down, it would almost certainly be some progressive activist, a distinctly different faction.

To feel magnetic lines as delicately as I can feel a breath disturb the little hairs on my arms.

This one can (sort of) be arranged:

Magnetic implant is an experimental procedure in which small, powerful magnets (such as neodymium) are inserted beneath the skin, often in the tips of fingers. [...] The magnet pushes against magnetic fields produced by electronic devices in the surrounding area, pushing against the nerves and giving a "sixth sense" of magnetic vision.

The brain has an internal representation of the body — some tangle of neurons, presumably — that can be out of sync with the body's actual physical state. We see this pretty clearly with e.g. phantom limb syndrome.

There's no philosophical challenge for materialism here; both the brain's representation of the body and the body itself are entirely physical, as both a paper map and the territory it represents are entirely physical.

A fairly likely outcome is that the crazier edges of SJ will be filed off as media/political elites find they've become a liability, and the average member of Blue Tribe will simply follow along as when The Science switched from "masks don't work" to "you're a monster if you don't wear a mask on the beach." There won't be any great reckoning followed by explicit adoption of a new ideology. Any SJ gains that can fit within the "tolerance" model of '90s-style liberalism will be retained. Some true believers will carry on with the craziness, but institutions will mostly stop listening to them.

We may have just seen the start of this pivot. That's Fareed Zakaria on CNN yesterday succinctly laying out the situation on American college campuses, explicitly calling out DEI, racial quotas, the response to Floyd, the degrees in fake subjects, the implications of eliminating the SAT. The average member of Blue Tribe has never previously been presented with this narrative from a source they felt obligated to pay attention to; if Blue Tribe media now widely takes it up (which remains to be seen), it will be very easy for them to respond with "Huh, didn't know that was going on, obviously we should fix it."

Open models, data sets, and training/inference code have become a pretty big thing. In general e/acc is highly favorable toward this.

How is a young man in his twenties, armed with a useless college degree and forced to work at a supermarket to get by, supposed to find purpose in what he's doing? How can he feel accomplished, or masculine, or empowered? He definitely can't rely on God or religion for that feeling. If he tries, he'll be overwhelmed by relentless mockery and cynicism from his society.

Your grocery clerk has failed to achieve social status in a world where that was ostensibly possible, where society inculcated a belief that he should pursue it, and where he did, in fact, invest considerable effort in pursuing it, in the form of 17 years of formal education.

On top of this, he has to contend with the fact that modern societies have broken down all formal and most informal barriers to mixing across status levels and have eliminated any material requirement for women to marry. As has been discussed ad nauseam at this point, in combination with female hypergamy this is very detrimental to his prospects with the opposite sex.

A final consideration is, to borrow a Marxist term, alienation of labor. Your clerk's job does produce value, but that value isn't some tangible thing. It's a benefit to the store in higher throughput or better loss prevention vs. self-checkout, on a spreadsheet he'll never see and doesn't care about because he has no ownership stake in the enterprise.

So, your grocery clerk is probably mostly sexless, and feels like an underachiever performing meaningless work, where, say, a medieval peasant farmer at the same age would be married, would have precisely the status society told him he would and should have, and would be engaged in work that directly, physically provided for an essential material need of his wife, his children, his aging parents. It's this difference, much more than any lack of a connection with the divine, that results in his dissatisfaction.

The idea of running your OS in the cloud is the same old "thin client" scheme that has been the Next Big Thing for 40 years. Ever since PCs started replacing terminals, some people have been convinced we must RETVRN.

The thin client approach seems appealing for two reasons. First, it centralizes administration. Second, it allows shared use of pooled computing resources. In practice, neither of these quite works.

A platform like iOS or modern macOS actually imposes almost no per-device administrative overhead. System and app updates get installed automatically. Devices can be configured and backed up remotely. The OS lives on a "sealed" system volume where it's extremely unlikely to be compromised or corrupted. There's still some per-user administrative overhead — the configuration of a particular user's environment can be screwy — but a cloud-based OS still has per-user state, so does nothing to address this.

Pooling resources is great for cases where you want access to a lot of resources, but there's no need to go full-cloud for this. Devices that run real operating systems can access remote resources just fine. The benefit of going full-cloud is hypothetically that your end-user devices can be cheaper if they don't need the hardware to run a full OS... but the cost difference between the hardware required by a thin client and the hardware required to run a full OS is now trivial.

Meanwhile, the thin client approach will always be hobbled by connectivity, latency, bandwidth, and privacy concerns. Connectivity is especially critical on mobile, where Apple makes most of its money. Latency is especially critical in emerging categories like VR/AR, where Apple is looking to expand.

The future is more compute in the cloud and more compute at the edge. There's no structural threat to Apple here.

Of the three things banned by the Texas bill, there’s no issue at all with two. DEI departments, and compelling (profession of) belief under implicit threat of failing a class, are not forms of free speech. They’re means of enforcing ideological conformity through institutional power. They have as much right to exist under the principles of free expression as Orwell's Ministry of Truth. If woke professors or laid-off DEI employees want to promote their views by, say, handing out fliers in the hallways, that's fine.

Banning tenure is a little more questionable, but even here it’s not so clear where advocates of free expression should land. This isn’t a straightforward case of tenure being banned so that the establishment can censor antiestablishment views. It's being banned, rather, by one group with institutional power (political leaders) to try to stop another group with institutional power (professors) from indoctrinating students into the dominant elite ideology. This is historically unusual because, of course, in most times and places political leaders support the dominant elite ideology.

Manual labor jobs are more resistant to GPT-4 than email jobs are, but they're not meaningfully resistant to actual AGI. A lot of the incapacity of our current robotics tech is on the software side, which AGI definitionally fixes. Advanced robots are presently expensive primarily because they're low-volume specialty items, which won't be true if smarter software suddenly allows them to perform far more tasks. A few years later you'll have robots building more robots with no human labor input, an exponential process which leads to hilarious outcomes like economic output doubling every month or two.
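To make the arithmetic concrete (purely illustrative, just taking "doubling every month or two" at face value):

```python
# Purely illustrative: compound growth if output doubles every one or two
# months, vs. the ~2-3% per year of a normal modern economy.
for doubling_months in (1, 2):
    multiplier = 2 ** (12 / doubling_months)
    print(f"Doubling every {doubling_months} month(s): "
          f"~{multiplier:,.0f}x output after one year")
```

That's a 64x to 4,096x expansion in a single year, which is why the next paragraph argues the right reference class is the industrial revolution, not a tax tweak.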

This isn't just a matter of tweaking some tax policies. Our reference class for something like AGI should be more like the transition into industrial capitalism, except much faster, and on a much larger absolute scale. Humans may survive; I'm not entirely persuaded by arguments to the contrary. Existing forms of social organization almost certainly won't. Thinking we'll fix this up with UBI or public works employment or even Fully Automated Luxury Communism is like a feudal king thinking he'll deal with industrial capitalism by treating factories like farmland and handing them out to loyal vassals.

That's a lovely theory, but when it's being done by people like the above, then their attitude will be "Yeah, sure, whatever" and they will prefer playing with the shiny new toy to vague premonitions of societal something-or-other.

This tweet is a succinct summary:

Pre-2008: We’ll put the AI in a box and never let it out. Duh.

2008-2020: Unworkable! Yudkowsky broke out! AGI can convince any jail-keeper!

2021-2022: yo look i let it out lol

2023: Our Unboxing API extends shoggoth tentacles directly into your application [waitlist link]

It's clear at this point that no coherent civilizational plan will be followed to mitigate AI x-risk. Rather, the "plan" seems to be to move as fast as possible and hope we get lucky. Well, good luck everyone!

Yes. Because of, I'm pretty sure, parking.

Once a system gets bad enough, everyone with resources or agency stops using it, and then stops caring about it, leaving nobody who can effectively advocate for improvement. But, of course, this can only play out if there's a viable alternative. In most cities, cars are that alternative, even despite traffic. People are evidently willing to sit in horrible stop-and-go traffic in order to avoid using even mildly unpleasant mass transit.

What they're not willing to do, apparently, is sit in horrible stop-and-go traffic and then have to spend 45 minutes looking for an on-street parking space that might end up being half a mile from their destination. That's the situation in NYC, which, unusually for the US, has no parking space minimums for businesses or residences and so effectively has zero free parking lots. If you want to practically substitute car travel for subway travel in NYC, you need to take Uber everywhere or use paid lots. Either option is sufficiently expensive (easily upwards of $10K/year) that even most of the upper middle class opts for the subway.
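Rough numbers behind that estimate; these prices are ballpark assumptions, not quoted rates:

```python
# Ballpark assumptions, not quoted rates: what replacing the subway with a
# car looks like in Manhattan once parking is never free.
workdays = 250
garage_per_month = 600     # assumed monthly garage rate
uber_round_trip = 60       # assumed cost of two rideshare trips per workday

print(f"Monthly garage: ~${garage_per_month * 12:,}/year, "
      f"plus the car itself, insurance, gas, and tolls")
print(f"Uber commuting: ~${uber_round_trip * workdays:,}/year")
```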

It's worth keeping an eye on this, because self-driving cars could completely disrupt it, either by dropping taxi prices 50% or more or by allowing cars to drop off their owners and then go find parking on their own.

both will stay incredibly low-status.

The thing is, there's a whole framework in place now for fighting this. Being gay used to be incredibly low-status. Being trans used to be incredibly low-status. Poly, kink, asexuality, etc. The dominant elite culture now says you're required to regard these as neutral at worst, and ideally as brave examples of self-actualization.

The robosexuals are absolutely going to try to claim a place within this framework and demand that people respect their preferences. Elite sexual morality has, at least formally, jettisoned every precept except consent, and there's not much of an argument against this on that basis.

Probably for reasons similar to yours: status tends to play a somewhat bigger role in women's mate choices than in men's, and synths will always be very low status.

Hmm. What if there are designer models of synth that you have to know the right people to get? Would a woman conceptualize such a synth as high status in its own right, or merely as a reflection of her own status? Maybe if the designer is a high-status man and hand-picks which women can have the synths he designs, some of his status transfers to those synths?

There are some weird, unexplored corners to this issue.

Men tend to like sexual variety, so I'd expect even if the synths are pretty mind-blowing, most men will still be willing to sleep with real women just for a change of pace.

Whether they'll be able to have emotionally intimate relationships with real women is another matter, but if anything I'd be more concerned about that in the other direction. Women often complain that men aren't as emotionally expressive or supportive as they'd prefer. A GPT-4-class LLM that had been RLHF'ed into playing the male lead from a romance novel might already achieve superhuman performance on this task.

Technology has already unbundled sex and reproduction from long-term relationships, the former via porn, sex toys, contraceptive-enabled hookups, the latter via sperm/egg donation and surrogates. Schools and professional childcare can stand in for a co-parent to a substantial extent. Now LLMs will be able to simulate sustained emotional intimacy, plus you can ask them for advice, bounce ideas off of them, etc. as you would a human life partner.

That's pretty much the whole bundle of "goods and services" in a marriage-like relationship, every component now (or soon) commoditized and available for purchase in the marketplace. Perhaps quality is still lacking in some cases, but tech is far from done improving — the next decades will bring VR porn, sexbots, artificial wombs, robots that can help around the house, and more convincing chatbots.

I legitimately can't decide whether this is all deeply dystopian, or is an improvement in the human condition on the same scale as the ~300x gains in material wealth wrought by industrialization. Maybe both, somehow.

The dystopian angle is obvious. On the other side, however, consider how much human misery results from people not having access to one or more of the goods in the "marriage bundle" at the quality or in the quantity they desire. Maybe most of it, in rich countries. We're not just talking about incels. Many people who have no problem getting into relationships nonetheless find those relationships unsatisfying in important ways. Bedrooms go dead. People have fewer kids than they want. People complain their partners don't pull their weight around the house or aren't emotionally supportive. 50% of marriages end in divorce, which is bad enough to be a major suicide trigger, especially for men. Plus your partner might just up and die on you; given differences in lifespan and age at marriage, this is the expected outcome for women who don't get divorced first.

The practice of putting all your eggs in one other person's basket in order to have a bunch of your basic needs met long-term turns out poorly rather distressingly often. Maybe offering more alternatives is good, actually.

As for the fact that LLMs almost certainly lack qualia, let alone integrated internal experience, I predict some people will be very bothered by this, but many just won't care at all. They'll either find the simulation convincing enough that they simply don't believe the claim, or the absence just won't be philosophically significant to them. This strikes me as one of those things like "Would Trek-style transporters kill you and replace you with an exact copy, and would it matter if they did?" where people seem to have wildly different intuitions and can't be argued around.

Okay, now I have to try some of the prompts from the "Trurl's Electronic Bard" story in Lem's The Cyberiad out on Bing. Well, from the English translation.

Prompt:

Compose a poem—a poem about a haircut! But lofty, noble, tragic, timeless, full of love, treachery, retribution, quiet heroism in the face of certain doom! Six lines, cleverly rhymed, and every word beginning with the letter s!

Bing response:

Shears snip swiftly, slicing strands

Salon smells sour, stings senses

Stylist smiles smugly, says "surprise!"

Shocked, she shrieks, shuns scissors

Sorrowful, she sobs, seeks solace

Someday, she'll sue, secure satisfaction

Wow. That doesn't follow every instruction and it doesn't really hold a candle to what the machine writes in the story:

Seduced, shaggy Samson snored.

She scissored short. Sorely shorn,

Soon shackled slave, Samson sighed,

Silently scheming,

Sightlessly seeking

Some savage, spectacular suicide.

But it's considerably more clever than I was expecting or, I think, than what the average human could write on short notice. Fitting any coherent narrative into "six lines, every word beginning with the letter s" is pretty tricky already, and on top of that it checks off the haircut, the treachery, and the retribution.

Commercial banks could offer higher interest rates on deposits, lend out their own capital, or issue bonds. If this didn't provide sufficient funding for whatever amount of lending the government wanted to see, the government itself could loan money to banks to re-lend.

Really though, the easiest patch to the system would just be for FDIC insurance to (officially) cover unlimited balances, or at least scale high enough that only the largest organizations had to worry about it. It makes no sense to require millions of entities (if you include individuals of moderate net worth) to constantly juggle funds to guard against a very small chance of a catastrophic outcome that most of them aren't well positioned to evaluate the probability of. That's exactly the sort of risk insurance is for.

If the concern is that this will create moral hazard because banks that take more risks will be able to pay higher interest rates and fully-insured depositors will have no reason to avoid them, the solution is just for regulators to limit depository institutions to only taking on risks the government is comfortable insuring against. Individuals should be allowed to take on risk to chase returns, but there's no compelling reason to offer this sort of exposure through deposit accounts in particular. Doing so runs contrary to the way most people mentally model them or wish to use them.

There are services that help automate treasury management for smaller companies now, like Vesto.

Until last year T-Bills were paying ~nothing, and it had been that way since 2008, an eternity in the startup world. There was no direct financial incentive to do anything more complicated than park your money in a checking account. Sure, ideally everyone should have been actively managing things to hedge against bank failure, but startups have a zillion things to worry about. SVB's pitch was basically that they were experts on startup finance and would relieve you of having to worry about this yourself. The social proof of these claims was impeccable.

So, yes, many startups screwed up. It turns out that safeguarding $20M isn't entirely trivial. But it's a very predictable sort of screwup. There wasn't really anyone within their world telling them this, it wasn't part of the culture, nobody knew anyone who had been burned by it.

And, well, maybe it should be trivial to safeguard $20M? "You have to actively manage your money or there's a small chance it might disappear" is actually a pretty undesirable property for a banking system to have. The fact that it's true in the first place is a consequence of an interlocking set of government policies — the Fed doesn't allow "narrow banks" (banks that just hold your money in their Fed master accounts rather than doing anything complicated with it) and offers no central bank digital currency (so the only way to hold cash that's a direct liability of the government is to hold actual physical bills). Meanwhile the FDIC only guarantees coverage of up to $250K, a trivial amount by the standards of a business.

The net result of these policies is that the government is effectively saying "If you want to hold dollars in a practical liquid form you have to hold them in a commercial bank. We require that bank to engage in activities that carry some level of risk. We'll try to regulate that bank to make sure it doesn't blow up, but if we fail, that's your problem."

"WTF?" is a reasonable response to this state of affairs. If these companies had had the option to put their money into a narrow bank or hold it as a direct liability of the government, but had nonetheless chosen to trust it to a private bank because they were chasing higher returns, I'd have zero sympathy for them. But our system declines to make those safer options available.