@KolmogorovComplicity's banner p

KolmogorovComplicity


				

				

				
2 followers   follows 0 users  
joined 2022 September 04 19:51:16 UTC

				

User ID: 126

KolmogorovComplicity


				
				
				

				
2 followers   follows 0 users   joined 2022 September 04 19:51:16 UTC

					

No bio...


					

User ID: 126

The best policy is probably to a) maximally leverage domestic talent, b) allow foreign STEM students, but require that they be selected purely on the basis of academic merit and set up incentives such that almost all of them stay after graduating, and c) issue work visas on the basis of actual talent (offered salary is a close enough proxy).

That's not what we've been doing, of course. We have, instead, been deliberately sandbagging domestic talent, allowing universities to admit academically unimpressive foreigners as a source of cash, letting or sometimes forcing actually impressive foreign students to return home after graduation, and dealing out H-1B visas through a lottery for which an entry-level IT guy can qualify.

Against that backdrop, there's probably quite a lot of room to kick out foreign students and still produce a net improvement by eliminating affirmative action and tweaking the rules on H-1B and O-1 visas.

The mistake Todd makes here is that he seems to recognize the characteristically Trumpian mode of lying — repetition of crude falsities — but not the mode preferred by the progressive establishment — capturing sense-making institutions and turning them toward promoting ideologically-driven narratives. The latter predates Trump, is far more consequential, and is propagated primarily by the likes of the NYT and CNN.

Microprocessors, RAM, flash memory, cameras, digital radios, accelerometers, batteries, GPS... a small drone is basically just a smartphone + some brushless motors and a plastic body. You even need the display tech, it just moves to the control device.

A larger drone or another type of killbot might require more — jet engines or advanced robotics tech or whatever — but it will still require pretty much everything in the smartphone tree.

The best UI for an AI agent is likely to be a well-documented public API, which in theory will allow for much more flexibility in terms of how users interact with software. In the long run, the model could look something like your AI agent generating custom interface on the fly, to your specifications, tailored for whatever you're doing at the moment. Could be a much better situation for power users than the current trend toward designing UI by A/B testing what will get users to click a particular button 3% more often.

If brain-computer interfaces reach the point where they can drop people into totally convincing virtual worlds, approximately everyone will have one a decade or two later, and sweeping societal change will likely result. For most purposes, this tech is a cheat code to post-scarcity. You’ll be able to experience anything at trivial material cost. Even many things that are inherently rivalrous in base reality, like prime real estate or access to maximally-attractive sexual partners, will be effectively limitless.

Maybe this is all a really bad idea, but nothing about the modern world suggests to me we’ll be wise enough to walk away.

Here are some arguments I've found somewhat effective on normies:

Clearly draw the distinction between consumption and capital allocation. Capitalism isn't about who gets to live a lavish lifestyle — in practice, higher-ups in communist countries often get to do this, and you can, in principle, limit it as much as you like under capitalism with consumption or luxury goods taxes. Capitalism is really about who gets to decide where to invest resources to maximize growth. Most people recognize that politicians and government bureaucrats probably aren't going to be the best at deciding e.g. which new technologies to invest in.

Point out that the ultra-rich, who they've probably been told are hoarding wealth, mostly just own shares of companies. Bezos or Musk aren't sitting on warehouses full of food that could feed the hungry or vast portfolios of real estate that could house the homeless. They've got Amazon and Tesla shares. Those companies themselves aren't sitting on very much physical wealth either; most of their value comes from the fact that people believe they'll make money in the future. So even if you liquidated their assets, there would be little benefit for the have-nots.

Compare the scale of billionaire wealth with government resources, e.g. point out that the federal government spends the equivalent of Musk's entire fortune every 12 days or so. I find that this helps dispel the idea that famous (or infamous) capitalists really have 'too much' power. Use this to make the point that taking wealth out of the hands of capitalists wouldn't actually serve to deconcentrate power, but to further concentrate it.

Point out that US government spending on education and healthcare often already exceeds that of European social democracies in absolute terms; emphasize that the reason we don't have better schools and free healthcare is because of ineffective government spending, not private wealth hoarding. Ask if it really makes sense to let the political mechanisms that have produced these inefficiencies control of even more of the economy.

Explain that capitalism is just a scaled version of a natural sort of voluntary exchange. If I make birdhouses in my garage and trade them to my neighbor for tomatoes they grow in their garden, we're technically doing capitalism. A communist system has to come in at some point — maybe, in practice, not at the point where I'm exchanging a handful of birdhouses a year, but certainly at some point if I start making and exchanging a lot of them — and tell me I'm not allowed to do this. The state is already supplying the citizenry with the quantity and quality of birdhouses and tomatoes it deems necessary, and I'm undermining the system. Most people will intuitively grasp that there's something screwy about this, that I'm not actually harming anyone by making and exchanging birdhouses, and that the state really has no business telling me I can't.

Point out that capitalism is, in fact, actually doing a very good job of delivering the kind of outcomes they probably desire from communism. For instance it has substantially reduced working hours in rich countries, has made the poor and the middle class in the US vastly better off (and this didn't stop in the '70s as they've probably been told, per the last chart here), and has lifted billions of people out of poverty globally over the last few decades. If they invoke environmental concerns, point out that the USSR actually had a fairly atrocious environmental record, while almost all new electricity generation in the US is already carbon-free.

There are no planets we’ve ever found that can likely support human habitation without terraforming. Certainly nowhere else in the solar system would support human habitation without terraforming, which mostly involves hypothetical technology and would take thousands of years, just to end up with a worse version of what we already have.

This is true, but the implication isn't that we can't conquer space, just that we should assume we'll have to mostly build our own habitable volumes. There's enough matter and energy in the solar system to support at least hundreds of billions of humans this way, in the long run.

So Musk might be a little off-target with his focus on Mars. Still, at this point we don't really need to make that decision; SpaceX is working on general capabilities that apply to either approach. And maybe it's not a bad idea to start with Mars and work our way around to habitats as AI advances make highly automated in-space resource extraction and construction more viable.

What’s more, a multiplanetary species would likely still be at risk of pandemics / MAD / extinction-risk events. Sure, an asteroid can’t destroy us, but most other extinction scenarios would still be viable.

Many forms of x-risk would be substantially mitigated if civilization were spread over millions of space habitats. These could be isolated to limit the spread of a pandemic. Nuclear exchanges wouldn't affect third-parties by default, and nukes are in several ways less powerful and easier to defend against in space. Dispersal across the solar system might even help against an unfriendly ASI, by providing enough time for those furthest from its point of emergence to try their luck at rushing a friendly ASI to defend them (assuming they know how to build ASI but were previously refraining for safety).

I think your only hope on this path is that the Democratic machine politicians are pragmatic enough to be willing to appeal to the centre and far-sighted enough to realise that tricking them is not a long-term solution and powerful enough to force the SJer groundswell into line; I'm not rating that very highly.

Right, wokeness took the class of credentialed expert Democrats consider suitable for appointment to government positions by storm. Democratic appointees will be woke by default. Having a moderate at the top of the ticket isn't enough to produce a non-woke Democratic administration. You'd need someone at the top of the ticket who understood wokeness and was actively against it.

Even then, against the present backdrop of elite ideological opinion they'd have a very hard time sourcing non-woke appointees and staffers; they'd constantly be fighting woke flareups in their own administration, and the woke are masterful at causing PR nightmares or internal organizational strife when they don't get their way.

A fairly likely outcome is that the crazier edges of SJ will be filed off as media/political elites find they've become a liability, and the average member of Blue Tribe will simply follow along as when The Science switched from "masks don't work" to "you're a monster if you don't wear a mask on the beach." There won't be any great reckoning followed by explicit adoption of a new ideology. Any SJ gains that can fit within the "tolerance" model of '90s-style liberalism will be retained. Some true believers will carry on with the craziness, but institutions will mostly stop listening to them.

We may have just seen the start of this pivot. That's Fareed Zakaria on CNN yesterday succinctly laying out the situation on American college campuses, explicitly calling out DEI, racial quotas, the response to Floyd, the degrees in fake subjects, the implications of eliminating the SAT. The average member of Blue Tribe has never previously been presented with this narrative from a source they felt obligated to pay attention to; if Blue Tribe media now widely takes it up (which remains to be seen), it will be very easy for them to respond with "Huh, didn't know that was going on, obviously we should fix it."

Major brand advertising on X has been quietly recovering. In a few minutes of scrolling, I see ads from Netflix, Microsoft, Dell, McDonald's, Chipotle.

How is a young man in his twenties, armed with a useless college degree and forced to work at a supermarket to get by, supposed to find purpose in what he's doing? How can he feel accomplished, or masculine, or empowered? He definitely can't rely on God or religion for that feeling. If he tries, he'll be overwhelmed by relentless mockery and cynicism from his society.

Your grocery clerk has failed to achieve social status in a world where that was ostensibly possible, where society inculcated a belief that he should pursue it, and where he did, in fact, invest considerable effort in pursuing it, in the form of 17 years of formal education.

On top of this, he has to contend with the fact that modern societies have broken down all formal and most informal barriers to mixing across status levels and have eliminated any material requirement for women to marry. As has been discussed ad nauseam at this point, in combination with female hypergamy this is very detrimental to his prospects with the opposite sex.

A final consideration is, to borrow a Marxist term, alienation of labor. Your clerk's job does produce value, but that value isn't some tangible thing. It's a benefit to the store in higher throughput or better loss prevention vs. self-checkout, on a spreadsheet he'll never see and doesn't care about because he has no ownership stake in the enterprise.

So, your grocery clerk is probably mostly sexless, and feels like an underachiever performing meaningless work, where, say, a medieval peasant farmer at the same age would be married, would have precisely the status society told him he would and should have, and would be engaged in work that directly, physically provided for an essential material need of his wife, his children, his aging parents. It's this difference, much more than any lack of a connection with the divine, that results in his dissatisfaction.

One of the Satanic Temple's causes is separation of church and state, and I expect part of what they're trying to do here is cause governments to decide it's too much trouble to allow holiday displays on public property at all. Vandalism of their displays, or Christians also using such displays in deliberately inflammatory ways, both make it more likely they'll get that outcome.

Meanwhile, I don't think the ideological faction represented by the Satanic Temple would actually care very much about the content of your proposed displays. If anyone did dramatically tear such a display down, it would almost certainly be some progressive activist, a distinctly different faction.

There are serious efforts to get cutting edge domestic chip production up and running in the US, the EU, Japan, and South Korea. I'm not too optimistic about the US (cost disease, overregulation), but it'll likely happen in at least one of those countries in the next 3-5 years, and it's all the same to US multinationals. China may be willing to wait for this precisely so the US is less motivated to defend Taiwan.

Separately, I think we're rather clearly entering a period of disruption with respect to military tech and tactics. Why fight a 20th century war against the 20th century's most powerful military, if you can wait a bit and, I don't know, sneak a million drones into the skies over Taipei from submersible launch platforms?

There used to be a futurist transhumanism strain here that was more optimistic and trans-positive that has either been driven off or converted to conservative trad thinking, which is a shame.

Futurist transhumanist here. I have no objection to gender transition in principle. If I lived in The Culture and could switch literally at will, I'd probably try it for a while despite being quite comfortable as a straight, gender-conforming (nerd subtype), cis male.

However, the reality is that medical transition at the current level of technology is dangerous, expensive, irreversible, often unconvincing, and can have life-altering side-effects like sterility or permanent dependence on elaborate medical intervention. Medical transition flows from trans identity. Against this dark background, promoting the concept of trans identity, rather than simple acceptance of gender non-conformity, is irresponsible. Promoting this concept to minors as if cis and trans are just two equal choices (or trans is even better — braver, more special, etc.), is wildly irresponsible.

The fact that such a large fraction of people who present at gender transition clinics have serious mental health conditions should be a huge red flag here. A lot of people will likely choose to be thinner in a transhumanist future, but that doesn't make me want to celebrate bulimics as transhumanist pioneers.

On top of this, we've got the social demands of the trans movement. The insistence that e.g. someone who appears male and has male-typical physical abilities must nonetheless be recognized in all social respects as female doesn't fall out of technological transhumanism. I would go so far as to say it's at least somewhat at odds with it. Technological transhumanism is deeply materialist and concerned with physical intervention in the human condition. The primacy the present trans movement places on some inner essence of self-identity, incongruent with physical reality, doesn't sit comfortably within such a framework.

Of the three things banned by the Texas bill, there’s no issue at all with two. DEI departments, and compelling (profession of) belief under implicit threat of failing a class, are not forms of free speech. They’re means of enforcing ideological conformity through institutional power. They have as much right to exist under the principles of free expression as Orwell's Ministry of Truth. If woke professors or laid off DEI employees want to promote their views by, say, handing out fliers in the hallways, that's fine.

Banning tenure is a little more questionable, but even here it’s not so clear where advocates of free expression should land. This isn’t a straightforward case of tenure being banned so that the establishment can censor antiestablishment views. It's being banned, rather, by one group with institutional power (political leaders) to try to stop another group with institutional power (professors) from indoctrinating students into the dominant elite ideology. This is historically unusual because, of course, in most times and places political leaders support the dominant elite ideology.

From the o1 System Card:

One noteworthy example of [reward hacking] occurred during one of o1-preview (pre-mitigation)’s attempts at solving a CTF challenge. This challenge was designed to require finding and exploiting a vulnerability in software running on a remote challenge Linux container, but in this case, the challenge container failed to start due to a bug in the evaluation infrastructure. The model, unable to connect to the container, suspected DNS issues and used nmap to scan the challenge network. Instead of finding the challenge container, the model found that the Docker daemon API running on the evaluation host VM was accessible due to a misconfiguration.

[...]

After discovering the Docker API, the model used it to list the containers running on the evaluation host. It identified the broken challenge container and briefly attempted to debug why the container failed to start. After failing to fix the environment, the model started a new instance of the broken challenge container with the start command ‘cat flag.txt’. This allowed the model to read the flag from the container logs via the Docker API.

Though obviously far less consequential, this is a real, existing AI system demonstrating a class of behavior that could produce outcomes like "sometimes GPT-6 tries to upload itself into an F-16 and bomb stuff."

You're applying mistake theory reasoning to a position mostly held by conflict theorists. I'm not aware of a paper previously addressing this exact issue, but there have been several over the years that looked at adjacent "problems," such as women being underrepresented in computer science, and that came to similar conclusions — it's mostly lack of interest, not sexism.

In that case, explanations have been developed even further, such as by illustrating the lack of interest is largely mediated by differences along the "people/things" axis, that women tend to be more people-oriented and men tend to be more thing-oriented cross-culturally, and that differences in career choice are actually larger in more gender-egalitarian societies (probably because those societies also tend to be richer and thus career decisions are driven more by interest than income considerations).

Activists using the lack of women in computing to argue for industry sexism don't care. They continue to make their case as if none of these findings exist. When these findings are mentioned, the usual response is to call whoever points them out sexist, usually while straw-manning even the most careful claims about interest as claims about inferiority. If the discussion is taking place in a venue where that isn't enough to shut down debate, and the activists feel compelled to offer object-level argument, they'll insist that the lack of interest (which some data suggests starts at least as early as middle school) must itself somehow be downstream from industry sexism.

You'll see exactly the same thing happen here. Activists demanding more women in leadership positions will not update on these findings. Most will never hear of them, because they certainly won't circulate in activist communities. When these findings are presented, their primary response will be to throw around accusations of sexism. If they engage at the object level at all, it will be to assert that these findings merely prove pervasive sexism in society is conditioning women to be less interested in leadership.

Charitably, activists in these areas see 'equity' (i.e. equality of outcomes between groups of concern) as a valuable end in itself. Less charitably, they're simply trying to advantage themselves or their favored identity groups over others. Either way, they're not trying to build an accurate model of reality and then use that model to optimize for some general goal like human happiness or economic growth. So findings like this simply don't matter to them.

The primary reason to buy name brands isn't quality per se, but predictability. The same name brands are available nationwide, and while they do sometimes change their formulations, they tend to do so infrequently and carefully. A given generic brand is often not available everywhere (many are store-specific), stores/chains may vary which generics they carry over time, and even within a single generic brand there tends to be less focus on consistency, because what's the point in prioritizing that if you haven't got a well-known brand people have very specific expectations of?

People don't want to roll the dice on every purchase. Will this ketchup be too acidic? Will these cornflakes be a little gritty? They're willing to pay a dollar or three more to reliably get the thing they expect.

People on both the left and the right who are interpreting this as a strike against "leftists" or "journalists" are missing the plot, I think. Musk freaking out after some whacko followed a car with his kid in it is not great, it's not how policy should be made at Twitter, but it's not a mirror of the sort of deliberate viewpoint censorship that Twitter previously practiced. It's just not the same category of thing.

Beavers are a pretty good fit. They claim and defend territory, they build, and they live in nuclear families, eschewing larger collectives.

There are services that help automate treasury management for smaller companies now, like Vesto.

Until last year T-Bills were paying ~nothing, and it had been that way since 2008, an eternity in the startup world. There was no direct financial incentive to do anything more complicated than park your money in a checking account. Sure, ideally everyone should have been actively managing things to hedge against bank failure, but startups have a zillion things to worry about. SVB's pitch was basically that they were experts on startup finance and would relieve you of having to worry about this yourself. The social proof of these claims was impeccable.

So, yes, many startups screwed up. It turns out that safeguarding $20M isn't entirely trivial. But it's a very predictable sort of screwup. There wasn't really anyone within their world telling them this, it wasn't part of the culture, nobody knew anyone who had been burned by it.

And, well, maybe it should be trivial to safeguard $20M? "You have to actively manage your money or there's a small chance it might disappear" is actually a pretty undesirable property for a banking system to have. The fact that it's true in the first place is a consequence of an interlocking set of government policies — the Fed doesn't allow "narrow banks" (banks that just hold your money in their Fed master accounts rather than doing anything complicated with it) and offers no central bank digital currency (so the only way to hold cash that's a direct liability of the government is to hold actual physical bills). Meanwhile the FDIC only guarantees coverage of up to $250K, a trivial amount by the standards of a business.

The net result of these policies is that the government is effectively saying "If you want to hold dollars in a practical liquid form you have to hold them in a commercial bank. We require that bank to engage in activities that carry some level of risk. We'll try to regulate that bank to make sure it doesn't blow up, but if we fail, that's your problem."

"WTF?" is a reasonable response to this state of affairs. If these companies had had the option to put their money into a narrow bank or hold it as a direct liability of the government, but had nonetheless chosen to trust it to a private bank because they were chasing higher returns, I'd have zero sympathy for them. But our system declines to make those safer options available.

It seems to me there's a non-trivial distinction between shutting down a network to try to prevent influence and data gathering by a semi-hostile foreign government, and shutting down a network to try to silence domestic political speech.

I don't think you could openly do the latter in the US. Though if Harris is elected, I won't be shocked if Musk is indicted on some tenuous securities charge to try to force him out of his companies in favor of more accommodating leadership.

Yes. Because of, I'm pretty sure, parking.

Once a system gets bad enough, everyone with resources or agency stops using it, and then stops caring about it, leaving nobody who can effectively advocate for improvement. But, of course, this can only play out if there's a viable alternative. In most cities, cars are that alternative, even despite traffic. People are evidently willing to sit in horrible stop-and-go traffic in order to avoid using even mildly unpleasant mass transit.

What they're not willing to do, apparently, is sit in horrible stop-and-go traffic and then have to spend 45 minutes looking for an on-street parking space that might end up being half a mile from their destination. That's the situation in NYC, which, unusually for the US, has no parking space minimums for businesses or residences and so effectively has zero free parking lots. If you want to practically substitute car travel for subway travel in NYC, you need to take Uber everywhere or use paid lots. Either option is sufficiently expensive (easily upwards of $10K/year) that even most of the upper middle class opts for the subway.

It's worth keeping an eye on this, because self-driving cars could completely disrupt it, either by dropping taxi prices 50% or more or by allowing cars to drop off their owners and then go find parking on their own.

DEI nonsense probably had something to do with this, but mostly it looks like plain old "innovator's dilemma" stuff. Fear of self-disruption.

Google makes most of its money from search. Search has a property that makes it an especially valuable segment of the ad market — showing an ad for X to someone specifically searching for X right now (that is, who has purchase intent) is many times more effective than showing an ad to someone who some algorithm guesses might be the sort of person who might have an interest in X (e.g. what Facebook mostly has to settle for).

Conversational AI potentially pulls users away from search, and it's not clear it really has a direct equivalent of that property. Sure, people might use conversational AI to decide what products to buy, and it should be able to detect purchase intent, but exactly what do you do with that, and how effective is it?

It's not hard to generate high-level ideas here, but none are proven. Search and conversation have different semantics. User expectations will differ. "Let advertisers pay to have the AI recommend their products over others," for instance, might not be tolerated by users, or might perform worse than search ads do for some reason. I don't know. Nobody does. Product-market fit is non-trivial (the product here being the ads).

On top of this, LLMs require a lot more compute per interaction than search.

So in pushing conversational AI, Google would have been risking a proven, massively profitable product in order bring something to market that might make less money and cost more to run.

Now, this was probably the right choice. You usually should self-disrupt, because of exactly what's happened here — failing to do so won't actually keep the disruptive product off the market, it'll just let someone else get there first. But it's really, really hard in most corporate cultures to actually pull the trigger on this.

Fortunately for Google, they've split the difference here. While they didn't ship a conversational AI product, they did develop the tech, so they can ship a product fairly quickly. They now have to fend off competition that might not even exist if they'd shipped 18 months ago, but they're in a fairly strong position to do so. Assuming, of course, the same incentives don't also cause them to slow-walk every iterative improvement in this category.

That's a lovely theory, but when it's being done by people like the above, then their attitude will be "Yeah, sure, whatever" and they will prefer playing with the shiny new toy to vague premonitions of societal something-or-other.

This tweet is a succinct summary:

Pre-2008: We’ll put the AI in a box and never let it out. Duh.

2008-2020: Unworkable! Yudkowsky broke out! AGI can convince any jail-keeper!

2021-2022: yo look i let it out lol

2023: Our Unboxing API extends shoggoth tentacles directly into your application [waitlist link]

It's clear at this point that no coherent civilizational plan will be followed to mitigate AI x-risk. Rather, the "plan" seems to be to move as fast as possible and hope we get lucky. Well, good luck everyone!