KolmogorovComplicity

2 followers   follows 0 users   joined 2022 September 04 19:51:16 UTC

No bio...

User ID: 126

The mistake Todd makes here is that he seems to recognize the characteristically Trumpian mode of lying — repetition of crude falsities — but not the mode preferred by the progressive establishment — capturing sense-making institutions and turning them toward promoting ideologically-driven narratives. The latter predates Trump, is far more consequential, and is propagated primarily by the likes of the NYT and CNN.

There are services that help automate treasury management for smaller companies now, like Vesto.

Until last year T-Bills were paying ~nothing, and it had been that way since 2008, an eternity in the startup world. There was no direct financial incentive to do anything more complicated than park your money in a checking account. Sure, ideally everyone should have been actively managing things to hedge against bank failure, but startups have a zillion things to worry about. SVB's pitch was basically that they were experts on startup finance and would relieve you of having to worry about this yourself. The social proof of these claims was impeccable.

So, yes, many startups screwed up. It turns out that safeguarding $20M isn't entirely trivial. But it's a very predictable sort of screwup. There wasn't really anyone within their world telling them this, it wasn't part of the culture, nobody knew anyone who had been burned by it.

And, well, maybe it should be trivial to safeguard $20M? "You have to actively manage your money or there's a small chance it might disappear" is actually a pretty undesirable property for a banking system to have. The fact that it's true in the first place is a consequence of an interlocking set of government policies — the Fed doesn't allow "narrow banks" (banks that just hold your money in their Fed master accounts rather than doing anything complicated with it) and offers no central bank digital currency (so the only way to hold cash that's a direct liability of the government is to hold actual physical bills). Meanwhile the FDIC only guarantees coverage of up to $250K, a trivial amount by the standards of a business.

The net result of these policies is that the government is effectively saying "If you want to hold dollars in a practical liquid form you have to hold them in a commercial bank. We require that bank to engage in activities that carry some level of risk. We'll try to regulate that bank to make sure it doesn't blow up, but if we fail, that's your problem."

"WTF?" is a reasonable response to this state of affairs. If these companies had had the option to put their money into a narrow bank or hold it as a direct liability of the government, but had nonetheless chosen to trust it to a private bank because they were chasing higher returns, I'd have zero sympathy for them. But our system declines to make those safer options available.

DEI nonsense probably had something to do with this, but mostly it looks like plain old "innovator's dilemma" stuff. Fear of self-disruption.

Google makes most of its money from search. Search has a property that makes it an especially valuable segment of the ad market — showing an ad for X to someone specifically searching for X right now (that is, who has purchase intent) is many times more effective than showing an ad to someone who some algorithm guesses might be the sort of person who might have an interest in X (e.g. what Facebook mostly has to settle for).

Conversational AI potentially pulls users away from search, and it's not clear it really has a direct equivalent of that property. Sure, people might use conversational AI to decide what products to buy, and it should be able to detect purchase intent, but exactly what do you do with that, and how effective is it?

It's not hard to generate high-level ideas here, but none are proven. Search and conversation have different semantics. User expectations will differ. "Let advertisers pay to have the AI recommend their products over others," for instance, might not be tolerated by users, or might perform worse than search ads do for some reason. I don't know. Nobody does. Product-market fit is non-trivial (the product here being the ads).

On top of this, LLMs require a lot more compute per interaction than search.

So in pushing conversational AI, Google would have been risking a proven, massively profitable product in order to bring something to market that might make less money and cost more to run.

Now, this was probably the right choice. You usually should self-disrupt, because of exactly what's happened here — failing to do so won't actually keep the disruptive product off the market, it'll just let someone else get there first. But it's really, really hard in most corporate cultures to actually pull the trigger on this.

Fortunately for Google, they've split the difference here. While they didn't ship a conversational AI product, they did develop the tech, so they can ship a product fairly quickly. They now have to fend off competition that might not even exist if they'd shipped 18 months ago, but they're in a fairly strong position to do so. Assuming, of course, the same incentives don't also cause them to slow-walk every iterative improvement in this category.

Or we could imagine the opposite. Personal AIs that know us intimately might be able to find us perfect friends and partners. Add in augmented reality tech that eliminates distance as a barrier to any form of socialization that doesn't require physical contact, and perhaps we're about to completely wipe out atomization/loneliness and save modernity from itself.

Really, nobody has any idea where this is going. The only safe bet is that it's going to be big. A service enabling people to share 140-character text snippets was sufficient to meaningfully shift politics and culture, and that's peanuts to this, probably even if the current spring ends short of AGI.

Why do you think that? Aren’t you jumping the gun a bit?

Carmack pointed out in a recent interview:

If you take your entire DNA, it’s less than a gigabyte of information. So even your entire human body is not all that much in the instructions, and the brain is this tiny slice of it — like 40 megabytes, and it’s not tightly coded. So, we have our existence proof of humanity: What makes our brain, what makes our intelligence, is not all that much code.

On this basis he believes AGI will be implemented in "a few tens of thousands of lines of code," ~0.1% of the code in a modern web browser.
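
The sub-gigabyte genome figure is easy to sanity-check. A rough back-of-the-envelope, assuming ~3.1 billion base pairs at 2 bits each (exact counts vary a bit by source):

# Quick check of the "DNA is less than a gigabyte" claim
base_pairs = 3.1e9             # approximate human genome size
bits = base_pairs * 2          # 2 bits encode one of A/C/G/T
megabytes = bits / 8 / 1e6
print(f"~{megabytes:.0f} MB")  # roughly 775 MB, comfortably under a gigabyte

That's about 775 MB, and per Carmack only a small slice of it specifies the brain.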

Pure LLMs probably won't get there, but LLMs are the first systems that appear to represent concepts and the relationships between them in enough depth to be able to perform commonsense reasoning. This is the critical human ability that AI research has spent more than half a century chasing, with little previous success.

Take an architecture capable of commonsense reasoning, figure out how to make it multi-modal, feed it all the text/video/images/etc. you can get your hands on, then set it up as a supervising/coordinating process over a bunch of other tools that mostly already exist — a search engine, a Python interpreter, APIs for working with structured data (weather, calendars, your company's sales records), maybe some sort of scratchpad that lets it "take notes" and refer back to them. For added bonus points you can make it capable of learning in production, but you can likely build something with world-changing abilities without this.
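
To make the "supervising/coordinating process" idea concrete, here's a minimal sketch of what that loop might look like. Everything in it (call_llm, the tool stubs, the JSON message format) is a hypothetical placeholder, not any real API:

import json

# Stub tools; in a real system these would hit a search engine, a sandboxed
# interpreter, structured-data APIs, a notes store, etc.
def web_search(query: str) -> str:
    return f"[search results for {query!r}]"

def run_python(code: str) -> str:
    return "[interpreter output]"

def take_note(text: str) -> str:
    return "[note saved]"

TOOLS = {"search": web_search, "python": run_python, "note": take_note}

def agent(task: str, call_llm, max_steps: int = 10) -> str:
    # call_llm stands in for whatever model is doing the coordinating; it's
    # assumed to return JSON describing either a tool call or a final answer.
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(history)
        history.append({"role": "assistant", "content": reply})
        action = json.loads(reply)
        if action.get("type") == "final":
            return action["answer"]
        result = TOOLS[action["tool"]](**action.get("args", {}))
        history.append({"role": "tool", "content": result})
    return "step limit reached"

The model doesn't have to be good at search or arithmetic itself; it just has to know when to reach for which tool, which is exactly the kind of commonsense judgment at issue.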

While it's possible there are still "unknown unknowns" in the way, this is by far the clearest path to AGI we've ever been able to see.

The best UI for an AI agent is likely to be a well-documented public API, which in theory will allow for much more flexibility in terms of how users interact with software. In the long run, the model could look something like your AI agent generating a custom interface on the fly, to your specifications, tailored for whatever you're doing at the moment. Could be a much better situation for power users than the current trend toward designing UI by A/B testing what will get users to click a particular button 3% more often.

On page 68 of the Course Framework document, we find that one of the "research takeaways" that "helped define the essential course topics" is that "Students should understand core concepts, including diaspora, Black feminism and intersectionality, the language of race and racism (e.g., structural racism, racial formation, racial capitalism) and be introduced to important approaches (e.g., Pan-Africanism, Afrofuturism)."

These "core concepts" are mostly from CRT or the cluster of ideologies to which it belongs. Presumably all variants of a course must teach its "core concepts." We can assume students will need to be familiar with these concepts to pass the AP exam and that the College Board will decline to approve syllabi that don't teach these concepts.

Why would anyone who believes this ideology to be harmful ever agree to allow this course to be taught? You might equally well argue it would be unreasonable to object to the introduction of an "AP White Studies" course in which the "core concepts" are tenets of white nationalism, on the grounds that as long as you make sure students are conversant on the Great Replacement (which will definitely be on the test), there's no rule saying you can't include other perspectives too.

There are serious efforts to get cutting edge domestic chip production up and running in the US, the EU, Japan, and South Korea. I'm not too optimistic about the US (cost disease, overregulation), but it'll likely happen in at least one of those countries in the next 3-5 years, and it's all the same to US multinationals. China may be willing to wait for this precisely so the US is less motivated to defend Taiwan.

Separately, I think we're rather clearly entering a period of disruption with respect to military tech and tactics. Why fight a 20th century war against the 20th century's most powerful military, if you can wait a bit and, I don't know, sneak a million drones into the skies over Taipei from submersible launch platforms?

Commercial banks could offer higher interest rates on deposits, lend out their own capital, or issue bonds. If this didn't provide sufficient funding for whatever amount of lending the government wanted to see, the government itself could loan money to banks to re-lend.

Really though, the easiest patch to the system would just be for FDIC insurance to (officially) cover unlimited balances, or at least scale high enough that only the largest organizations had to worry about it. It makes no sense to require millions of entities (if you include individuals of moderate net worth) to constantly juggle funds to guard against a very small chance of a catastrophic outcome that most of them aren't well positioned to evaluate the probability of. That's exactly the sort of risk insurance is for.

If the concern is that this will create moral hazard because banks that take more risks will be able to pay higher interest rates and fully-insured depositors will have no reason to avoid them, the solution is just for regulators to limit depository institutions to only taking on risks the government is comfortable insuring against. Individuals should be allowed to take on risk to chase returns, but there's no compelling reason to offer this sort of exposure through deposit accounts in particular. Doing so runs contrary to the way most people mentally model them or wish to use them.

I don't often see people mentioning that IQ differences shouldn't imply differences in moral worth -- which suggests to me that many people here do actually have an unarticulated, possibly subconscious, belief that this is the case.

Yes, but not only IQ differences. The belief that some people have more moral worth than others is quietly common. Most people, in whatever contrived hypothetical situation we'd like to pose, would save a brilliant scientist, or a professional basketball player, or a supermodel, over someone dumb, untalented, and unattractive.

This sort of thing does not, without much more, imply genocide or eugenics. (Though support for non-coercive forms of eugenics is common around here and also quietly pretty mainstream where it's practicable and therefore people have real opinions rather than opinions chosen entirely for signaling value. The clearest present-day example is when clients of fertility clinics choose sperm or egg donors.)

Here are some arguments I've found somewhat effective on normies:

Clearly draw the distinction between consumption and capital allocation. Capitalism isn't about who gets to live a lavish lifestyle — in practice, higher-ups in communist countries often get to do this, and you can, in principle, limit it as much as you like under capitalism with consumption or luxury goods taxes. Capitalism is really about who gets to decide where to invest resources to maximize growth. Most people recognize that politicians and government bureaucrats probably aren't going to be the best at deciding e.g. which new technologies to invest in.

Point out that the ultra-rich, who they've probably been told are hoarding wealth, mostly just own shares of companies. Bezos or Musk aren't sitting on warehouses full of food that could feed the hungry or vast portfolios of real estate that could house the homeless. They've got Amazon and Tesla shares. Those companies themselves aren't sitting on very much physical wealth either; most of their value comes from the fact that people believe they'll make money in the future. So even if you liquidated their assets, there would be little benefit for the have-nots.

Compare the scale of billionaire wealth with government resources, e.g. point out that the federal government spends the equivalent of Musk's entire fortune every 12 days or so. I find that this helps dispel the idea that famous (or infamous) capitalists really have 'too much' power. Use this to make the point that taking wealth out of the hands of capitalists wouldn't actually serve to deconcentrate power, but to further concentrate it.
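
The 12-day figure is easy to sanity-check (ballpark assumptions: roughly $6.3T in annual federal outlays and a fortune around $200B; the exact numbers move around):

# Rough check on the "every 12 days" claim
annual_outlays = 6.3e12    # ~$6.3 trillion/year, recent ballpark
musk_fortune = 2.0e11      # ~$200 billion
days = musk_fortune / (annual_outlays / 365)
print(f"~{days:.0f} days") # about 12 days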

Point out that US government spending on education and healthcare often already exceeds that of European social democracies in absolute terms; emphasize that the reason we don't have better schools and free healthcare is ineffective government spending, not private wealth hoarding. Ask if it really makes sense to give the political mechanisms that have produced these inefficiencies control of even more of the economy.

Explain that capitalism is just a scaled version of a natural sort of voluntary exchange. If I make birdhouses in my garage and trade them to my neighbor for tomatoes they grow in their garden, we're technically doing capitalism. A communist system has to come in at some point — maybe, in practice, not at the point where I'm exchanging a handful of birdhouses a year, but certainly at some point if I start making and exchanging a lot of them — and tell me I'm not allowed to do this. The state is already supplying the citizenry with the quantity and quality of birdhouses and tomatoes it deems necessary, and I'm undermining the system. Most people will intuitively grasp that there's something screwy about this, that I'm not actually harming anyone by making and exchanging birdhouses, and that the state really has no business telling me I can't.

Point out that capitalism is, in fact, actually doing a very good job of delivering the kind of outcomes they probably desire from communism. For instance it has substantially reduced working hours in rich countries, has made the poor and the middle class in the US vastly better off (and this didn't stop in the '70s as they've probably been told, per the last chart here), and has lifted billions of people out of poverty globally over the last few decades. If they invoke environmental concerns, point out that the USSR actually had a fairly atrocious environmental record, while almost all new electricity generation in the US is already carbon-free.

That's a lovely theory, but when it's being done by people like the above, then their attitude will be "Yeah, sure, whatever" and they will prefer playing with the shiny new toy to vague premonitions of societal something-or-other.

This tweet is a succinct summary:

Pre-2008: We’ll put the AI in a box and never let it out. Duh.

2008-2020: Unworkable! Yudkowsky broke out! AGI can convince any jail-keeper!

2021-2022: yo look i let it out lol

2023: Our Unboxing API extends shoggoth tentacles directly into your application [waitlist link]

It's clear at this point that no coherent civilizational plan will be followed to mitigate AI x-risk. Rather, the "plan" seems to be to move as fast as possible and hope we get lucky. Well, good luck everyone!

It seems to me there's a non-trivial distinction between shutting down a network to try to prevent influence and data gathering by a semi-hostile foreign government, and shutting down a network to try to silence domestic political speech.

I don't think you could openly do the latter in the US. Though if Harris is elected, I won't be shocked if Musk is indicted on some tenuous securities charge to try to force him out of his companies in favor of more accommodating leadership.

One of the Satanic Temple's causes is separation of church and state, and I expect part of what they're trying to do here is cause governments to decide it's too much trouble to allow holiday displays on public property at all. Vandalism of their displays, or Christians also using such displays in deliberately inflammatory ways, both make it more likely they'll get that outcome.

Meanwhile, I don't think the ideological faction represented by the Satanic Temple would actually care very much about the content of your proposed displays. If anyone did dramatically tear such a display down, it would almost certainly be some progressive activist, a distinctly different faction.

Manual labor jobs are more resistant to GPT-4 than email jobs are, but they're not meaningfully resistant to actual AGI. A lot of the incapacity of our current robotics tech is on the software side, which AGI definitionally fixes. Advanced robots are presently expensive primarily because they're low-volume specialty items, which won't be true if smarter software suddenly allows them to perform far more tasks. A few years later you'll have robots building more robots with no human labor input, an exponential process which leads to hilarious outcomes like economic output doubling every month or two.
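
Even taking the doubling claim as loose hyperbole, the compounding is the point (purely illustrative numbers):

# What "output doubling every month or two" compounds to over a year
for months_per_doubling in (1, 2):
    doublings = 12 / months_per_doubling
    print(f"doubling every {months_per_doubling} month(s): ~{2 ** doublings:,.0f}x in a year")
# monthly doubling is ~4,096x in a year; every two months is still ~64x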

This isn't just a matter of tweaking some tax policies. Our reference class for something like AGI should be more like the transition into industrial capitalism, except much faster, and on a much larger absolute scale. Humans may survive; I'm not entirely persuaded by arguments to the contrary. Existing forms of social organization almost certainly won't. Thinking we'll fix this up with UBI or public works employment or even Fully Automated Luxury Communism is like a feudal king thinking he'll deal with industrial capitalism by treating factories like farmland and handing them out to loyal vassals.

But if you go hiking occasionally the AI can sell you tents and backpacks and cabin rentals.

Really, outcomes in most markets aren't nearly as perverse as what we see with Tinder. Chrome, for instance, doesn't intentionally fail to load web pages so that Google can sell me premium subscriptions and boosters to get them to load. Unlike Tinder, Chrome is monetized in a way that doesn't provide an incentive for its developer to intentionally thwart me in my attempts to use it for its ostensible purpose, and there's enough competition that if Google tried this people would stop using it.

Sure, we could look at the Great Leap Forward, cite Chesterton, and conclude that abandoning tradition is dangerous. But the Green Revolution also involved abandoning many traditional agricultural methods, and:

Studies show that the Green Revolution contributed to widespread reduction of poverty, averted hunger for millions, raised incomes, reduced greenhouse gas emissions, reduced land use for agriculture, and contributed to declines in infant mortality.

This is just one of many cases where radical change produced outcomes that are almost universally regarded as beneficial. We have also, for instance, reduced deaths from infectious disease by more than 90%. One doesn't have to look at too many graphs like this or this to understand why "change," as an idea, has so much political clout at the present moment.

Of the three things banned by the Texas bill, there’s no issue at all with two. DEI departments, and compelling (profession of) belief under implicit threat of failing a class, are not forms of free speech. They’re means of enforcing ideological conformity through institutional power. They have as much right to exist under the principles of free expression as Orwell's Ministry of Truth. If woke professors or laid off DEI employees want to promote their views by, say, handing out fliers in the hallways, that's fine.

Banning tenure is a little more questionable, but even here it’s not so clear where advocates of free expression should land. This isn’t a straightforward case of tenure being banned so that the establishment can censor antiestablishment views. It's being banned, rather, by one group with institutional power (political leaders) to try to stop another group with institutional power (professors) from indoctrinating students into the dominant elite ideology. This is historically unusual because, of course, in most times and places political leaders support the dominant elite ideology.

Yes. Because of, I'm pretty sure, parking.

Once a system gets bad enough, everyone with resources or agency stops using it, and then stops caring about it, leaving nobody who can effectively advocate for improvement. But, of course, this can only play out if there's a viable alternative. In most cities, cars are that alternative, even despite traffic. People are evidently willing to sit in horrible stop-and-go traffic in order to avoid using even mildly unpleasant mass transit.

What they're not willing to do, apparently, is sit in horrible stop-and-go traffic and then have to spend 45 minutes looking for an on-street parking space that might end up being half a mile from their destination. That's the situation in NYC, which, unusually for the US, has no parking space minimums for businesses or residences and so effectively has zero free parking lots. If you want to practically substitute car travel for subway travel in NYC, you need to take Uber everywhere or use paid lots. Either option is sufficiently expensive (easily upwards of $10K/year) that even most of the upper middle class opts for the subway.

It's worth keeping an eye on this, because self-driving cars could completely disrupt it, either by dropping taxi prices 50% or more or by allowing cars to drop off their owners and then go find parking on their own.

both will stay incredibly low-status.

The thing is, there's a whole framework in place now for fighting this. Being gay used to be incredibly low-status. Being trans used to be incredibly low-status. Poly, kink, asexuality, etc. The dominant elite culture now says you're required to regard these as neutral at worst, and ideally as brave examples of self-actualization.

The robosexuals are absolutely going to try to claim a place within this framework and demand that people respect their preferences. Elite sexual morality has, at least formally, jettisoned every precept except consent, and there's not much of an argument against this on that basis.

Your hypothetical Important Ideas of the 20th Century course, and I think the way you're choosing to imagine the white nationalist course, aren't quite the same as what's happening here. You're ignoring the social and academic context in which this course is being introduced.

This isn't just the equivalent of a course having high school students learn the tenets of white nationalism — which most people would already find wildly objectionable, even if you don't — it's the equivalent of white nationalists themselves introducing such a course, in which students are not only taught about white nationalist beliefs but are presented with history interpreted through a white nationalist lens and taught how to perform such interpretation themselves. Also white nationalists get to write and grade the exam, can veto syllabi that deviate from their understanding of what the course should be, and know they can rely on most teachers interested in teaching the course either being white nationalists themselves or at least naively willing to accept white nationalist framing.

So, sure, in some extremely hypothetical sense a state where the consensus was against CRT could adapt this African American Studies course to "local priorities and preferences" by having students learn its CRT-derived "core concepts" via James Lindsay. Those students might even have a clearer picture of those concepts than they'd get from reading the often obfuscatory writings of their proponents! But in practice, no, you couldn't remotely do this. The College Board wouldn't approve your syllabus, on the contextually reasonable basis that it didn't represent African American Studies as taught in colleges. Your students wouldn't be able to demonstrate "correct" (that is, politically correct) understanding on open-ended exam questions.

Almost certainly, the "local priorities and preferences" language just cashes out as "you can add some modules about local history," not "you can refocus the course on questioning the validity of the analytical framework that underpins the entire academic field it's situated within."

I don't think these ideological guardrails will be anything like universal, in the long run. Sure, when Apple reboots Siri on top of an LLM it's going to be "correct" like this, but if you're developing something to sell to others via an API or whatever, this kind of thing just breaks too many use cases. Like, if I want to use an LLM to drive NPC dialogue in an RPG, the dwarves can't be lecturing players about how racism against elves is wrong. (Which, yes, ChatGPT will do.)

If OpenAI sticks to this, it will just create a market opportunity for others. Millions of dollars isn't that much by tech startup standards.

Just as a point of clarification, it's Halle Bailey who's playing Ariel in The Little Mermaid, not Halle Berry. The latter is 56; casting her to play a character who's canonically 16, and whose teenage naivety and rebelliousness are her main personality traits, would provoke a whole different culture war fracas. (Bailey is 22, and 22 playing 16 isn't unusual by Hollywood standards.)

What I'm curious to see is what they're going to do with the plot. The prince falling in love with a mute Ariel on the basis of her physical appearance and friendly, accommodating behavior seems deeply problematic by present woke standards.

Part of what's making comment nesting difficult to visually parse is that your brain includes the expand/collapse control in the "box" occupied by a comment when you're looking at the top of the comment (because the control is at the top), but not when you're looking at the bottom of the comment. Since you're judging nesting by looking at the bottom of one comment vs. the top of the subsequent comment, the visual effect of this is that there's barely any indentation.

This image demonstrates the issue, with red lines drawn to show the edges your brain is paying attention to when judging nesting. Visually, there's only 4-5px of indentation.

This could be fixed by indenting more, by greatly reducing the visual weight of the expand/collapse control (e.g. by making it light gray), or by explicitly drawing boxes around comment bodies, which your visual system will latch onto in place of drawing its own boxes. Here's an illustration of the last approach, as implemented in my current custom CSS.

(New Reddit incidentally has the same problem, except with its avatar images instead of an expand/collapse control.)

This is intended to make comment threads more readable, primarily by drawing borders around comments so the nesting structure is more obvious. Also adjusts comment thread whitespace. The last rule limits the bodies of posts and comments to a reasonable width, so lines of text aren't uncomfortably long on large screens. Only tested with the default theme, and better tested on desktop than mobile. Screenshot attached.

Edit: now with proper margins for the 'more comments' buttons that appear for deeply-nested posts.

.comment .comment-collapse-desktop, .comment .comment-collapse-desktop:hover {
  /* Restyle the expand/collapse control as a solid gray tab */
  border-left: none !important;
  background-color: var(--gray-400);
  padding-right: 7px;
  border-radius: 7px 0 0 0;
}

.comment .comment-collapse-desktop:hover {
  /* Highlight the collapse control on hover */
  background-color: var(--primary-light1);
}

.comment .comment-body {
  /* Draw a box around each comment body */
  border: 1px solid var(--gray-400);
  border-left: none;
  padding: 0;
}

.comment, .comment-section > .comment {
  /* Comment container: gray border, rounded top-left corner, spacing above */
  margin: 1rem -1px -1px 0;
  padding-left: 0;
  border-color: var(--gray-400) !important;
  border-width: 5px !important;
  border-radius: 5px 0 0 0;
}

.comment .comment {
  /* Indent nested comments */
  margin-left: 1rem;
}

.comment-anchor:target, .unread {
  /* Subtle highlight for targeted/unread comments */
  background-color: rgba(0, 230, 245, 0.1) !important;
}

.comment-write {
  /* Breathing room around the reply box */
  padding: 1rem !important;
}

.more-comments > button {
  /* Margins for the 'more comments' buttons on deeply-nested threads */
  margin: 1rem !important;
}

#post-text, .comment-text, .comment-write {
  /* Keep lines of text a readable length on large screens */
  max-width: 60rem !important;
}


You can also add this rule if you want to change the font weight and size for post/comment bodies:

#post-text, .comment-text, .comment-write, #post-text p, .comment-text p, .comment-write p {
  font-size: 16px;
  font-weight: 450;
}


I believe the defaults are 14px and 400.

Screenshot: /images/16623978378158753.webp