KolmogorovComplicity

1 follower · follows 0 users · joined 2022 September 04 19:51:16 UTC

User ID: 126


This is intended to make comment threads more readable, primarily by drawing borders around comments so the nesting structure is more obvious. Also adjusts comment thread whitespace. The last rule limits the bodies of posts and comments to a reasonable width, so lines of text aren't uncomfortably long on large screens. Only tested with the default theme, and better tested on desktop than mobile. Screenshot attached.

Edit: now with proper margins for the 'more comments' buttons that appear for deeply-nested posts.

/* Restyle the expand/collapse control so it reads as part of the comment,
   not as a heavy left edge. */
.comment .comment-collapse-desktop, .comment .comment-collapse-desktop:hover {
  border-left: none !important;
  background-color: var(--gray-400);
  padding-right: 7px;
  border-radius: 7px 0 0 0;
}

/* Hover state overrides the gray above. */
.comment .comment-collapse-desktop:hover {
  background-color: var(--primary-light1);
}

/* Draw a box around each comment body so the nesting structure is obvious. */
.comment .comment-body {
  border: 1px solid var(--gray-400);
  border-left: none;
  padding: 0;
}

.comment, .comment-section > .comment {
  margin: 1rem -1px -1px 0;
  padding-left: 0;
  border-color: var(--gray-400) !important;
  border-width: 5px !important;
  border-radius: 5px 0 0 0;
}

/* Indent nested replies. */
.comment .comment {
  margin-left: 1rem;
}

/* Highlight the targeted/unread comment. */
.comment-anchor:target, .unread {
  background-color: rgba(0, 230, 245, 0.1) !important;
}

.comment-write {
  padding: 1rem !important;
}

/* Margins for the 'more comments' buttons on deeply-nested threads (see edit above). */
.more-comments > button {
  margin: 1rem !important;
}

/* Limit body text width so lines aren't uncomfortably long on large screens. */
#post-text, .comment-text, .comment-write {
  max-width: 60rem !important;
}


You can also add this rule if you want to change the font weight and size for post/comment bodies:

#post-text, .comment-text, .comment-write,
#post-text p, .comment-text p, .comment-write p {
  font-size: 16px;
  font-weight: 450;
}


I believe the defaults are 14px and 400.

[Screenshot: /images/16623978378158753.webp]

Logically, shouldn't we expect powerful absolutist/totalitarian states to dominate, ceteris paribus?

Totalitarian systems tend to suppress innovation, either deliberately because the powerful fear the social changes they might produce, or unintentionally by restricting ideas and freedom of action. For most of history, innovation was sufficiently slow that this wasn't important, or at least took centuries to catch up with a given society, but the industrial revolution rapidly accelerated the process. Today a nation can become economically and militarily uncompetitive in as little as 10 or 20 years. That's fast enough to register on the planning horizons of current leaders.

China has done well for itself economically over the last few decades, but this was mostly catch-up growth, the adoption of already-existing tech. Here, there's no need to have a society that fosters innovation through freedom of action and a free exchange of ideas, since you're merely deploying the products of innovation that took place elsewhere. There's also much less risk of social disruption, as you can look at the social changes that particular technologies created in countries that deployed them earlier, and shape deployment to ameliorate those (see e.g. the Great Firewall). Incrementally improving an existing technology, such as by refining the manufacturing process for an existing product, has similar properties.

China has yet to demonstrate it's capable of fundamental innovation, of being first to invent and deploy a basically new thing. So far, the signs don't look too great — a tech industry crackdown, a cryptocurrency ban, a requirement for government pre-approval of individual App Store apps. It's hard to believe China won't carefully restrict AI capabilities to a few trusted institutions. I could see them objecting even to something as anodyne as Stable Diffusion before too long, given that it will happily generate as many offensive caricatures as you'd like of Xi Jinping.

Red tribe, to the extent that many of their jobs involve manipulating the physical world directly, may turn out to be relatively robust against AI replacement.

Perhaps, but look at DayDreamer:

The Dreamer algorithm has recently shown great promise for learning from small amounts of interaction by planning within a learned world model, outperforming pure reinforcement learning in video games. Learning a world model to predict the outcomes of potential actions enables planning in imagination, reducing the amount of trial and error needed in the real environment. [...] Dreamer trains a quadruped robot to roll off its back, stand up, and walk from scratch and without resets in only 1 hour. We then push the robot and find that Dreamer adapts within 10 minutes to withstand perturbations or quickly roll over and stand back up. On two different robotic arms, Dreamer learns to pick and place multiple objects directly from camera images and sparse rewards, approaching human performance. On a wheeled robot, Dreamer learns to navigate to a goal position purely from camera images, automatically resolving ambiguity about the robot orientation.

Stable Diffusion and GPT-3 are impressive, but most problems, physical or non-physical, don't have that much training data available. Algorithms are going to need to get more sample-efficient to achieve competence on most non-physical tasks, and as they do they'll be better at learning physical tasks too.

It's not like these algorithms are generating inhuman images for their own inhuman purposes and flooding the Internet with them. Every image produced by one of these algorithms is something a human requested, and, if they bother to share it, presumably finds valuable in some way. That's still firmly within "human culture."

The best online discussions I've had over the 20+ years I've been having them have almost all been in old phpBB-type forums or (further back) on Usenet, where there were no scoring systems. I don't believe this is a coincidence. Even though rationally people shouldn't care that much about fake Internet points, they do, and there's a tendency to pander to an understood consensus, either by not raising arguments you think will be unpopular in the first place, or by prematurely terminating exchanges where you've discovered the consensus opposes you.

So my preference would be to simply eliminate voting, or, failing that, to hide comment scores from non-moderators, including from comment authors.

I would suggest setting a max-width for post/comment bodies, rather than for the entire site. This fixes the readability issues with overly long lines in body text while still allowing all available horizontal space to be utilized so that comments don't become too narrow as they're nested a few levels deep.
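For concreteness, a minimal sketch of this suggestion, reusing the body-text selectors from my custom CSS above (the 60rem value is just what I use; tune to taste):

/* Cap line length for post/comment body text only; the thread layout keeps the full width. */
#post-text, .comment-text, .comment-write {
  max-width: 60rem;
}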

Part of what's making comment nesting difficult to visually parse is that your brain includes the expand/collapse control in the "box" occupied by a comment when you're looking at the top of the comment (because the control is at the top), but not when you're looking at the bottom of the comment. Since you're judging nesting by looking at the bottom of one comment vs. the top of the subsequent comment, the visual effect of this is that there's barely any indentation.

This image demonstrates the issue, with red lines drawn to show the edges your brain is paying attention to when judging nesting. Visually, there's only 4-5px of indentation.

This could be fixed by indenting more, by greatly reducing the visual weight of the expand/collapse control (e.g. by making it light gray), or by explicitly drawing boxes around comment bodies, which your visual system will latch onto in place of drawing its own boxes. Here's an illustration of the last approach, as implemented in my current custom CSS.
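In minimal form, the last two options look something like this (same class names as in my custom CSS above; the box-drawing rule is the core of what that snippet implements):

/* De-emphasize the expand/collapse control so it doesn't read as the comment's edge. */
.comment .comment-collapse-desktop {
  background-color: var(--gray-400);
}

/* Or: draw an explicit box around each comment body for the eye to latch onto. */
.comment .comment-body {
  border: 1px solid var(--gray-400);
}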

(New Reddit incidentally has the same problem, except with its avatar images instead of an expand/collapse control.)

You're applying mistake theory reasoning to a position mostly held by conflict theorists. I'm not aware of a paper previously addressing this exact issue, but there have been several over the years that looked at adjacent "problems," such as women being underrepresented in computer science, and that came to similar conclusions — it's mostly lack of interest, not sexism.

In that case, the explanation has been developed even further, such as by showing that the lack of interest is largely mediated by differences along the "people/things" axis, that women tend to be more people-oriented and men more thing-oriented cross-culturally, and that differences in career choice are actually larger in more gender-egalitarian societies (probably because those societies also tend to be richer, and thus career decisions are driven more by interest than by income considerations).

Activists using the lack of women in computing to argue for industry sexism don't care. They continue to make their case as if none of these findings exist. When these findings are mentioned, the usual response is to call whoever points them out sexist, usually while straw-manning even the most careful claims about interest as claims about inferiority. If the discussion is taking place in a venue where that isn't enough to shut down debate, and the activists feel compelled to offer object-level argument, they'll insist that the lack of interest (which some data suggests starts at least as early as middle school) must itself somehow be downstream from industry sexism.

You'll see exactly the same thing happen here. Activists demanding more women in leadership positions will not update on these findings. Most will never hear of them, because they certainly won't circulate in activist communities. When these findings are presented, their primary response will be to throw around accusations of sexism. If they engage at the object level at all, it will be to assert that these findings merely prove pervasive sexism in society is conditioning women to be less interested in leadership.

Charitably, activists in these areas see 'equity' (i.e. equality of outcomes between groups of concern) as a valuable end in itself. Less charitably, they're simply trying to advantage themselves or their favored identity groups over others. Either way, they're not trying to build an accurate model of reality and then use that model to optimize for some general goal like human happiness or economic growth. So findings like this simply don't matter to them.

Just as a point of clarification, it's Halle Bailey who's playing Ariel in The Little Mermaid, not Halle Berry. The latter is 56; casting her to play a character who's canonically 16, and whose teenage naivety and rebelliousness are her main personality traits, would provoke a whole different culture war fracas. (Bailey is 22, and 22 playing 16 isn't unusual by Hollywood standards.)

What I'm curious to see is what they're going to do with the plot. The prince falling in love with a mute Ariel on the basis of her physical appearance and friendly, accommodating behavior seems deeply problematic by present woke standards.

Yeah, that's definitely an improvement.

It's not just Amazon; you see this, for instance, with the night scene on the beach in the latest episode (Ep. 7) of House of the Dragon on HBO Max. Technically, this is a feature, not a bug.

Modern consumer TVs will generally boost non-HDR content, which is nominally supposed to have a peak brightness of 100 nits, to more like 250-350 nits, so this is what people are used to. HDR provides creators with more explicit control over brightness, and some choose to grade dark scenes well below 250-350, to create more contrast with bright scenes. In theory there's nothing wrong with this; it's how HDR is supposed to be used, really. And it's a cool effect if you're viewing in a blacked-out room. It just doesn't hold up well to brighter viewing environments.

You're more likely to see this with made-for-streaming content because with movies, the initial grading pass for theatrical release (non-HDR, because cinema projectors aren't bright enough for it) is likely to be done with the primary creative talent in the room, but the HDR pass will often be done later, by a colorist working without them there. Same thing for TV content old enough that it wasn't initially graded for HDR. A colorist working alone like this will usually aim for something that won't draw complaints, rather than pushing boundaries the way the primary creatives will.

It's fairly plausible that we'll solve aging in the next century. Statistically, people will still eventually die of other causes, but if you assume an average lifespan 20x what it currently is (ballpark based on the accidental death rate, probably conservative since this will likely decline), then, holding TFR constant, the equilibrium population will nonetheless be 20x as large.

And probably lifetime TFR will be substantially higher if people have centuries in which to have children. Have a 30-year career, then spend 20 family-focused years raising two kids, then 'retire' for 20 years… then do it all over again! Repeat that 70-year cycle over a 1600-year lifespan and you get ~23 cycles; at two kids per cycle, that's a TFR of ~45. And that assumes people don't decide to have larger families given artificial wombs, robot childcare, and lots more material wealth.

This is, to a large extent, self-referential. The NYT is always credible within the "mainstream" narrative because the NYT is a core part of the network of institutions that sets that narrative. But I've got scare quotes around "mainstream" because the NYT and allied outlets simply don't represent any sort of broad social consensus anymore. They represent the official line of establishment Democrats, with space occasionally given to more extreme leftist positions to keep activist groups on-side. Their function is to align elites within these spaces and sell Blue Tribe normies on what those elites want.

Republican politicians and other explicitly right-wing public figures and organizations can already almost entirely ignore the NYT, because none of their supporters care what it says. Only 14% of Republicans and 27% of independents have confidence in mass media to report accurately (source).

The danger for "mainstream" media in Musk's Twitter takeover is that Twitter has deep reach among Blue Tribe normies. Musk is going to allow 'unapproved' narratives to spread to and among them, and these narratives will in many cases likely outcompete those coming from above. This could have the effect of seriously undermining the ability of Blue Tribe elites to sell any large constituency on their views, with obvious electoral consequences.

I suspect actually that the right has been unable to create a right-wing equivalent of the NYT because that sort of centralized top-down narrative setting is a holdover from an earlier era. The natural means of narrative formation and spread today is social media. Traditionally structured media outlets can't hope to produce narratives as memetically fit as those honed on Twitter, so largely just write sensationalist stories built on top of those. It's not just the right; this describes younger media outlets on the left as well. Even the NYT itself is not immune to this. One now regularly sees echoes of Twitter discourse in its coverage.

(All of this is why establishment journalists were so eager to place themselves or their ideological allies in positions that allowed them to influence what ideas could spread on social media, via "trust & safety" councils, official labeling of "misinformation," etc. and why many seem to be practically unraveling in response to Musk getting rid of these things.)

Right-leaning and centrist political and business elites often doubt the NYT. Many regular people have NYT-incompatible views but simply don't pay enough attention to the NYT to notice.

The NYT is a product of today's (overwhelmingly blue tribe) cultural elites, so naturally they find it credible and reinforce this through the other organs of cultural production under their control. However, there's a huge amount that's not under their control, now including Twitter. They can refuse to grant these things status within their system, but people outside of that system have little reason to care.

Republican politicians and Republican-donor business executives (for starters) all unquestioningly believe the official narrative according to the NYT?

Twitter ad boycotts don't actually seem to be going so well. Apple and Amazon, sometimes rated as the #1 and #2 brands in the world, have reportedly already resumed advertising, which is basically a green light for anyone to do so. Casually scrolling my timeline for two minutes with personalized ads turned off, I see ads for Hyundai, Kia, Chevron, Robinhood, State Farm, a film called M3GAN (NBCUniversal), Hulu (also NBCUniversal), the NFL, ESPN, and Walmart.

Seeing NBCUniversal show up twice is pretty funny given that some of the dumbest anti-Musk rhetoric has come from their journalists. They literally can't even get the company they work for to not hand Musk money. Establishment journalists overestimate their power, and so do you.

People on both the left and the right who are interpreting this as a strike against "leftists" or "journalists" are missing the plot, I think. Musk freaking out after some whacko followed a car with his kid in it is not great, and it's not how policy should be made at Twitter, but it's not a mirror of the sort of deliberate viewpoint censorship that Twitter previously practiced. It's just not the same category of thing.

I don't think these ideological guardrails will be anything like universal, in the long run. Sure, when Apple reboots Siri on top of an LLM it's going to be "correct" like this, but if you're developing something to sell to others via an API or whatever, this kind of thing just breaks too many use cases. Like, if I want to use an LLM to drive NPC dialogue in an RPG, the dwarves can't be lecturing players about how racism against elves is wrong. (Which, yes, ChatGPT will do.)

If OpenAI sticks to this, it will just create a market opportunity for others. Millions of dollars isn't that much by tech startup standards.

I don't often see people mentioning that IQ differences shouldn't imply differences in moral worth -- which suggests to me that many people here do actually have an unarticulated, possibly subconscious, belief that this is the case.

Yes, but not only IQ differences. The belief that some people have more moral worth than others is quietly common. Most people, in whatever contrived hypothetical situation we'd like to pose, would save a brilliant scientist, or a professional basketball player, or a supermodel, over someone dumb, untalented, and unattractive.

This sort of thing does not, without much more, imply genocide or eugenics. (Though support for non-coercive forms of eugenics is common around here, and also quietly pretty mainstream where it's practicable and people therefore have real opinions rather than opinions chosen entirely for signaling value. The clearest present-day example is when clients of fertility clinics choose sperm or egg donors.)

Sure, we could look at the Great Leap Forward, cite Chesterton, and conclude that abandoning tradition is dangerous. But the Green Revolution also involved abandoning many traditional agricultural methods, and:

Studies show that the Green Revolution contributed to widespread reduction of poverty, averted hunger for millions, raised incomes, reduced greenhouse gas emissions, reduced land use for agriculture, and contributed to declines in infant mortality.

This is just one of many cases where radical change produced outcomes that are almost universally regarded as beneficial. We have also, for instance, reduced deaths from infectious disease by more than 90%. One doesn't have to look at too many graphs like this or this to understand why "change," as an idea, has so much political clout at the present moment.

The comment to which I was responding seemed to be about how open human societies in general should be to allowing change. This first world vs. third world angle wasn't present. The societies that adopted these new agricultural techniques benefited substantially from doing so. It would have been a serious mistake for them to reason that abandoning their traditional methods could have unanticipated negative consequences and so they shouldn't do this.

Anyway, the first world obviously adopted the same techniques earlier, also abandoning traditional agricultural methods. To a large extent these advances are the reason there is a first world, a set of large, rich nations where most of the population is not engaged in agricultural production.

There's always a tendency among activists to suggest things are terrible and improvement is only possible through whatever radical program they're pushing right now. In that context, it doesn't do to admit how much better things have gotten without that program.

But more broadly, had change reliably led to ruin over the last few centuries, surviving cultures would have strong norms against permitting it. Instead we have exactly the opposite — cultures that permitted change reliably outcompeted those that didn't, so successful cultures are primed to accept it.

On page 68 of the Course Framework document, we find that one of the "research takeaways" that "helped define the essential course topics" is that "Students should understand core concepts, including diaspora, Black feminism and intersectionality, the language of race and racism (e.g., structural racism, racial formation, racial capitalism) and be introduced to important approaches (e.g., Pan-Africanism, Afrofuturism)."

These "core concepts" are mostly from CRT or the cluster of ideologies to which it belongs. Presumably all variants of a course must teach its "core concepts." We can assume students will need to be familiar with these concepts to pass the AP exam and that the College Board will decline to approve syllabi that don't teach these concepts.

Why would anyone who believes this ideology to be harmful ever agree to allow this course to be taught? You might equally well argue it would be unreasonable to object to the introduction of an "AP White Studies" course in which the "core concepts" are tenets of white nationalism, on the grounds that as long as you make sure students are conversant on the Great Replacement (which will definitely be on the test), there's no rule saying you can't include other perspectives too.

Your hypothetical Important Ideas of the 20th Century course, and I think the way you're choosing to imagine the white nationalist course, aren't quite the same as what's happening here. You're ignoring the social and academic context in which this course is being introduced.

This isn't just the equivalent of a course having high school students learn the tenets of white nationalism — which most people would already find wildly objectionable, even if you don't — it's the equivalent of white nationalists themselves introducing such a course, in which students are not only taught about white nationalist beliefs but are presented with history interpreted through a white nationalist lens and taught how to perform such interpretation themselves. Also white nationalists get to write and grade the exam, can veto syllabi that deviate from their understanding of what the course should be, and know they can rely on most teachers interested in teaching the course either being white nationalists themselves or at least naively willing to accept white nationalist framing.

So, sure, in some extremely hypothetical sense a state where the consensus was against CRT could adapt this African American Studies course to "local priorities and preferences" by having students learn its CRT-derived "core concepts" via James Lindsay. Those students might even have a clearer picture of those concepts than they'd get from reading the often obfuscatory writings of their proponents! But in practice, no, you couldn't remotely do this. The College Board wouldn't approve your syllabus, on the contextually reasonable basis that it didn't represent African American Studies as taught in colleges. Your students wouldn't be able to demonstrate "correct" (that is, politically correct) understanding on open-ended exam questions.

Almost certainly, the "local priorities and preferences" language just cashes out as "you can add some modules about local history," not "you can refocus the course on questioning the validity of the analytical framework that underpins the entire academic field it's situated within."

Existing price points, product features, industrial design, branding, marketing, etc. are the result of elaborate, long-running efforts by automakers to segment the market in a way that they believe works to their benefit.

Raising prices significantly would cause a misalignment between what the industry has taught different segments of the market to want, and what people within those segments could actually afford. Automakers have probably decided it's not worth risking their carefully cultivated segmentation just to bank some short-term profits.