@KolmogorovComplicity
1 follower · follows 0 users · joined 2022 September 04 19:51:16 UTC
User ID: 126

There's always a tendency among activists to suggest things are terrible and improvement is only possible through whatever radical program they're pushing right now. In that context, it doesn't do to admit how much better things have gotten without that program.

But more broadly, had change reliably led to ruin over the last few centuries, surviving cultures would have strong norms against permitting it. Instead we have exactly the opposite — cultures that permitted change reliably outcompeted those that didn't, so successful cultures are primed to accept it.

It's fairly plausible that we'll solve aging in the next century. Statistically, people will still eventually die of other causes, but if you assume an average lifespan 20x what it currently is (a ballpark based on the accidental death rate, and probably conservative, since that rate will likely decline), then, holding TFR constant, the steady-state population will nonetheless be 20x as large.

And lifetime TFR will probably be substantially higher if people have centuries in which to have children. Have a 30-year career, then spend 20 family-focused years raising two kids, then 'retire' for 20 years… then do it all over again! That 70-year cycle repeats roughly 23 times over a 1600-year lifespan, which at two kids per cycle works out to a lifetime TFR of ~45. And that assumes people don't decide to have larger families given artificial wombs, robot childcare, and lots more material wealth.
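A quick back-of-the-envelope check (a sketch in Python; the phase lengths are just the illustrative numbers above):

career, family, retire = 30, 20, 20    # years per phase, as above
cycle = career + family + retire       # one 70-year cycle
lifespan = 1600
cycles = lifespan / cycle              # ~22.9 cycles
kids_per_cycle = 2
print(cycles * kids_per_cycle)         # lifetime TFR: ~45.7

# Steady-state population is roughly (births per year) x (average lifespan),
# so holding the birth rate constant, 20x the lifespan means ~20x the people.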

I would suggest setting a max-width for post/comment bodies, rather than for the entire site. This fixes the readability issues with overly long lines in body text while still allowing all available horizontal space to be utilized so that comments don't become too narrow as they're nested a few levels deep.

Probably for reasons similar to yours: status tends to play a somewhat bigger role in women's mate choices than in men's, and synths will always be very low status.

Hmm. What if there are designer models of synth that you have to know the right people to get? Would a woman conceptualize such a synth as high status in its own right, or merely as a reflection of her own status? Maybe if the designer is a high-status man and hand-picks which women can have the synths he designs, some of his status transfers to those synths?

There are some weird, unexplored corners to this issue.

Okay, now I have to try some of the prompts from the "Trurl's Electronic Bard" story in Lem's The Cyberiad out on Bing. Well, from the English translation.

Prompt:

Compose a poem—a poem about a haircut! But lofty, noble, tragic, timeless, full of love, treachery, retribution, quiet heroism in the face of certain doom! Six lines, cleverly rhymed, and every word beginning with the letter s!

Bing response:

Shears snip swiftly, slicing strands

Salon smells sour, stings senses

Stylist smiles smugly, says "surprise!"

Shocked, she shrieks, shuns scissors

Sorrowful, she sobs, seeks solace

Someday, she'll sue, secure satisfaction

Wow. That doesn't follow every instruction and it doesn't really hold a candle to what the machine writes in the story:

Seduced, shaggy Samson snored.

She scissored short. Sorely shorn,

Soon shackled slave, Samson sighed,

Silently scheming,

Sightlessly seeking

Some savage, spectacular suicide.

But it's considerably more clever than I was expecting or, I think, than what the average human could write on short notice. Fitting any coherent narrative into "six lines, every word beginning with the letter s" is pretty tricky already, and on top of that it checks off the haircut, the treachery, and the retribution.

Those responses would qualify as native ads, for which FTC guidelines require "clear and conspicuous disclosures" that must be "as close as possible to the native ads to which they relate."

So users are going to be aware the recommendations are skewed. Unlike with search, where each result is discrete and you can easily tell which are ads and ignore them, bias embedded in a conversational narrative won't be so easy to filter out, so people might find this more objectionable.

Also, LLMs sometimes just make stuff up. This is tolerable, if far from ideal, in a consumer information retrieval product. But if you have your LLM produce something that's legally considered an ad, anything it makes up now constitutes false and misleading advertising, and is legally actionable.

The safer approach is to show relevant AdWords-like ads, written by humans. Stick them into the conversational stream but make them visually distinct from conversational responses and clearly label them as ads. The issue with this, however, is that these are now a lot more like display ads than search ads, which implies worse performance.

The comment to which I was responding seemed to be about how open human societies in general should be to allowing change. This first world vs. third world angle wasn't present. The societies that adopted these new agricultural techniques benefited substantially from doing so. It would have been a serious mistake for them to reason that abandoning their traditional methods could have unanticipated negative consequences and so they shouldn't do this.

Anyway, the first world obviously adopted the same techniques earlier, also abandoning traditional agricultural methods. To a large extent these advances are the reason there is a first world, a set of large, rich nations where most of the population is not engaged in agricultural production.

Right-leaning and centrist political and business elites often doubt the NYT. Many regular people have NYT-incompatible views but simply don't pay enough attention to the NYT to notice.

The NYT is a product of today's (overwhelmingly blue tribe) cultural elites, so naturally they find it credible and reinforce this through the other organs of cultural production under their control. However, there's a huge amount that's not under their control, now including Twitter. They can refuse to grant these things status within their system, but people outside of that system have little reason to care.

Yeah, that's definitely an improvement.

The best online discussions I've had over the 20+ years I've been having them have almost all been in old phpBB-type forums or (further back) on Usenet, where there were no scoring systems. I don't believe this is a coincidence. Even though rationally people shouldn't care that much about fake Internet points, they do, and there's a tendency to pander to an understood consensus, either by not raising arguments you think will be unpopular in the first place, or by prematurely terminating exchanges where you've discovered the consensus opposes you.

So my preference would be to simply eliminate voting, or, failing that, to hide comment scores from non-moderators, including from comment authors.

I've fixed the backup issue and set up better monitoring so it will yell at me if it fails again.

Important backups should also send notifications on success; notifying only on failure risks a scenario where both the backup and the notification fail.

To be even safer, the script that sends the success notification should pull some independent confirmation that the backup actually occurred, like the output of ls -l on the directory the database dumps are written to, and include this in the notification text. Without this, a 'success' email technically means only that a particular point in a script was reached, not that a backup happened.
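As a minimal sketch of what that might look like (Python; the directory, addresses, and local SMTP server are all assumptions for illustration):

import smtplib
import subprocess
from email.message import EmailMessage

BACKUP_DIR = "/var/backups/db"       # hypothetical dump directory
RECIPIENT = "admin@example.com"      # hypothetical recipient

def send_backup_report():
    # Independent confirmation: capture the actual directory listing so the
    # email shows real files and timestamps, not just "this line was reached."
    listing = subprocess.run(
        ["ls", "-l", BACKUP_DIR], capture_output=True, text=True, check=True
    ).stdout

    msg = EmailMessage()
    msg["Subject"] = "Database backup report"
    msg["From"] = "backups@example.com"
    msg["To"] = RECIPIENT
    msg.set_content("Backup script finished. Dump directory contents:\n\n" + listing)

    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

if __name__ == "__main__":
    send_backup_report()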

Republican politicians and Republican-donor business executives (for starters) all unquestioningly believe the official narrative according to the NYT?

Twitter ad boycotts don't actually seem to be going so well. Apple and Amazon, sometimes rated as the #1 and #2 brands in the world, have reportedly already resumed advertising, which is basically a green light for anyone to do so. Casually scrolling my timeline for two minutes with personalized ads turned off, I see ads for Hyundai, Kia, Chevron, Robinhood, StateFarm, a film called M3GAN (NBCUniversal), Hulu (also NBCUniversal), the NFL, ESPN, and Walmart.

Seeing NBCUniversal show up twice is pretty funny given that some of the dumbest anti-Musk rhetoric has come from their journalists. They literally can't even get the company they work for to not hand Musk money. Establishment journalists overestimate their power, and so do you.

If I wanted to see memes of aichads owning artcels, where would I go? It’s really important for my mental health.

Isn't this one of those "I don't think about you at all" situations? There are many communities producing and sharing AI art without a care in the world for the people who are angry about it.

To feel magnetic lines as delicately as I can feel a breath disturb the little hairs on my arms.

This one can (sort of) be arranged:

Magnetic implant is an experimental procedure in which small, powerful magnets (such as neodymium) are inserted beneath the skin, often in the tips of fingers. [...] The magnet pushes against magnetic fields produced by electronic devices in the surrounding area, pushing against the nerves and giving a "sixth sense" of magnetic vision.

Part of what's making comment nesting difficult to visually parse is that your brain includes the expand/collapse control in the "box" occupied by a comment when you're looking at the top of the comment (because the control is at the top), but not when you're looking at the bottom of the comment. Since you're judging nesting by looking at the bottom of one comment vs. the top of the subsequent comment, the visual effect of this is that there's barely any indentation.

This image demonstrates the issue, with red lines drawn to show the edges your brain is paying attention to when judging nesting. Visually, there's only 4-5px of indentation.

This could be fixed by indenting more, by greatly reducing the visual weight of the expand/collapse control (e.g. by making it light gray), or by explicitly drawing boxes around comment bodies, which your visual system will latch onto in place of drawing its own boxes. Here's an illustration of the last approach, as implemented in my current custom CSS.

(New Reddit incidentally has the same problem, except with its avatar images instead of an expand/collapse control.)

Red tribe, to the extent that much of their work involves manipulating the physical world directly, may turn out to be relatively robust against AI replacement.

Perhaps, but look at DayDreamer:

The Dreamer algorithm has recently shown great promise for learning from small amounts of interaction by planning within a learned world model, outperforming pure reinforcement learning in video games. Learning a world model to predict the outcomes of potential actions enables planning in imagination, reducing the amount of trial and error needed in the real environment. [...] Dreamer trains a quadruped robot to roll off its back, stand up, and walk from scratch and without resets in only 1 hour. We then push the robot and find that Dreamer adapts within 10 minutes to withstand perturbations or quickly roll over and stand back up. On two different robotic arms, Dreamer learns to pick and place multiple objects directly from camera images and sparse rewards, approaching human performance. On a wheeled robot, Dreamer learns to navigate to a goal position purely from camera images, automatically resolving ambiguity about the robot orientation.

Stable Diffusion and GPT-3 are impressive, but most problems, physical or non-physical, don't have that much training data available. Algorithms are going to need to get more sample-efficient to achieve competence on most non-physical tasks, and as they do they'll be better at learning physical tasks too.

This is intended to make comment threads more readable, primarily by drawing borders around comments so the nesting structure is more obvious. Also adjusts comment thread whitespace. The last rule limits the bodies of posts and comments to a reasonable width, so lines of text aren't uncomfortably long on large screens. Only tested with the default theme, and better tested on desktop than mobile. Screenshot attached.

Edit: now with proper margins for the 'more comments' buttons that appear for deeply-nested posts.

/* Tone down the expand/collapse control so it reads as part of the
   comment's border rather than as its own box */
.comment .comment-collapse-desktop, .comment .comment-collapse-desktop:hover {
  border-left: none !important;
  background-color: var(--gray-400);
  padding-right: 7px;
  border-radius: 7px 0 0 0;
}

/* Highlight the control on hover */
.comment .comment-collapse-desktop:hover {
  background-color: var(--primary-light1);
}

/* Draw a box around each comment body so the nesting structure is obvious */
.comment .comment-body {
  border: 1px solid var(--gray-400);
  border-left: none;
  padding: 0;
}

/* Comment spacing; the negative margins pull adjacent borders together */
.comment, .comment-section > .comment {
  margin: 1rem -1px -1px 0;
  padding-left: 0;
  border-color: var(--gray-400) !important;
  border-width: 5px !important;
  border-radius: 5px 0 0 0;
}

/* Indent nested comments */
.comment .comment {
  margin-left: 1rem;
}

/* Subtle highlight for unread comments and the comment targeted by the URL anchor */
.comment-anchor:target, .unread {
  background-color: rgba(0, 230, 245, 0.1) !important;
}

/* Breathing room around the reply box */
.comment-write {
  padding: 1rem !important;
}

/* Proper margins for the 'more comments' buttons on deeply-nested threads */
.more-comments > button {
  margin: 1rem !important;
}

/* Limit body text width so lines stay readable on large screens */
#post-text, .comment-text, .comment-write {
  max-width: 60rem !important;
}


You can also add this rule if you want to change the font weight and size for post/comment bodies:

#post-text, .comment-text, .comment-write, #post-text p, .comment-text p, .comment-write p {
  font-size: 16px;
  font-weight: 450;
}


I believe the defaults are 14px and 400.

[Screenshot: /images/16623978378158753.webp]

both will stay incredibly low-status.

The thing is, there's a whole framework in place now for fighting this. Being gay used to be incredibly low-status. Being trans used to be incredibly low-status. Poly, kink, asexuality, etc. The dominant elite culture now says you're required to regard these as neutral at worst, and ideally as brave examples of self-actualization.

The robosexuals are absolutely going to try to claim a place within this framework and demand that people respect their preferences. Elite sexual morality has, at least formally, jettisoned every precept except consent, and there's not much of an argument against this on that basis.

Men tend to like sexual variety, so I'd expect even if the synths are pretty mind-blowing, most men will still be willing to sleep with real women just for a change of pace.

Whether they'll be able to have emotionally intimate relationships with real women is another matter, but if anything I'd be more concerned about that in the other direction. Women often complain that men aren't as emotionally expressive or supportive as they'd prefer. A GPT-4-class LLM that had been RLHF'ed into playing the male lead from a romance novel might already achieve superhuman performance on this task.

Commercial banks could offer higher interest rates on deposits, lend out their own capital, or issue bonds. If this didn't provide sufficient funding for whatever amount of lending the government wanted to see, the government itself could loan money to banks to re-lend.

Really, though, the easiest patch to the system would be for FDIC insurance to (officially) cover unlimited balances, or at least to scale high enough that only the largest organizations had to worry about it. It makes no sense to require millions of entities (if you include individuals of moderate net worth) to constantly juggle funds to guard against a small chance of a catastrophic outcome whose probability most of them aren't well positioned to evaluate. That's exactly the sort of risk insurance is for.

If the concern is that this will create moral hazard because banks that take more risks will be able to pay higher interest rates and fully-insured depositors will have no reason to avoid them, the solution is just for regulators to limit depository institutions to only taking on risks the government is comfortable insuring against. Individuals should be allowed to take on risk to chase returns, but there's no compelling reason to offer this sort of exposure through deposit accounts in particular. Doing so runs contrary to the way most people mentally model them or wish to use them.

Logically, shouldn't we expect powerful absolutist/totalitarian states to dominate, ceteris paribus?

Totalitarian systems tend to suppress innovation, either deliberately because the powerful fear the social changes they might produce, or unintentionally by restricting ideas and freedom of action. For most of history, innovation was sufficiently slow that this wasn't important, or at least took centuries to catch up with a given society, but the industrial revolution rapidly accelerated the process. Today a nation can become economically and militarily uncompetitive in as little as 10 or 20 years. That's fast enough to register on the planning horizons of current leaders.

China has done well for itself economically over the last few decades, but this was mostly catch-up growth, the adoption of already-existing tech. Here, there's no need to have a society that fosters innovation through freedom of action and a free exchange of ideas, since you're merely deploying the products of innovation that took place elsewhere. There's also much less risk of social disruption, as you can look at the social changes that particular technologies created in countries that deployed them earlier, and shape deployment to ameliorate those (see e.g. the Great Firewall). Incrementally improving an existing technology, such as by refining the manufacturing process for an existing product, has similar properties.

China has yet to demonstrate it's capable of fundamental innovation, of being first to invent and deploy a basically new thing. So far, the signs don't look too great — a tech industry crackdown, a cryptocurrency ban, a requirement for government pre-approval of individual App Store apps. It's hard to believe China won't carefully restrict AI capabilities to a few trusted institutions. I could see them objecting even to something as anodyne as Stable Diffusion before too long, given that it will happily generate as many offensive caricatures as you'd like of Xi Jinping.

In response to your first point, Carmack's "few tens of thousands of lines of code" would also execute within a larger system that provides considerable preexisting functionality the code could build on — libraries, the operating system, the hardware.

It's possible non-brain-specific genes code for functionality that's more useful for building intelligent systems than that provided by today's computing environments, but I see no good reason to assume this a priori, since most of this evolved long before intelligence.

In response to your second point, Carmack isn't being quite this literal. As he says, he's using DNA as an "existence proof." His estimate is also informed by looking at existing AI systems:

If you took the things that people talk about—GPT-3, Imagen, AlphaFold—the source code for all these in their frameworks is not big. It’s thousands of lines of code, not even tens of thousands.

In response to your third point, this is the role played by the training process. The "few tens of thousands of lines of code" don't specify the artifact that exhibits intelligent behavior (unless you're counting "ability to learn" as intelligent behavior in itself), they specify the process that creates that artifact by chewing its way through probably petabytes of data. (GPT-3's training set was 45 TB, which is a non-trivial fraction of all the digital text in the world, but once you're working with video there's that much getting uploaded to YouTube literally every hour or two.)
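To make the code-vs-artifact distinction concrete, here's a toy sketch in Python (nothing GPT-like, just the shape of the thing): a few lines of "training code" that chew through data and leave behind a learned parameter, the way the real systems leave behind hundreds of gigabytes of weights that appear nowhere in their source.

import random

random.seed(0)
# Stand-in for the training corpus: samples of the pattern y = 2x.
data = [(u := random.random(), 2 * u) for _ in range(10_000)]

w = 0.0                         # stand-in for billions of weights
for x, y in data:               # the training process chews through the data
    w += 0.1 * (y - w * x) * x  # nudge w toward whatever explains the data

print(w)  # the learned artifact: ~2.0, written nowhere in the code itself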

Search looks to be 58% of Google's total revenue, 72% of advertising revenue.

I'd bet search ads also have higher margins than YouTube ads or the non-ad revenue streams.

But if you go hiking occasionally the AI can sell you tents and backpacks and cabin rentals.

Really, outcomes in most markets aren't nearly as perverse as what we see with Tinder. Chrome, for instance, doesn't intentionally fail to load web pages so that Google can sell me premium subscriptions and boosters to get them to load. Unlike Tinder, Chrome is monetized in a way that doesn't provide an incentive for its developer to intentionally thwart me in my attempts to use it for its ostensible purpose, and there's enough competition that if Google tried this people would stop using it.

Or is the claim that the "few tens of thousands" of lines of code, when run, will somehow iteratively build up on the fly a, I don't know what to call it, some sort of emergent software process that is billions of times larger and more complex than the information contained in the code?

This, basically. GPT-3 started as a few thousand lines of code that instantiated a transformer model several hundred gigabytes in size and then populated this model with useful weights by training it, at the cost of a few million dollars worth of computing resources, on 45 TB of tokenized natural language text — all of Wikipedia, thousands of books, archives of text crawled from the web.

Run in "inference" mode, the model takes a stream of tokens and predicts the next one, based on relationships between tokens that it inferred during the training process. Coerce a model like this a bit with RLHF, give it an initial prompt telling it to be a helpful chatbot, and you get ChatGPT, with all of the capabilities it demonstrates.
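In loop form, that inference process is just autoregression. A sketch with a stub in place of the real model (the vocabulary and predict_next function here are invented for illustration):

import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "."]

def predict_next(tokens):
    # Stub: a real model computes a probability distribution over the
    # vocabulary from learned weights; this just picks pseudo-randomly
    # per prefix so the sketch runs.
    return random.Random(hash(tuple(tokens))).choice(VOCAB)

def generate(prompt, n_tokens=8):
    tokens = list(prompt)
    for _ in range(n_tokens):
        tokens.append(predict_next(tokens))  # each output joins the next input
    return tokens

print(" ".join(generate(["the", "cat"])))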

So, by way of analogy: the few thousand lines of code are the brain-specific genes, the training/inference processes occupying hundreds of gigabytes of VRAM across multiple A100 GPUs are the brain, and the training data is the "experience" fed into the brain.

Preexisting compilers, libraries, etc. are analogous to the rest of the biological environment — genes that code for things that aren't brain-specific but some of which are nonetheless useful in building brains, cellular machinery that translates genes into proteins, etc.

The analogy isn't perfect, but it's surprisingly good considering it relies on biology and computing being comprehensible through at least vaguely corresponding abstractions, and it's not obvious a priori that they would be.

Anyway, Carmack and many others now believe this basic approach — with larger models, more data, different types of data, and perhaps a few more architectural innovations — might solve the hard parts of intelligence. Given the capability breakthroughs the approach has already delivered as it has been scaled and refined, this seems fairly plausible.