roystgnr
"information theory broadly defines some adjacent bounds."

Don't forget physics. We're probably nowhere near the limit of how many computational operations it takes to get a given "intelligence" level of output, but whatever that limit is will combine with various physical limits on computation to turn even our exponential improvements into more logistic-function-like curves that will plateau (albeit at almost-incomprehensible levels) eventually.
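To put toy formulas on that (my sketch, not anything from the parent comment): exponential and logistic growth are indistinguishable early on, which is part of why the eventual plateau is so easy to forget:

$$ x_{\mathrm{exp}}(t) = A e^{kt}, \qquad x_{\mathrm{logistic}}(t) = \frac{L}{1 + e^{-k(t - t_0)}} $$

For $t \ll t_0$ the logistic curve behaves like $L e^{-k t_0} \cdot e^{kt}$, i.e. just another exponential; only near and past $t_0$ does the ceiling $L$ (here, whatever the physical limits on computation turn out to be) start to dominate.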

"at the scale of economics, 'singularity' and 'exponential growth' both look darn similar in the near-term, but almost all practical examples end up being the latter, not the former."

"Singularity" was a misleading choice of term, and the fact that it was popularized by a PhD in mathematics who was also a very talented communicator, quoting one of the most talented mathematicians of the twentieth century, is even more bafflingly annoying. I get it, the metaphor here is supposed to be "a point at which existing models become ill-defined", not "a point at which a function or derivative diverges to infinity", but everyone who's taken precalc is going to first assume the latter and then be confused and/or put-off by the inaccuracy.

That said, don't knock mere "exponential growth", or even just a logistic function, when a new one outpaces the old on a much shorter timescale. A few hundred million years ago we got the "Cambrian explosion", and although an "explosion" taking ten million years sounds ridiculously slow, it's a fitting term for accelerated biological evolution in the context of the previous billions of years of slower physical evolution of the world. A few tens of thousands of years ago we got the "agricultural revolution", so slow that it encompasses more than all of written history but still a "revolution" because it added another few orders of magnitude to the pace of change; more human beings have been born in the tens of millennia since than in the hundreds of millennia before. The "industrial revolution" then outdid the previous tens-of-millennia of cumulative economic activity in a few centuries.

Can an "artificial superintelligence revolution" turn centuries into years? It seems like there's got to be a stopping point to the pattern very soon (years->days->tens-of-minutes->etc actually would be a singular function, and wouldn't be enabled by our current understanding of the laws of physics), so it's perhaps not overly skeptical to imagine that we'll never even hit a "years" phase, that AI will be part of our current exponential, like the spreadsheet is, rather than the start of another vastly-accelerated phase of growth.

You're already pointing out some evidence to the contrary, though:

"real humans require orders of magnitude less training data --- how many books did Shakespeare read, and compare to your favorite LLM corpus --- which seems to mean something."

This is true, but what it means from a forecasting perspective is that there are opportunities beyond simple scaling that we have yet to discover. There's something about our current AI architecture that relies on brute force to accomplish what the human brain instead accomplishes via superior design. If we (and/or our inefficient early AIs) manage to figure out that design or something competitive with it, the sudden jump in capability might actually look like a singular-derivative step function of many orders of magnitude.

I've only looked at his introductory post, so hopefully he addresses my point later, but the introductory post would seem to be the natural place to discuss why we don't have more amendments. He does discuss that question, but with what I feel is only one of multiple answers:

"...you need about 85%+ public support to ratify a constitutional amendment. It’s pointless because, if you could ever get that much public support for your divisive policy question, you’d no longer need a constitutional amendment, because you’d have won the argument and all the relevant laws already."

This is true for many object-level laws, but there are loads of exceptions. An Amendment allows you to credibly precommit not to change laws later, which makes it attractive for a number of tasks:

  1. Rules intended to protect human rights, where we fear our descendants might backslide enough to repeal a mere law but not enough to overturn an amendment.
  2. Rules intended to be compromises via universalizing principles, for which a law isn't enough to enforce the compromise. If I hate being unable to condemn some right-wing ideology and you hate being unable to condemn some left-wing ideology, I might hate the thought of losing my freedom to censors half the time more than I relish the thought of the same happening to you the other half of the time, and something like the First Amendment is a win for both of us, even if we couldn't get a coalition to protect either ideology alone. In a bad enough Culture War making such a principle into law may feel like it's just giving the other side a chance to get a 4+ year head start on attacking us again when they repeal the law first while they're in power, but an amendment might have more teeth.
  3. Rules which cover the biggest meta-level questions of how the mechanisms of government should work, the cases where the constitution already specifies a mechanism that can't be overridden by a mere law. The House Rules Committee can do a lot, but it can't reduce the requirements for overriding a Presidential veto (his proposal #1), or expand the House to 11,000 members (his #4), etc.

And pretty much every one of his proposals falls into category 3 here, doesn't it? He's not suggesting a "Write the Roe v Wade penumbras into the umbra" amendment, or a "define personhood as starting with conception" amendment; all his stuff is procedural at a high enough level that you can't do it without an Amendment.

So ... why don't we do any of those Amendments, either, anymore? I'd say it's a combination of our increasing political polarization with the realization that, so long as we're trapped by Duverger's Law into a two-party system, every meta-level change is also a potential change in the equilibrium point of that system, a zero-sum game. Either more easily overridden vetoes will mostly help the Democrats, in which case you're not going to get a supermajority because you can't persuade enough of the Republican-leaning half of the country to agree, or they will mostly help the Republicans, in which case you're not going to get a supermajority because you can't persuade enough of the Democratic-leaning half of the country to agree. Perhaps at some point we'll have enough people sick of both parties that that will be a voting bloc worth catering to? But until then this is all a sadly academic discussion.

This is a pure "scissor statement" video, isn't it? It seems clear that the driver isn't trying to hit the cop, since she's steering away from him and away from the direction he's moving in, but even with three angles to look at, it's not clear to me whether she hits him anyway (I think not, but I won't be surprised if badge cameras prove me wrong) or whether she would have hit him had he not already been dodging to one side (I think so, but again awaiting further evidence).

I think what makes up my mind is that I don't think the situation was clear to the officer either, not if he's having to make this decision so fast that his detractors are having to replay the clips in slow motion. In hindsight he could have done better, but we want to be able to hire even average cops, for a job where they'll frequently be surrounded by people who hate them and try to kill them, and that's not going to be possible unless we take seriously the sorts of "mens rea"/"reasonable person" requirements we should have to prosecute what might be a natural attempt at self-defense.

Back in the day, Saturday Night Live recognized that this was a funny joke:

"I think a good gift for the president would be a chocolate revolver. And since he's so busy, you'd probably have to run up to him and hand it to him."

It wasn't because they were a bastion of right-wing television, or because they thought Clinton had given murderous orders to his Secret Service agents; it was because they recognized that it would be ridiculous to do something that looks so threatening, even something actually innocent, without anticipating the likely consequences.

"I don't even think that Yudkowsky was the best thinker on LessWrong. Both David Friedman and Scott Alexander (when he was on) surpass him easily IMO."

This is trivia, not science, but for kicks I decided to see how many LessWrong quotes from each user I've found worth saving over the years: Yudkowsky wins with 18 (plus probably a couple more; I didn't bother making the bash one-liner here robust), Yvain (Scott) takes second with 10, and while I have dozens of Friedman quotes from his books and from other websites, I can't find one from LessWrong that I saved. (Was Friedman just a lurker on LessWrong?)

On the other hand, surely "best" shouldn't just mean "most prolific", even after a (grossly-stochastic) filter for the top zero-point-whatever percent. Scott is a more careful thinker, and David more careful still, and prudence ought to count for something too ... especially by Yudkowsky's own lights! We praise Newton for calculus and physics and downplay the alchemy and the Bible Code stuff, but at worst Newton's mistakes were merely silly, just wastes of his time. Eliezer Yudkowsky's most important belief is his conclusion that human extinction is an extremely likely consequence of the direction of progress currently being pursued by modern AI researchers, who frequently describe themselves as having been inspired by the writings of: Eliezer Yudkowsky. I'm not sure how that could have been avoided, since the proposition of existential AGI risks has the proposition of transformative AGI capabilities as a prerequisite and there were naturally going to be people who took the latter more seriously than the former, but it still looks superficially like it could be the Ultimate Self-defeat in human history, in both senses of that adjective.

"PTSD, symptoms of which were recorded in the medical literature as far back as ancient Greece, as a mechanistic biological response to extreme injury."

Huh. Learn something new every day.

"An Athenian, Epizelos son of Kouphagoras, was fighting as a brave man in the battle when he was deprived of his sight, though struck or hit nowhere on his body, and from that time on he spent the rest of his life in blindness. I have heard that he tells this story about his misfortune: he saw opposing him a tall hoplite, whose beard overshadowed his shield, but the phantom passed him by and killed the man next to him." - Herodotus, "Histories"

I know "PTSD" used to be called "combat hysteria", then "war neurosis", then "battle hypnosis" and "shell shock", and with one name or another it seems to have been common for well over a century ... but I'd been told it's hard to find under any name in accounts of ancient wars. It was tempting to wildly speculate whether the reason for such a strange interesting fact might be technological (after explosive overpressure we can see physical brain bruising, not just psychological damage; we now experience most casualties from impersonal random explosions, not other humans in direct combat) or cultural (we now see a diagnosis of psychological trauma as a first step toward healing, rather than an insulting additional attack to be avoided; we now see war as a necessary evil, rather than a glorious good) or social (the ancient veterans that historians focus on were often large proportions of the upper class; modern veterans are more likely to be isolated). But it's easy to forget that often the explanation for a strange interesting fact is that false and exaggerated "facts" can go viral if they're sufficiently strange and interesting.

@yofuckreddit: Ask your doctor about dosages, too, when you go in for the surgery. When my son had a broken bone healing, the osteopath recommended levels that, although sold over the counter in the vitamin aisle, still had "don't take this without talking to a doctor about it" in the fine print on the jar. Unless you're super prone to kidney stones or something, long-term concerns about hypercalcemia can probably take a temporary back-burner to short-term bone healing improvements.

Did you see the video? I couldn't find a link (Voat's shut down, and a low-res thumbnail plus headline wasn't sufficient for my Google-fu), but I'd have guessed it would be anecdata rather than anything with which we could hope to calculate a frequency.

That third headline was easy enough to find the context for, though. It's on the witchiest-looking website you could imagine, and it's a little hyperbolic (house arrest with an electronic monitor isn't quite "roaming" "freely"), but it's hard to say that it was too hyperbolic, with at least a couple years of hindsight:

"One condition of home-arrest required Huff to seek preapproval from a parole officer before having contact with children. But Huff was temporarily returned to prison in late 2018 after an eight-year-old girl was found in his apartment along with her parents.

"In January 2019 the clemency board unanimously revoked Huff’s home-arrest and made his return to prison permanent. His only option now is to reapply once a year for release."

Is Wall Street allowed to get much money involved yet? Polymarket.com still lists the US as "blocked"/"completely restricted from accessing Polymarket", Wiki claims the block lasted until December 2, 2025 after Trump "eased the regulatory environment" (with a link to a headline that only mentions trading on election results), and I can't find anything that lists what the current regulatory environment actually permits.

"and build a base of clientele and advance"

Do you know if there are any good stats on what percent of lawyers are making excellent livings after they take some time to advance? New lawyer salaries have been scarily bimodal for decades now, but it's hard to tell the extent to which that's a career-long problem rather than something the lower half of the distribution just has to work their way out of over 5 or 10 years.

That's not a bad point. I'm old enough that "find a spouse before OKCupid gets bought out" was actually an actionable strategy for me, so I'm hearing the awful reports of modern dating apps second-hand, and I don't actually hear anything about modern non-app-based online dating.

Does it really exist, though? Naively, I'd have expected random Discord channels to be subject to the same social dynamics as work/school/etc: now that the apps are the "Find Your Dates Here" Schelling Point, more and more of the younger generations are starting to consider any but the most slow/careful flirting in ostensibly non-romantic contexts to be intrusive and creepy.

Thank you! I probably saw the reference in that very post and then forgot that I had.

Huh - it goes to that "Removed By Moderator" page for me too. I swear I just copied and pasted straight from my browser's address bar. Actually, this is weird - I copied and pasted a www.reddit.com address, I see a www address in the markdown when I hit "Edit" on it, but when I hover over the link I see an old.reddit.com address. Looks like that might be an rDrama bug "feature"? Then, while the www.reddit.com address works, the old.reddit.com address doesn't.

Try this link for the full post rather than just the png - both www.reddit.com and old.reddit.com work for that so it should survive any mangling.

"Rather unlikely. What percentage of college-educated middle-class women are on dating apps anyway?"

For heterosexual couples as a whole, a majority now met online. I don't know if there's any way to get a breakdown by education level for internet dating specifically, but more-educated people have never been less likely than less-educated people to use the internet in general.

"resort to online dating apps"

Are you posting from 1999? Boy, are you in for some nasty surprises [edit: this link seems to work better]. Online dating never gets any better, mind you; the alternatives just keep getting worse. I recommend trying to find a spouse before OKCupid gets bought out. Oh, and if you find yourself on a plane getting hijacked, ignore all the "just cooperate and don't get hurt" protocol; they're not just flying to Cuba next time.

"I like the general idea of having kids, and I think I'd be decent at raising older kids, but with little kids I'm totally lost."

I was like this when I was young, but I didn't realize what became obvious in hindsight: your own little kids will be your own little kids. They'll be genetically half you and half your spouse, and environmentally some mix in which (especially when they're little) you're still a plurality.

My oldest kid binge-read the Harry Potter series when she was 5 and decided that my reading to her for 20 minutes a night was way too slow. When her little brother was 8 or 9 he thought my home group-theory lessons during Covid were amazing. Their little sister picks Babylon 5 episodes for her every turn at Family Movie Nights lately. Now, you may be thinking, "wow, what unbelievable geeks", but that's exactly the point - I'm kind of an annoying geek, and my wife isn't annoying, and it's not much of a coincidence that we got a trifecta of exactly the sort of non-annoying geeks we're thrilled with, even if they might not stand out as positively to other random adults. Whatever personality/subculture you may have and/or have fallen in love with instead, that's what you can probably expect instead, and even if you're not a big fan of little kids in general you might be much more enamored of your own little kids in particular.

"What's stopping him from letting his kids be free range? The restrictions feel self-imposed."

The classic viral image showing children's shrinking ranges comes from this Daily Mail article in 2007. The article seems to agree with, and to make a good case for, the idea that the increasing restrictions are unnecessarily self-imposed by parents. I mostly agreed at the time. It wasn't until years later that I saw that map again and did a double-take at the place names...

"Four year old and toddler is a bit annoying, because the four year old talks much better than the toddler, making mutual play a bit difficult"

My then-2-year-old daughter got so upset once when she was trying to play with her younger cousin: "[Cousin's name] not listening to me!!!" "Honey, she's 1. She barely understands you." It was short-lived frustration, though.

"Do you believe in therapists?"

I don't just believe in them, I've seen them!

More seriously: talk therapy rescued one of my kids from some crippling anxiety issues, but it wasn't the first therapist we tried who did so. For her it was the second, but I've heard that that's better odds than average. Therapists are like teachers: quality varies way more than it should, and someone sufficiently motivated can get by with self-study alone, but if you need one then you'd be silly to deny it.

To "accelerate the contradictions" (or in other popular phrasing "heighten the contradictions" or "sharpen the contradictions") seems to be a Leninist idea originally. And to be fair, Lenin did get the total revolution he wanted rather than the more incremental improvements that occurred (and that leave extremists fuming) elsewhere.

There seems to be a continuous spectrum of these ideas in Marxist (and probably wider leftist) thought, though. The idea that you can bring about The Revolution faster by making things worse shades into the idea that you just shouldn't try to delay The Revolution by trying to make things incrementally better, and both are similar to but distinct from the idea that you can't do much either way because prophecy psychohistory dialectical materialism proves that The Revolution will come when it's destined to regardless.

That article covered "meals, housing and autism therapy fraud cases" being prosecuted. The video here was about childcare fraud going unprosecuted.

"conservatives by and large don't read."

Good thing that problem only afflicts Them, not Us!

"There's nothing in there that can't be improved upon by a writer working with an LLM."

There's nothing in there that can't be improved as prose, but are you entirely sure that the changes will be improvements as game writing?

I like Table Top RPGs, despite them being worse than some Computer RPGs in every way but one, and the one way they're better is the way that matters here: in a TTRPG, your players don't have to be railroaded nearly so strictly. When the players try to dig deep into the interactions with some character, there can always be something rewarding they can dig deep into. Once the Game Master runs out of official quest writeup material, he can start to improvise, and those improvisations can actually affect all subsequent gameplay. It's quite common for players to develop an attachment to someone like that elderly forgotten veteran NPC, who the GM can then slot into other parts of the story, on the fly, as a recurring side character, making the story much more fun and interesting. In the longest-running game I run, my players have one originally-mid-level mook who's managed to escape enough fights to become a recurring villain (with some hilarious banter), and even have another three mooks who (via vast interleaved efforts of diplomacy and subterfuge) they've managed to semi-reform and (despite some lingering head-butting with PCs and each other) recruit as underlings. The written adventures for this campaign included some designed-as-recurring-character NPC friends and villains, too, of course, but these four were all characters who were written with at most a short backstory but who were expected to be eliminated in the first encounter if the players had been aggressive enough and their dice rolls lucky enough. We're all glad they weren't.

In a CRPG ... do you want to let the AI rewrite your game on the fly, like a GM does, not just write things you can review in advance? Writing on the fly is probably an AGI-complete problem. If you've got an LLM that you trust not to make its part of your game worse than your part, then you might as well let it write your part too. But if all your writing is done in advance, that won't let you have long-term effects on the story. The possibilities you'd have to write for grow exponentially with elapsed gameplay, as more story elements arise and more combinations accumulate in which they might affect Ascended Extras' actions. If you instead do a lot of writing in advance without letting the now-fleshed-out side characters have long-term effects on the story, that just tricks the player with false affordances: instead of interacting with a world where ten characters have deep dialogue trees and obviously are critical to the story, and another hundred characters quickly get to a loop with nothing new to say and are obviously scenery after that, you'd be giving them a world where ten characters have deep dialogue trees and are critical to the story while another hundred characters have deep dialogue trees but are still going to be plot dead-ends after those trees are finally exhausted.

Roger Ebert infamously took the stance that "video games can never be art", which was nonsense, but the interactivity of games is a bit of a two-edged sword: on the one hand it's an additional capacity that can make video games much better art than non-interactive media, but on the other hand it puts the artist even more at the mercy of the audience than is the case in other media. Someone may fail to understand what you intended them to understand from your painting, but at least once they're part of your painting's audience they'll see what you intended them to see.

If you want to make art in the form of a game, however, everyone in your audience is also your collaborator, and your job isn't just to make them understand a finished product, it's to guide them into helping properly finish that product with you, and part of that guidance is making it easier to see which parts of the work they should focus on the most and which are just intended to be out-of-focus background. Making the background more beautiful would be an improvement, all other things being equal, but making it more beautiful without accidentally bringing it to a spot in the foreground where it shouldn't be is much trickier. The reason why new fiction writers always have to be told to be unafraid to "kill your darlings" is that it's true but non-obvious that most authors' writing can be best improved not by expanding it but by cutting it, removing the digressions and infodumps and red herrings and detached side plots and on and on until you're left only with the things that most contribute to the story. Game writers (and level designers, and so on) have a much harder problem, because even if you avoid handing the player a pointless distraction the player might seek it out anyway, and they'll enjoy the game less as a result even if they don't understand why.

I recommend playing Half-Life 2's Episode One and Episode Two with Director's Commentary - some of the most interesting tidbits there are tricks with which they coax players into actions as simple as looking in the right direction at the right time to see a scripted event, while not actually taking any control away from the player or even letting most players realize they'd been maneuvered into making the decisions they did.

Public Service Announcement for anyone who might want to read Project Hail Mary (the book) and hasn't yet: the trailers for Project Hail Mary (the movie) contain major spoilers, for something like a quarter of the most interesting plot developments in the book, without the context that made those developments as interesting as they were.

My kids and I had already read the book, but I feel bad for anyone who would have wanted to read it but didn't know about it or just didn't get to it yet.

Anyone who was a fan of The Martian and would also enjoy something a little less dry (at the cost of being less grounded; this time there's a vital plot device that's a much bigger stretch than "implausibly strong dust storm") should read Project Hail Mary ... and if somehow you've also avoided seeing any of the movie trailers yet, you should read Project Hail Mary quickly, and until you're at least halfway through the book you shouldn't see Ryan Gosling's face (possibly disguised by a beard - don't be fooled!) pop up on a screen without immediately closing your eyes, covering your ears with your hands, and loudly saying "La la la la" for the next three minutes.

Just to be pedantically clear (I saw your comment on the chronological page and didn't realize it was correct in context until I looked at the parent), a null pointer dereference invokes well-defined behavior of bounded badness in Java. In C, a null pointer dereference is Undefined Behavior and so is still allowed to lead to arbitrary code execution, both in theory and in practice.
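A minimal sketch of the C/C++ side (my example, not the parent's): because the dereference is UB, the optimizer is entitled to assume it never happens and delete code accordingly.

```cpp
// In C or C++, dereferencing a null pointer is undefined behavior, so the
// compiler may assume p is non-null once *p has executed. Java, by contrast,
// throws a NullPointerException: bounded, well-defined badness.
int read_value(const int *p) {
    int v = *p;            // UB if p is null
    if (p == nullptr) {    // a modern optimizer may delete this branch:
        return -1;         //   "p was already dereferenced, so it can't be null"
    }
    return v;
}
```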

How likely would that have been? I know international relations are fickle, but they usually only turn on a dime in cases where an alliance of convenience is papering over underlying hostility or where one party's government is utterly replaced by hostile opposition.

"Should the C compiler let you declare a function that returns a value and then let you omit the return statement? Is that mistake your fault or the language's fault? Formally doing this is undefined behavior but that does not always mean crash!"

It's the language's fault (it probably should never have been allowed by the standard, and if it weren't, the compiler could catch it by default), and it's your fault (you shouldn't have written that), and it's other language users' fault.

That third one might take a bit of explanation.

Any decent compiler these days will warn you about that error at compile time, and will stop the compilation if you use a flag like -Werror to turn warnings into compile-time errors. So just always use -Werror, right? We could all be writing a safer version of C without even having to change the C standard! Well, "look for functions that declared a return value but didn't return one" is an especially easy error for a compiler to catch, but there are others that are subtler and trickier to catch. Maybe you add -Wall to get another batch of warnings, and -Wextra with another batch, and you throw in -Wshadow and -Wunused-value and -Wcast-qual and -Wlogical-op and ... well, that's a great way to write your code, right up until you have to #include someone else's code. At some point your OCD attention to detail will exceed that of the third-party authors who wrote one of your libraries, and you can't always fault them for it (these warnings are often for code that looks wrong, whether or not it is wrong - even omitting a return statement could probably save one CPU cycle in cases where you knew the return value wasn't going to be used!). So, I have special headers now: one to throw a bunch of compiler pragmas before #include of certain third-party headers, to turn off my more paranoid warning settings before they can hit false positives, then another to turn all the warnings back on again for my own code, like a primitive version of "unsafe".
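A minimal sketch of those headers, assuming GCC/Clang-style diagnostic pragmas (the file names and the particular warnings here are just illustrative):

```cpp
// quiet_warnings_begin.h -- include before a noisy third-party header
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wshadow"
#pragma GCC diagnostic ignored "-Wcast-qual"
#pragma GCC diagnostic ignored "-Wunused-value"
// ...one "ignored" pragma per paranoid warning the third-party code trips...

// quiet_warnings_end.h -- include afterward to restore full strictness
#pragma GCC diagnostic pop

// Typical use, sandwiching the offending include:
//   #include "quiet_warnings_begin.h"
//   #include <third_party/header.h>      // hypothetical noisy header
//   #include "quiet_warnings_end.h"
```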

I was once paid to port C code from a system that allowed code to dereference null pointers (by just making the MMU allow that memory page and filling it with zeroes). And so the C code written for that system used that behavior, depending on foo = *bar; to set foo to 0 in cases where they should have written foo = bar ? *bar : 0; instead. As soon as you give people too much leeway, someone will use it, and from that point onward you're a bit stuck, unable to take back that leeway without breaking things for those users. I like the "nasal demons" joke about what a compiler is allowed to do when you write Undefined Behavior, but really the worst thing a compiler is allowed to do with UB is to do exactly what you expected it to, because then you think you're fine right up until the point where suddenly you're not.

"Types should be explicitly written out in code! They're a very important part of the logic!"

Sometimes types shouldn't be explicitly written out in code because they're a very important part of the logic. If I write generic (templated) code that returns the heat capacity of a gas mixture at a given temperature, sometimes I just want that temperature to be a double so I can get a quick answer for a point's heat capacity, and other times I want it to be a Vector<DualNumber<double, SparseVector<int, double>>> so I can get SIMD or GPU code that gives me a hundred points' heat capacities as well as their derivatives with respect to the input data that was used to calculate temperature. There's basically no way I'm writing intermediate data types for such a calculation as anything but auto.
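A stripped-down sketch of that pattern, with a made-up polynomial standing in for the real property evaluation (any dual-number or SIMD vector type with the usual arithmetic operators could be substituted for T):

```cpp
// Generic over the numeric type T: T can be plain double for a quick point
// evaluation, or a derivative-carrying/SIMD type for the heavy cases.
template <typename T>
T heat_capacity(const T & temperature) {
    // Intermediates deliberately use auto: their concrete types depend on T
    // and would have to be re-derived for every instantiation if spelled out.
    auto t2 = temperature * temperature;
    auto cp = T(1000) + T(1e-4) * t2;   // toy curve fit, made-up coefficients
    return cp;
}

// double cp = heat_capacity(300.0);             // scalar case
// auto cp_ad = heat_capacity(some_dual_number); // derivative-carrying case
```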

When designing even simpler library methods, I'm sadly also kind of a fan of users writing auto out of laziness. If I ever accidentally expose too much of my internal data structures, use too small of a data type, etc., and have to change the API later, often I can change it in such a way that lazy auto users are still fully compatible with the upgraded version, but users who explicitly wrote foo::iterator can't compile after my switch to bar, and users who explicitly wrote int are now slicing my beautiful new size_t and are going to be unhappy years later when they run a problem big enough to overflow 2^31.
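A toy illustration of that failure mode (names hypothetical): suppose a library method used to return int and an upgrade changes it to size_t.

```cpp
#include <cstddef>
#include <vector>

struct Container {
    std::vector<int> data;
    std::size_t size() const { return data.size(); }  // v1 returned int
};

void callers(const Container & c) {
    auto n = c.size();   // lazy caller: the type tracks the API, still exact
    int  m = c.size();   // explicit caller: now silently narrows size_t to
                         //   int, overflowing once sizes pass 2^31
    (void) n; (void) m;  // silence unused-variable warnings
}
```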