
Small-Scale Question Sunday for July 2, 2023

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


No sci-fi really captures the possibilities inherent in generative AI, because they're so significant. Even far-out stuff like the Culture or Revelation Space doesn't really.

I'm really not asking for much, just that an author writing in X AD account for something that clearly existed in X-1 AD.

Or at the very least, it would be trivial for the author to make up some excuse for their absence. Say, the AI was normally a standard AGI, but it was intentionally sabotaged during the flight and left in a crippled state. Or the section carrying all the heavy robots in storage was hit by debris that made it past the shielding.

At least some sign that the author is aware of the issue and is trying to preserve my suspension of disbelief.

As per Yudkowsky's take on Vinge's law, it's pretty much impossible for a human to write a compelling superhuman intelligence in the first place, so I am willing to look the other way most of the time.

Still, I was so ticked off at this point that I went to the author's Discord, and boy did I end up on the wrong side of the tracks.

They/Thems for miles (even the author, which I sort of suspected from the emphasis on neo-pronouns and weird additional genders, but I actually don't mind that because it's set hundreds of years in the future and it would be weird if there weren't any running around).

I was confused to see half a dozen bot accounts replying to me, before someone informed me that this was people using "alters", some kind of DID bullshit, I presume, since the bot's description explained it was a way for people to talk to themselves as a different persona (???).

I more or less copy-pasted my complaints, and was immediately inundated by more They/Thems spouting the absolute worst takes on AI, to the point my eyes bled. At least they were mostly polite about it, but I'm sure they're already well acquainted with accommodating people with weird personal quirks, if you count my intolerance for gaping plot holes as one.

Then the author themselves showed up and informed me that they were aware of modern AI, yet apparently disagreed about its capabilities and future.

This pretty much killed me outright, so I politely agreed to disagree and left. I'm unsure what mechanism they're using to extrapolate a future in which AI, after hundreds of years, is worse than it is today, and I'd rather not even ask.

I guess if all science fiction written after 2024 includes something suspiciously like a Butlerian Jihad, then be careful what you wish(ed) for?

Well, in my setting, I chose the route of having the initial singularity be aborted by humanity panicking the fuck out regardless of whether the AI was actually aligned or not.

And the justification for limiting the spread of generalized ASI was to prevent that from happening again, with the operational solution being either locking AGIs to the last human level empirically proven safe, or allowing only narrowly superhuman AGI.

It's a world where Yudkowskian fighter jets dropping bombs on data centers isn't a joke, but they usually go for nukes and antimatter explosives.

I'll leave it to the reader to decide whether that's a bad thing, but at the very least I don't commit the sin of writing about technology worse than today's without an explanation of any kind. A Butlerian Jihad it isn't, though.