
SubstantialFrivolity

I'm not even supposed to be here today

5 followers   follows 0 users  
joined 2022 September 04 22:41:30 UTC
Verified Email


User ID: 225

Haha, I should've expected that. It's true though!

I finished the book tonight (faster than I expected). Overall I think I came away from it less positive than I was a couple of days ago, but still generally positive. To me, the strongest part of the narrative (though the least interesting as speculative fiction) was parts 1 and 2, where Mike was a fugitive from a government trying to use him as a pawn. Once that got resolved and Mike turned into space Jesus, I found the plot less interesting (though the ideas Heinlein was exploring became more interesting).

I can certainly see how the book was a big influence on the hippie movement. The ideas Mike teaches are so in line with the hippie ethos that if I didn't know better, I would guess that the book is a parody of them. I read that Heinlein was unhappy that they latched on to his book as they did, though it's not clear to me why. Presumably he thought they didn't get it in some way, but I'm not sure what he might've felt they were missing. Regardless, the optimism of the book - that we would be much happier and better off as a species if we learned to love and share instead of hoarding things to ourselves - is somewhat charming to read, though I wouldn't say that I believe that humans are capable of such a feat.

From a modern standpoint, it is rather shocking to me that this book isn't criticized more than it is. None of it offended me personally, but there's so much in here that is starkly offensive to modern feminist thought that I would have expected people to decry anyone who reads this book as sexist. In particular, Jill's line about how nine times out of ten, if a woman is raped it's partly her fault, is the sort of thing for which I would expect Heinlein to have been thoroughly un-personed retroactively (as would indeed happen to anyone today who dared to write such a thing). Forget Starship Troopers: this is the book I think is most subversive to modern-day politics, but nobody seems to really talk about it as such.

Xenocide is probably my favorite book in the series, based solely on the strength of the Han Qing-Jao story. I think it's the best thing Orson Scott Card has ever written, and while the other half of the book isn't as good (it's still good), that still averages out super high.

I read (some) books more than once because I love them and enjoy them just as much the second time. Sometimes more, because I notice new things about the text that I hadn't previously. It's not pointless to me, because I read for the enjoyment of the book, not just for novelty. Novelty is nice, but not a requirement. It doesn't even necessarily enhance the experience: there are plenty of books I enjoyed less on a first read than I would have enjoyed rereading something else.

I would also say your argument about opportunity cost can easily cut the other direction: if I read a new book, and I dislike it (which certainly happens), I have paid an opportunity cost versus just rereading a book I already liked. So either way, it seems to me that there is an opportunity cost to be paid.

There are people who say "would" to Slaanesh daemonettes; fucking Eldar isn't even something they would blink at.

Well, one schmuck and a handful of carefully selected, credentialed experts. They had reasons for selecting the schmuck, but they didn't really expect him to deliver.

I had a very similar experience when I read Neuromancer a year or two ago. I always knew it was influential, but I didn't get just how influential until I read it. Honestly, calling it "influential" is an understatement: basically every cyberpunk setting is copied wholesale from Neuromancer. It was pretty wild to see how strong the influence is.

Way of Kings is one of the slowest books Sanderson has written, I'd say. I almost gave up on it because I was waiting for the plot to actually start happening, so I sympathize. If you read his other books (say, Mistborn) they are much better paced. Way of Kings does pick up towards the end, but it takes forever to get there.

Still working on Stranger In A Strange Land and His Broken Body. Making faster progress on Stranger, partly because it's much lighter reading and partly because His Broken Body is part of my bedtime Kindle reading, which means a lot of times I fall asleep before reading very much. I'm making good progress on Stranger, and will most likely finish it sometime this week. I've really enjoyed the book so far, though with some amusement as Heinlein has been turning his central character into a sex god. It is one of those things where you have to laugh and go "man, the 60s really were a different time".

I want the punk. If I'm going to have to put up with corpo dystopia, then I at least want trench coats and neon mohawks, dammit!

Oof, I wasn't aware of that. It's just such a failure of imagination, to me. In a world where magic exists and can change you in all kinds of ways, nobody would be trans as an identity! They would just be a woman (or man) by virtue of magic, and nobody who didn't know that person before would ever know. If anything, these writers are missing out on some interesting material: in a world where you can change sex as easily as putting on a magical girdle, what do gender roles and the relationship between the sexes look like? Surely nothing like our world, and that could be really interesting to explore! But no, instead people have to waste interesting material by forcing it into a morality lesson about our world instead of letting the fictional world be its own interesting thing. It's so aggravating. :(

I love it when writers attempt to paint a world entirely in shades of grey while never telling the reader what to think...

Time was, we just called that "good writing". But it's depressingly uncommon these days. :(

And then the characters make some offhand comment about a magic spell that lets you switch gender which certain people who were "born in the wrong body" use to cure their condition.

At least that's better than the BG1 expansion from a few years back, which (in a world where perfectly effective magic to change your sex exists) had a transgender character. It was so fucking stupid that I did not and never will buy that expansion, no matter how good people say it is otherwise.

Refusing to take on the mantle and respect of authority, if that is your calling, does not serve anyone. It just confuses and scares them.

This reminds me of a plot point in the Wheel of Time series. At one point Perrin is back in his hometown, which is being attacked by monsters, and the people there look to him for leadership (which he consistently refuses, because he doesn't feel he has the right to command them). It is only when one of the other characters gives him a speech to the effect of what you said that he realizes he needs to step up and be a leader to those people in a trying time.

I would say that the heft is part of the swing. You lift and make a motion towards your target, but I would not say that the swing starts only once the weapon is at the correct height.

In general, I think this is in fact quite often the shape of the problem - AI critics don't necessarily underestimate AI, but instead vastly overestimate humanity and themselves. Most of the cliché criticisms of AI, including in particular the "parrot" one, apply to humans!

This certainly seems like a salient point (though of course, from my perspective the problem is that you are underestimating humans when you say this). I could not disagree more with your assessment of humans and our ability to reason. And if we can't agree on the baseline abilities of our species, certainly it seems difficult for us to come to an agreement on the capabilities of LLMs.

Your argument only really makes sense insofar as one agrees that there is substance behind the hype. But not everyone does, and in particular I don't. So to me, the answer to your last question is "but the world hasn't changed". You seem to disagree, and I'm not going to try to change your mind - but hopefully you can at least see how that disagreement undermines the foundation of your argument.

I am not interested in debating the object level truth of this topic. I have engaged in such debates previously, and I found the arguments others put forward unpersuasive (as, I assume, they found mine). I'm not trying to convince @self_made_human that he's wrong about LLMs, that would be a waste of both our time. I was trying to point out to him that however much he thinks he is critical of LLMs (and to his credit he did provide receipts to back it up), that is not how his posts come off to observers (or at least, not to me).

It would be one thing if I was arguing solely from credentials, but as I note, I lack any, and my arguments are largely on perceived merit.

Note that I'm not saying you are arguing from your own credentials, but rather that you are arguing based on the credentials of others, with the statement "In the general AI-risk is a serious concern category, there's everyone from Nobel Prize winners to billionaires". Nobel Prize winners do have credibility (albeit not necessarily outside their domain of expertise), but that isn't a decisive argument, because of the fallacy angle.

Even so, I think that calling it a logical fallacy is incorrect...

This is, to be blunt, quite wrong. Appeal to authority is a logical fallacy, one of the classics that humans have noted since antiquity. Authorities can be wrong, just like anyone else. This doesn't mean your claims are false, of course, just that the argument you made in your previous post for your claims is weak as a result.

What of it? I do, as a matter of fact know more about LLMs than the average person I'm arguing with.

I simply think it's funny. If it doesn't strike you as humorous that your statement would be agreed upon by all (just with different claims as to who has the bad takes), then we just don't share a similar sense of humor. No big deal.

I mean, you're not alone, but neither are the people who argue against you. That is hardly a compelling argument either way. Pointing to the credentials of those who agree with you is a better argument (though... "being a billionaire" is not a valid credential here), but still not decisive. Appeal to authority is a fallacy for a reason, after all. Moreover, though I'm not well versed in the state of the debate raging across the CS field and don't keep tabs on who holds which position, I have no doubt whatsoever that there are equally credentialed people who take the opposite side from you. It is, after all, an ongoing debate and not a settled matter.

Also, frankly I agree with @SkoomaDentist that you are uncritical of LLMs. I've never seen you argue anything except full on hype about their capabilities. Perhaps I've missed something (I'm only human after all, and I don't see every post), but your arguments are very consistent in claiming that (contra your interlocutors) they can reason, they can perform a variety of tasks well, that hallucinations are not really a problem, etc. Perhaps this is not what you meant, and I'm not trying to misrepresent you so I apologize if so. But it's how your posts on AI come off, at least to me.

Somewhat off-topic: the great irony to me of your recent "this place is full of terrible takes about LLMs" arguments (in this thread and others) is that I think almost everyone would agree with it. They just wouldn't agree who, exactly, has the terrible takes. I think that it thus qualifies as a scissor statement, but I'm not sure.

It gets easier as you go. I remember when I was learning how to drive, the first time I merged from an on-ramp was super stressful. And that wasn't even a highway, that was a city road where traffic was going 35-40 mph! But now after years of practice, it's second nature to me. Keep it up, brother!

Is that really a thing?

Maybe so, although state elections also aren't relevant to people outside the state. I couldn't care less who is the governor of NY, nor the mayor of NYC, because it doesn't affect my life one iota.

That's what I thought too. I guess I'm not degen enough.

I have a few use cases.

  • Shitposting. By far the most value I get out of LLMs, to be honest - asking ChatGPT to generate a story where a friend had a steamy romance with Optimus Prime (and then sending it to said friend) had me giggling for like an hour after.
  • Spanish practice. I hold LLMs at arm's length because of the way they work (being based around predicting the next token rather than actually understanding the problem domain), but that approach works just fine for language, because it's how we learn language ourselves. So I have a lot more willingness to accept the methodology in this problem domain. Plus I don't have any other chances to practice Spanish (since it isn't socially acceptable to just go up to people who look Latino and speak Spanish to them), so even if it's flawed, it's the best I have.
  • Generating bash scripts at work. A bash script should be very short (10-20 lines), which means LLMs tend to perform better, and it's easy for me to check at a glance (or at worst, verify the syntax in the shell). That said, as soon as you get outside core bash syntax, there be dragons - LLMs do not (in my experience) do well with things like generating curl requests for vendor APIs. The basic syntax is almost always correct though, which is useful to me because I loathe writing bash.
  • Similarly to the above, generating example code for APIs that I know well enough to recognize at a glance whether the code is correct, but not well enough to write myself without poking through the docs. For example, the Python threading API: I can ask an LLM to generate a script doing X with threads and know instantly whether it's correct, but it would probably take me 30 minutes of poking at the threading docs to write it myself. (A made-up sketch of what I mean follows right after this list.)
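
Since I mentioned it, here's a made-up sketch of the kind of threading boilerplate I mean. Nothing in it comes from a real prompt or a real task - the task function, the names, and the inputs are all hypothetical stand-ins - it's just the shape of script where I can eyeball the start/join plumbing at a glance:

    # Hypothetical example: run a placeholder task over several inputs
    # using the standard threading module. The task body is a stand-in
    # for real work; the start/join plumbing is the part I can verify
    # at a glance.
    import threading

    def task(item: str, results: dict) -> None:
        # Stand-in for real work (I/O, an API call, etc.). Each thread
        # writes to its own key, so no lock is needed here.
        results[item] = item.upper()

    def run_all(items: list[str]) -> dict:
        results: dict = {}
        threads = [threading.Thread(target=task, args=(i, results)) for i in items]
        for t in threads:
            t.start()  # launch every worker
        for t in threads:
            t.join()   # wait for all of them to finish
        return results

    if __name__ == "__main__":
        print(run_all(["foo", "bar", "baz"]))

If a model hands me that shape with a join missing, or with the start and join fused into one loop (which quietly serializes everything), it jumps out immediately. That's exactly the kind of check I can do instantly but would need the docs to write cold.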

All in all, not a ton of actual value for me, but it is non-zero value. Unfortunately, LLMs still fall over pretty hard when I try to hand them things that are more challenging for me. For example, I recently asked ChatGPT to do some weird conditional thing in Terraform (which turned out to be impossible as far as I can tell), and instead of saying "that's not possible" (which would have been useful and saved me a lot of time going down a bad path), it kept hallucinating code that looked very sensible and would have been nice if it worked, but wasn't actually valid syntax. This is unfortunate, because that's where the real value would be - I don't need or want an LLM to write code I can very easily write myself (faster than it would take me to check the LLM's output), but I would like it to assist with things that are on the edges of my subject matter knowledge. Alas, that doesn't work well right now, but I do get some minor value from the cases I mentioned.