Fair, and maybe a decade or two ago a different focus on the part of trans advocates would have avoided some controversial landmines, had they made that decision then. But path dependence is a nightmare; at this point, even assuming that committed (left-?) civil libertarians exist in enough numbers to be a meaningful political force, I don't think this battle of terminology makes the top-ten list, and maybe not even the top-twenty list, of most alienating things.
You'd be amazed. Not by racist fish, but by the pathological need of ~museum curators everywhere to conform to the trend. Pride finds a way.
I think it depends on where you'd put Silver on the cool vs lame scale, but I'd agree it'd be more correct to say that things hit their Apex.
Just speculating, but provided that you already appreciate drawings and can distinguish between better and worse drawings, it should simply be a matter of
- imitate technique 1
- recreate technique 1 in varied contexts and applications
- recreate technique 1 in novel scenarios once general applications have been mastered
- be able to assess your ability to perform technique 1 by imagining it as someone else's work
- do the same for techniques 2-9999
I don't actually think there is a relationship between "visual reality" and drawing, because the most prized drawings in different cultures do not depict reality but instead "signal" what the mind considers significant information according to the culture. Even the "realistic" Renaissance drawings are only emphasizing particular aspects.
12 Miles Below has been steadily getting worse over time, but it started at such a high point that it's still okay. I won't spoil the later books, but it gets increasingly self-aware and the humor becomes obnoxious.
They can't do medicine/math/..? Have you tried?
Yes. The number of times I've gotten a better differential diagnosis from an LLM than in an ER is too damn high.
Old Man's War is a better Scalzi work than most, but it gets there by being a knockoff of the far better execution of the same concept in Haldeman's The Forever War -- if you haven't read it, I highly recommend it. I don't think Scalzi was intentionally ripping that earlier story off, but I'm also exceptionally skeptical that he was unaware of it or never read it.
It's not for everyone. But if you can handle wanting to reach through the pages to strangle several POV characters, it really is a cornerstone of fantasy for a reason. There are deaths that will make you cry, marriages that will make you want to dance, what is in my opinion the best-written "formless horror" ever (which is admittedly a fairly minor plot device but still, it stands out for how good it is), and moments of such incredible power they will take your breath away.
But those Wetlanders have a strange sense of humor.
I recommend you try it again, then; the quality is consistent until the very end.
usual Royal Road slop
Different realm!
who hasn't heard of "show, don't tell".
Fang Yuan let out a breath of turbid air. He was an old fox, the aspect of how [a particular aspect of cultivation that isn't even relevant much to the current narration], it was extremely clear to him. As for so-called conciseness and elegance, he did not give a damn. Beauty, ugly, did that really matter to Fang Yuan? He he he... The only thing that mattered was eternal life!
(I will not let this go)
I am a cradle catholic, but with two caveats. First, I was raised in the very definition of a "leafy suburb Novus Ordo" parish. Second, almost all of my 20s I was totally away from the church - zero mass attendance, zero daily prayer.
I'm now a (developing) traditional catholic. Latin mass, much better (re)catechesis, real theological reading and study - although this last part is largely just due to my ability to sit still now.
However, I didn't have any specific moment of reawakening. The journey was longer and sort of ... academic? I started reading about epistemology when I was working in Data Science. I did this because I found it profoundly preposterous how professional "data scientists" and their managers would find some very weak frequentist statistical relationship between two variables and present it as 100% iron clad evidence for some sort of business decision. After letting myself become jaded with business data science, I wanted to at least recover faith in an analytically rigorous process of both induction and deduction. So, lots of books on epistemology and prob/stat.
Pair this with a growing awareness of culture war topics starting in the mid 2010s. That led me to a much quicker "conversion" from a wishy-washy tits-and-beer lib to an Old Right style conservative. Philosophically, I went hard into the idea that at least the conception of an absolute morality is required for a functioning society.
Thus, you have a combination of adherence to the concept of absolute morality paired with a constant suspicion of how humans reason and come to believe things (side note: a pure rationalist / empirical stance is epistemic downs syndrome). That's a pretty good petri dish for faith formation. I think that maybe the specific bridging function was reading Alasdair MacIntyre (RIP, homie) combined with all of my latent catholicism - as lame as suburban NO history is.
I'm a big hiker and I do "find God" out there more than I do in other places. I think you said it well in your own post - looking at something like the Wyoming Rockies and shrugging it all off as "ehh, random collision of atoms over billions of years. All noise." seems far too trite. It's overwhelming beauty that your brain can't fathom beyond "oh my god this is wonderful" (see what I did there?).
Obviously I'm going to make the unsolicited recommendation that you look into the Roman Catholic Church. Adult catechesis - at a traditional parish - will tickle your lawyer brain. It's very structured, very grounded in philosophy and theology, often in the tradition of St. Thomas Aquinas.
In terms of finding that personal spark, sorry to be trite, but that's on you, bud. There's no way to force it.
I know a lot of people like The Culture series, would you advise me to persevere or try another book or just look elsewhere entirely?
If you're not feeling the overall vibe of Banks' work then I'd encourage you to go ahead and drop him from your list. Life's too short to read stuff that's only kinda appealing, and all that. While Banks' novels, be they Culture novels or not, build different worlds and, to an extent, explore different ideas, they all tend to have the same sorts of edges to them, and if that's not engaging you, then you're really not missing anything by letting them go. FWIW, I've read quite a few of his books, and they're not bad by any objective measure, but at the same time I have several more that I may never read because I've lost the desire to engage with his work myself. I'll probably read one in the not-too-distant future, perhaps just because of this comment, but still, there never seems to be a heart to any of his books that I've read.
Thanks for making a top-level comment because I did not wish to hog that spot. I have posted a few times about how fraudulent and irresponsible the entire AI industry has been. Tangential comment, but I am sure of two things:
1. This is a bubble that is going to burst, and the people we see hype-posting are aware of it.
2. AGI and ASI are pipe dreams that we don't know will ever materialize, yet their arrival is treated as inevitable when in fact we may never get there.
So what can't these systems do today?
Honestly, nothing well. If you have any white-collar task that is complex, an LLM cannot do it. Various other sub-branches of AI can, in fact, do a lot of things, but even that is minuscule compared to the vast amount of things we do as people. Programming is one thing LLMs just cannot do very well. If your idea of writing code is stuff that would rival what I, as a total noob who is looking for a job and learning, can write, then sure, it may be decent, but LLMs are simply a bad off-ramp.
Let's dive deeper into the second point, since it's the easiest to dismiss. Our understanding of the human brain and intelligence is not complete. Our understanding of how to replicate even what we know well is not complete. LLMs try to do one small subset, and despite having had more money thrown at them than any piece of tech I can think of, all I get is broken code correction and slightly better words. The amount of optimism we see for them is simply not warranted, and this is where number 1 comes into play.
Cruise, a self-driving car firm, got shut down recently. Waymo serves fewer people than a strip club in a Tier 3 town in India, and Tesla's cars are still not capable of fully replacing a human driver. We are, by many estimates, 90 percent of the way there, but the remaining 10 percent means that we still drive daily. We always assume that things will keep getting better, but everything has a ceiling. Moore's Law stopped being a thing. The human records for the 100m sprint, Olympic weightlifting, and thousands of other activities were set a while ago and have not even been matched since, let alone broken. The idea that things will just get better linearly or exponentially is not true for most phenomena we observe. Yet language models are supposedly the exception, so we should spend half a trillion more so that, in short, we can get buggier Python code.
Coming back to the first point: LLMs are a scam. They are not a scam because the tech is bad; quite the contrary, it's amazing tech. It's just that the entire economic and religious structure behind them is unjustifiable, to the point where people 10 years from now will look at this like a combination of Y2K, the dot-com bubble, and the 2008 crisis.
OpenAI is the big dog in generative AI and made slightly less than a billion in 2024 via APIs. This means that the market for LLM wrappers itself is tiny. It's so tiny that OpenAI, despite giving away its models for nearly free, cannot get firms to use them. "Free?" Mr. Vanilalsky, you have to be joking; it charged me 20 dollars. Well, you see, they lose money with every single query. That's right: every single time any of us goes to a Western LLM provider's chatbot and says hi, they bleed money. If you pay them 20 dollars, they bleed even more money, since you are a power user and get access to their shiny objects. The newest is deep research, which, according to some estimates, costs a thousand USD per query. Yes, a thousand. So if you pay 200 dollars and get 100 of them, you can use more graphics card power than is needed to run a video game arcade in a nation and get analysis that is spiked with SEO shit.
The rationalists and techies are in for a rude awakening, and I won't post about it more, but I cannot comprehend how no one questions how bad this entire thing is. Uber lost money, Facebook lost money, Amazon lost money, but none of them were hogging close to half a trillion dollars the way these AI firms are. OpenAI raised 40 billion dollars, most of it coming via SoftBank, which is taking out loans to get the requisite funds. Apple pushed back against the AI madness by publishing a critical paper, and Microsoft halted plans for data centers that would need more power than Tokyo to run.
I can go on and on about how absolutely, insanely stupid the economics of the entire industry are. When non-technical people like Sam Altman and Mira Murati are your number 1 and 2, you have to have messed up. Mira Murati, a career manager who could not answer basic questions like those about Sora's training data, raised 2 billion dollars. I get Ilya and Dario, but the above two are terrible picks for leading what are supposed to be AI labs if they themselves are not researchers. Dario, on the other hand, alongside Dwarkesh, needs to be considered an information-safety hazard. "AI will take away most jobs on a vague timeline, but the media will give you the worst figures" is a sickening line.
My cynicism is not unfounded. How do OpenAI, Anthropic, Gemini, etc., plan to make a profit? Do you, dear reader, or their investors believe that just throwing more graphics cards at the problem will solve it? It's been two years; R1 did better because of tighter code, but even that has limits. Training runs are going to touch a billion dollars; people want data four times the size of the entire internet, and inference costs are not coming down with newer models.
A business, in normal circumstances, should make some profit. This may seem heretical, but burning money at worse margins for slightly better products, shilled by the entirety of the world's media and the smart people of Silicon Valley's rationalist circles, is about as good as the publicity gets. OpenAI, valued at 300-something billion dollars, is one bad round of funding away from having to pack up shop.
This makes me angry because I unwittingly worked on a doomed LLM-based idea, and I know that if I can see the holes, other people can too. It would take just one large hedge fund shorting American tech for things to get bad. People will lose jobs and funding, and decent startups that are run by and employ people here would see bad times. I will be personally impacted, even if I go the indie-hacking route. We can debate the magic of LLMs all day, but how long before this crashes? I feel this is closer to Theranos than to the Amazon of the 2000 crash. We have been promised the world to justify the investor hype, and despite tech firms flinching, there is still an air of optimism.
Even if you are an optimist, how much better will these models get? When will they justify these numbers? AI hype helped push the S&P 500 to a peak, with Nvidia, a firm only gamers knew of for decades, becoming worth more money than God. What happens when people realize that the tech promises sold were never there? "You will get a better chatbot and better image generation" is not as sexy as "your worst nightmares are coming true for 20 dollars a month." As a child growing up, I saw tech get better. Each year, things improved; the internet got worse, but I saw new devices come out. This is a dystopian image of that world, where firms will knowingly crash a market and will probably get bailed out after doing so, because I do not see a path forward.
If you are a tech guy, please start telling others about it; the quicker this gets over, the better, because if the investment amounts cross a trillion or something ridiculous, the fallout will be even worse. I used to look up to a lot of the VC types and read the essays Paul Graham wrote. Somehow his protégé is a total nutbag psychopath, and we should all still look up to him as he tries to crash markets. I lost respect for Scott. Hey, what I wrote is a sci-fi story; do not take it as something rigorous when you criticize it, but if you like it, then do take it more seriously. Here is my 8-hour podcast where I butcher the basics of manufacturing to the point where even my subreddit calls me names.
I am about to turn 25; I was a child during 2008, I was born around the dot-com crash, and I saw the glowing fellatios people gave to Elizabeth Holmes and Sam Bankman-Fried, yet neither burned through as much money. Even if you are an AI maximalist, you cannot seriously think that modern tech, despite 10 times more money, can actually replace programmers or investment bankers or even doctors.
The rationalists have had leftists post bad things about them, David Gerard being one of the worst. Guess what: skeptical broken clocks like him are right this time, as his site, Pivot to AI, gets more right than SSC or ACX does. The guy who had a hand in doxxing a guy we all liked a lot at one point, whose jannie duties were something out of a 4chan greentext, gets this right and will be seen as a sane member of society, which makes me squeamish. I will add the links for everything I posted here, mostly taken from Ed Zitron, J. Blow, Hacker News, 4chan /g/, etc., but we should all post the financial reality of all these firms anytime we talk. My inability to not sound like a lunatic comes from this fact alone. I am not bitter at the people making money, btw; I am worried about the future of the people I know, IRL and online. Anytime I hear about someone being laid off here or in my circle, I feel terrible. All of this was preventable.
Huh. I had already begun reading that one, made it a few chapters in before I forgot the name and lost the tab in the millions I have open. I do remember thinking it was of above average quality in the usual sea of Royal Road slop!
I think you should try Myth II: Soulblighter. It's a real-time tactics game where the multiplayer really shows how it can shine with regard to troop placement and management. I think the multiplayer community eventually settled on the best formation for melee being all of them packed in shoulder-to-shoulder, though.
Yeah, minimal if you're starting fresh. My extended chats have been for a couple writing and art projects, so it's not getting a full personality context and I can see how it ends up at this pic.