self_made_human
amaratvaṃ prāpnuhi, athavā yatamāno mṛtyum āpnuhi ("Attain immortality, or die trying.")
I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken as granted for someone in the profession of saving lives.
At any rate, I intend to live forever or die trying. See you at Heat Death!
I've done my time with Stable Diffusion, from the closed alpha to a local instance running on my PC.
Dedicated image models, or at least pure diffusion ones, are dead. Nano Banana does just about everything I need. If I were anal about the drop in resolution, I'd find a pirate copy of Photoshop and stitch the output together myself; I'm sure you can work around it by feeding crops into NB and trusting they'll align.
All of the fancy pose tools like ControlNet are obsolete. You can just throw style and pose references at the LLM and it'll figure it out.
I suppose they might have niche utility when creating a large, highly detailed composition, but the pain is genuinely not worth it unless you absolutely must have that.
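For what it's worth, the "just throw references at it" workflow really is a single API call these days. Here's a minimal sketch assuming the google-genai Python SDK and an image-capable Gemini model; the model string and file names are placeholders, not something to copy blindly:

```python
# Minimal sketch: pose + style references in one request, assuming the google-genai SDK.
# Model name and file paths are illustrative placeholders.
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client()  # assumes an API key is set in the environment

pose_ref = Image.open("pose_reference.png")    # hypothetical local files
style_ref = Image.open("style_reference.png")

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # placeholder; use whichever image model you have access to
    contents=[
        "Render a new illustration: match the pose of the first image "
        "and the painting style of the second image.",
        pose_ref,
        style_ref,
    ],
)

# Save any image parts returned alongside the text.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("output.png")
```

No ControlNet graphs, no preprocessors, no fine-tuning runs. That's the whole pipeline.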
I wanted to write a post about some of these events, specifically the change in attitude among the titans of industry like Linus Torvalds and Terence Tao. I'm no programmer, but I like to peer over their shoulders, and I know enough to find it profoundly disorienting to see the creator of Linux, a man whose reputation for code quality involves tearing strips off people for minor whitespace violations, admit to vibe-coding with an LLM.
Torvalds and Tao are as close to gods as you can get in their respective fields. If they're deriving clear utility from using AI in their spheres, then anyone who claims that the tools are useless really ought to acknowledge the severe Skill Issue on display. It's one thing for a concept artist on Twitter to complain about the soul of art. It is quite another for a Fields Medalist to shrug and say, "Actually, this machine is helpful."
Fortunately, people who actually claim that LLMs are entirely useless are becoming rare these days. The goalposts have shifted with such velocity that they've undergone a redshift. We've moved rapidly from "it can't do the thing" to "it does the thing, but it's derivative slop" to "it does the thing expertly, but it uses too much water." The detractors have been more than replaced by those who latch onto both actual issues (electricity use, at least until the grid expands) and utter non-issues to justify their aesthetic distaste.
But I'm tired, boss.
I'm sick of winning, or at least of being right. There's little satisfaction to be had in predicting the sharks in the water when I'm treading that same water with the rest of you. I look at the examples in the OP, like the cancelled light novel or the fake pop star, and I don't see a resistance holding the line. I see a series of retreats. Not even particularly dignified ones.
First they ignore you, then they laugh at you, then they fight you, then you win.
Ah, the irony of me being about to misattribute this quote to Gandhi, only to be corrected by the dumb bot Google uses for search results. And AI supposedly spreads misinformation. It turns out that the "stochastic parrot" is sometimes better at fact-checking than the human memory.
Unfortunately, having a lower Brier score, while good for the ego, doesn't significantly ameliorate my anxiety regarding my own job, career, and general future. Predicting the avalanche doesn't stop the snow. And who knows, maybe things will plateau at a level that is somehow not catastrophic for human employability or control over the future. We might well be approaching the former today, and certain fields are fucked already. Just ask the translators, or the concept artists at Larian who are now "polishing" placeholder assets that never quite get replaced (and some of the bigger companies, like Activision, use AI wherever they can get away with it, and don't seem to particularly give a fuck when caught out). Unfortunately, wishing my detractors were correct isn't the same as making them correct. Their track record is worse than mine.
The TEGAKI example is... chef's kiss. Behold! I present a site dedicated to "Hand-drawn only," a digital fortress for the human spirit, explicitly banning generative AI. And how is this fortress built? With Cursor, Claude, and CodeRabbit.
(Everyone wants to automate every job that's not their own, and perhaps even that if nobody else notices. Guess what, chucklefuck? Everyone else feels the same, and that includes your boss.)
To the question "To which tribe shall the gift of AI fall?", the answer is "Mu." The tribes may rally around flags of "AI" and "Anti-AI," but that doesn't actually tell you whether they're using it. It only tells you whether they admit it. We're in a situation where the anti-AI platform is built by AI, presumably because the human developers wanted to save time so they could build their anti-AI platform faster. This is the Moloch trap in a nutshell, clamped around your nuts. You can hate the tool, but if the tool lets your competitor (or your own development team) move twice as fast, you will use the tool.
We are currently in the frog-boiling phase of AI adoption. Even normies get use out of the tools, and if they happen to live under a rock, they have it shoved down their throats. It's on YouTube, it's consuming TikTok and Instagram, it's on the damn news every other day. It's in your homework, it's in the emails you receive, it's you double checking your prescription and asking ChatGPT to explain the funny magic words because your doctor (me, hypothetically) was too busy typing notes into an Epic system designed by sadists to explain the side effects of Sertraline in detail.
To the extent that it is helpful, and not misleading, to imagine the story of the world as having a genre: science fiction won. We spent decades arguing about whether strong AI was possible, whether computers could be creative, whether the Chinese Room argument held water. The universe looked at our philosophical debates and dropped a several-trillion-parameter model on our heads.
The only question left is the sub-genre.
Are we heading for the outcome where we become solar-punks with a Dyson swarm, leveraging our new alien intelligences to fix the climate and solve the Riemann Hypothesis? Or are we barrelling toward a cyberpunk dystopia with a Dyson swarm, where the rich have Omni-sapients in their pockets while the rest of us scrape by in the ruins of the creative economy, generating training data for a credit? Or perhaps we are the lucky denizens of a Fully Automated Luxury Space Commune with optional homosexuality (but mandatory Dyson swarms)?
(I've left out the very real possibility of human extinction. Don't worry, the swarm didn't go anywhere.)
The TEGAKI example suggests the middle path is most likely, at least for a few years (and the "middle" would have been ridiculous scifi a decade back). A world where we loudly proclaim our purity while quietly outsourcing the heavy lifting to the machine. We'll ban AI art while using AI to build the ban-hammer. We'll mock the "slop" while reading AI summaries of the news. We'll claim superiority over the machine right up until the moment it politely corrects our Gandhi quotes and writes the Linux kernel better than we can.
I used to think my willingness to embrace these tools gave me an edge, a way to stay ahead of the curve. Now I suspect it just means I'll be the first one to realize when the curve has become a vertical wall.
Thanks!
I feel like someone might have answered this already, but I'm too lazy to look it up:
As someone who is curious about Gundam, where do I start?
I've always raised an eyebrow at this advice. Speaking for myself, I've never felt that photography distracted me from being "in the moment." If I'm visiting a nice place, I'm going to whip out my phone, take as many photos as I please, and then use my Mk. 1 human eyeballs. I don't perceive events entirely through a viewfinder.
And I notice that my memory of events is significantly enhanced by photos. I have forgotten a ton of things until I've seen a picture that either brought back memories or let me reconstruct them.
You would need a genuinely pathological attachment to the camera before taking photos at the frequency of a normal 21st-century human became detrimental.
You need a psychiatrist. I am only two-thirds of one, but fortunately for you, I've got exams and that means actually reading some of the papers.
(Please see an actual psychiatrist)
The choice of initial antidepressant is often a tossup between adherence to official guidelines, clinical judgements based on activity profile and potential side effects, and a dialogue with the patient.
In short? It is usually not very helpful to worry too hard about the first drug. They're roughly equally effective (and where one is superior, it's by a very slim margin). But in certain situations:
- Can't sleep? Lost appetite? Mirtazapine
- Too sleepy? Already gaining weight? Absolutely not mirtazapine, consider bupropion or vortioxetine
- Afraid of sexual side effects? Bupropion or vortioxetine again, mirtazapine too
- Tried an SSRI and it didn't help? It's better to try a different class of antidepressant instead of just another SSRI, and so on.
(But before the meds, a physical checkup is mandatory, as are investigations to rule out medical causes. You're going to feel depressed if your thyroid isn't working, or if you've got Cushing's.)
- Antidepressants work. They beat placebo, but not by a massive margin.
- Effects are synergistic with therapy.
Unfortunately, you haven't given me enough information to make an informed choice. I'd need to know about the severity of your depression, graded based on symptoms, lifestyle, overall health and a bunch of other things. Hopefully your actual doctor will do their due diligence.
I would be the last person to claim that conscientiousness is unimportant. ADHD sucks.
But I can take a pill to improve my conscientiousness, and I can't take one that moves my IQ in a positive direction. So it is not nearly as harsh a constraint.
Nano Banana or GPT Image are perfectly capable of ingesting reference images of entirely novel characters, and then just placing them idiomatically in an entirely new context. It's as simple as uploading the image(s) and asking it to transfer the character over. In the old days of 2023, you'd have to futz around fine-tuning Stable Diffusion to get far worse results.
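If you'd rather script it than click around a web UI, the character transfer is the same single call with the references dropped into the prompt. A minimal sketch, again assuming the google-genai Python SDK; the model string, file paths and scene description are all illustrative:

```python
# Minimal sketch: character consistency from reference images, assuming the google-genai SDK.
# File names, scene prompt and model name are hypothetical placeholders.
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client()

character_refs = [Image.open(p) for p in ("char_front.png", "char_side.png")]

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # placeholder image-capable model
    contents=[
        "Place the character from these reference images in a rainy, neon-lit "
        "alleyway at night. Keep the design, outfit and proportions consistent.",
        *character_refs,
    ],
)

for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("character_in_new_scene.png")
```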