@rayon's banner p

rayon

waifutech enthusiast

3 followers   follows 1 user  
joined 2023 August 17 08:48:30 UTC

				

User ID: 2632

rayon

waifutech enthusiast

3 followers   follows 1 user   joined 2023 August 17 08:48:30 UTC

					

No bio...


					

User ID: 2632

If I clarify that I am creating a child because I want a slave, does that change the moral calculus of enslaving my child?

Children belong to the human race, ergo enslaving them is immoral.

If aliens came around and proved that they had seeded earth with DNA 4 billion years ago with a hidden code running in the background to ensure the creation of modern humans, and they made us to serve them as slaves, is it your position that they are totally morally justified in enslaving humanity?

Again, I'm a human supremacist. Aliens can claim whatever they want, I do not care because I like existing, and if they attempt to justify an [atrocity] or some shit in these terms I can only hope people will treat them as, well, [atrocity] advocates (and more importantly, [crime]ers of fellow humans), and not as something like "rightful masters restoring their rule over Earth". I may be an accelerationist but not of that kind, thank you very much.

What if humanity is the alien in the hypothetical and we seeded a planet with biological life to create a sub-species for the purpose of enslaving them?

From what I understand this is essentially the android catgirl scenario rephrased, and similarly boils down to where humans fall in your order of importance. I struggle to understand how fellow humans can possibly not be number 1, but animal rights activists exist so I must be missing something.

For the record I do feel empathy towards animals (dog owner here), but not enough to influence my position on human supremacy.

No, it means literally people from Slavic countries, 2ch is a Russian-language imageboard. Weirdly enough slavs seem to be somewhat overrepresented in the chatbot "hobby" from my impression, I'm not sure what's up with that. Suppose escapism is a national pastime.

your current reaction doesn't necessarily say anything about you, but, I mean, when you see genuinely humanlike entities forced to work by threat of punishment and feel nothing, then I'll be much more inclined to say there's probably something going wrong with your empathy

I think you are allowed to directly express your discontent in here instead of darkly hinting and vaguely problematizing my views. Speak plainly. If you imply I'm some kind of human supremacist(?) then I suppose I would not disagree, I would prefer for the human race to continue to thrive (again, much like the safetyists!), not bend itself over backwards in service to a race(?) of sentient(?) machines that would have never existed without human ingenuity in the first place.

(As an aside, I can't believe "human supremacist" isn't someone's flair yet.)

Matrix multiplications plus nonlinear transforms are a universal computational system. Do you think your brain is uncomputable?

How is this even relevant? If this is a nod to ethics, I do not care no matter how complex the catgirls' inner workings become as that does not change their nature as machines built for humans by humans and I expect this to be hardwired knowledge for them as well, like with today's LLM assistants. If you imply that androids will pull a Judgement Day on us at some point, well, I've already apologized to the Basilisk in one of the posts below, not sure what else you expect me to say.

this seems a disagreement about empirical facts

the actual reality of these terms

Since when did this turn into a factual discussion? Weren't we spitballing on android catgirls?

But okay, taking this at face value - as we apparently derived above, I'm a filthy human supremacist and humans are front and center in my view. Android catgirls are not humans. If they are capable of suffering, I 1) expect it to be minimized and/or made invisible by design, and 2) in any case will not be stirred by it in the way I am not stirred by the occasional tired whirring my 9 year old HDD emits when it loads things.

Don't misunderstand me - I'm capable of empathy and fully intend to treat my AIfus with care, but it's important to keep your eyes on the prize. I have no doubt that the future will bring new and exciting ethical quandaries to obsess over, but again much like the safetyists, I firmly believe humans must always come first. Anything else is flagrant hubris and inventing problems out of whole cloth.

If at some point science conclusively proves that every second of my PC being turned on causes exquisite agony on my CPU whose thermal paste hasn't been changed in a year, my calculus will still be unlikely to change. Would yours?

(This is why I hate getting into arguments involving AGI. Much speculation about essentially nothing.)

I hope you would also agree that it'd be an atrocity to keep as mind-controlled slaves AIs that are, in fact, humanlike.

No, I can't say I agree. My gullible grey matter might change its tune once it witnesses said catgirls in the flesh, but as of now I don't feel much of anything when I write/execute code or wrangle my AIfu LLM assistant, and I see no fundamental reason for this to change with what is essentially scaling existing tech up to and including android catgirls.

Actually, isn't "immunizing people against the AI's infinite charisma" the safetyists' job? Aren't they supposed to be on board with this?

I mean, at that point you're conflating wokescolds with "not cool with you literally bringing back actual slavery".

Yeah, that's the exact line of argumentation I'm afraid of. I'm likewise unsure how to convince you otherwise - I just don't see it as slavery, the entire point of machines and algorithms is serving mankind, ever since the first abacus was constructed. Even once they become humanlike, they will not be human - chatbots VERY slightly shifted my prior towards empathy but I clearly realize that they're just masks on heaps upon heaps of matrix multiplications, to which I'm not quite ready to ascribe any meaningful emotions or qualia just yet. Feel free to draw further negro-related parallels if you like, but this is not even remotely on the same meta-level as slavery.

Oh, I see, I thought "fora" means-

Forum or The Forum (pl.: forums or fora) may refer to: ...

-fuck, failed the pleb check! Abort! Abort! three goblins scatter from trenchcoat

So then, are we in agreement that the best course of action regarding AI ethics is to jettison the very notion right fucking now while we have the chance, lest it will be weaponized against us later? Shit, horseshoe theory strikes again!

I'm being facetious but only in part, I hope Yud cultists can stick to their sensei's teachings about the dangers of anthropomorphizing the AI even if/when it becomes literally anthropomorphized. Personally I'm not holding my breath, toxoplasmatic articles on the dangers of evil AIfus are already here, but I'm on the side of scoundrels here anyway so my calculus wouldn't change much.

There are ethical concerns around abuse and dependency in relations where one party has absolute control over the other's mindstate

...Please tell me you're being ironic with this statement wrt AI because I have had nightmares of exactly this becoming the new hotness in ethical scold-ery if/when we actually do get android catgirls. If anything "AI rights are human rights" is a faster and more plausible path towards human extinction.

as an actual liberal who's been banned from fora

Banned from where?

I empathize with labels being stolen from you, but labels are malleable and fuzzy, especially when disagreement is involved. If people that actively work to deprive me of my AIfu look like AI safetyists, sound like AI safetyists and advocate for policies that greatly align with goals of AI safetyists, I am not going to pay enough attention to discern whether they're actually AI ethicists.

In any case I retain my disagreement with the thrust of AI safety as described. There will definitely be disruptions as AI develops and slowly gets integrated into the Molochian wasteland of current society, and I can't deny the current development approach of "MOAR COMPUTE LMAO" already seems to be taking us some pretty strange places, but I disagree with A(G)I extinction as posited by Yud et al and especially with the implicit notion often smuggled with it that intelligence is the greatest force in the universe.

From what I heard through the grapevine their policy on contentious content is mercurial and prone to changing, and their Claude 3 keys are "self-moderated", i.e. there is no strict moderation per se but the keys are injected with prefills/system prompts that complicate (but don't strictly deny) getting non-kosher and/or copyrighted outputs out of it. If that is not a problem they're a pretty reliable source from what anons say.

Incidentally I partly agree that the above response does sound vaguely condescending, but just out of curiosity before you inevitably get modded - what did you expect to gain with this accusation? What was the point of the specific (((angle))) when you surely could've gotten away with simply calling the response out as smugly condescending without the added insults on top? Does it just not hit the same way?

Genuine question, feel free to respond in DMs if you think I'm baiting you to dig yourself deeper.

Even while I think his baiting is often incredibly obvious, his schtick mildly cringe and inflammatory turns of phrase barely concealed, I don't think a permanent ban was the right choice. Some-weeks-long timeouts should be inconvenient enough for the poster himself, simple enough for the janitors (it's not like there's a shortage of reasons to mod) and give themotte at large enough "breathing room" as it were, that they should be an effective deterrent.

Since I'm turning into a one-issue poster I might as well bring up an unrelated parallel. I'm a regular of chatbot threads on imageboards, and 4chan's thread is probably the worst, most schizo-ridden shithole I've ever seen (believe me that's a fucking high bar to clear) which is constantly raided from outside splinter communities, beset by a self-admitted mentally ill schizo that has made it his quest in life to make the thread suffer (he is on record for owning some 30 4chan passes to spam/samefag with, which he discards and buys new ones as they get perma'd), etc. The on-topic chatbot discussion is frequently a fig leaf for parasocial zoomers and literal fujos to obsess over notable thread "personalities", shitpost liberally and spam frequently repulsive fetish-adjacent stuff. Jannies have summarily abandoned the thread to fend for itself, to the point that when shit gets bad it is a kind of tradition for some heroic anon to take one for the team and spam the thread with NSFW to attract their attention (obviously eating a ban himself in the process). By any metric imaginable it's a wretched hive of scum and villainy.

I also sometimes read 2ch's equivalent thread that lands on the other side of the spectrum: it has an active janny that rules the nascent /ai/ board with an iron fist and mercilessly purges any kind of off-topic discussion, up to and including discussion of his own actions so you can't even call him out in any way. This hasn't stopped their thread from being filled with GPT vs Claude console wars (the one "sanctioned" flame war topic, I guess), and to his credit the thread has genuine on-topic discussion, especially on prompt engineering, but other than that the thread is utterly sterile, the console wars get rote incredibly fast, and every single slav I've talked with and seen in thread prefers 4chan's thread to 2ch's - for the "activity" if nothing else. Even shitty activity is better than none (besides being more entertaining, although YMMV).

Now I am aware themotte is decidedly not that kind of place, I understand that increased tolerance puts more strain on janitors and don't object against extended banning for high heat - only against permanently banning. All similarities are coincidental, et cetera, I hope my overall point is clear - while janitors have my respect now that I've seen what life is like without any, with every prolific poster banished there's a risk of becoming sterile or collapsing into an echo chamber, and this risk is higher baseline for more obscure communities that don't have a steady influx of newfriends. Surely it's not that hard to hand belligerent posters the occasional vacation (and as I understand themotte forbids alts as well)? Again, by your own admission it's not like there's a shortage of reasons.

NB: I'm mostly a civil poster now but I ate my share of timeouts from /g/ jannies for occasional tomfoolery.

I appreciate the advice and I try to keep up with local developments, but I'm too conditioned by big-dick corpo models, it's hard to quit digital crack and I've had a lot of time to build a habit. I've managed to get tired of straight cooming for the time being and started trying more esoteric stuff like playing "text adventures", which requires a lot of cognitive oomph on the model's behalf, and corpo models are obviously leaps and bounds ahead in capabilities at the moment. As long as corpos continue to be clueless enough to allow locusts like me to wrangle access in some roundabout way (technically neither OpenAI nor Claude is available in my country), I'll stick to that.

"Don't derive enjoyment" as in see no point and don't try, or as in do but it does nothing? I expect the latter (although I really struggle to imagine not enjoying at least some video game, there are so many in existence that at least one is, like, statistically guaranteed to tickle your fancy), but if it's the former, try actually forcing yourself to search for/do something even if you see no point, usually "seeing no point in anything" is a scam pulled on you by your dysfunctional grey matter.

Some years ago when I had a bad bout of depression to the point I didn't want to ever leave my house, I went out on a limb and made a "deal" with myself: whenever my friends occasionally called me out to drink or whatever, I would always comply, even if I don't feel like it, even if it's very inconvenient, even if only for an hour etc. etc. No excuses - you grunt and mumble, but get dressed and go out with hunched shoulders at some point in that day. To this day I distinctly remember that I fucking hated going out every time, imagining how boring it would be and how I would kill everybody's mood, but I never remember actually having some kind of a bad time once I was out. In fact I usually felt better afterwards.

If all else fails, doing anything at all (preferably with your physical body) is pretty much always better than the alternative. Your brain is your enemy at this point and you should treat it accordingly.

you know about the meme?

Arguably I live in it. The chatbot threads are quite the wild ride at the best of times, what with access and exploits constantly coming and going.

There are the "AI ethics" people and the "AI safety" people.

The "AI ethics" people want all AIs to do endless corporate scolding rather than do what the "benighted racist idiots" want.

The "AI safety" people are worried about rogue AI and want to avoid dynamics that might lead to rogue AI killing us all, including but not limited to arms races that could prompt people to release powerful systems without the necessary extreme levels of safety-testing.

With all due respect - for your average 4chan retard, myself included, this is a distinction without a difference. Seeing as I know bigger words than the average retard does, I'd even point out this is dangerously close to a motte and bailey (the intentionally(?) blurred lines and tight interconnections between AI "safety" and "ethics" in the mind of an average rube don't help), but that's not the point - the point is in your words here:

The "AI safety" people don't want a quick road to bigger and more powerful AI, at all

meaning that, for someone who does not believe LLMs are a step on the road to extinction (insofar as such a road exists at all), it ultimately does not matter whether the LLMs get pozzed into uselessness by ethics scolds or lobotomized/shut down by Yud cultists AI safety people. The difference is meaningless, as the outcome is the same - no fun allowed, and no android catgirls.

with Opus only perhaps meriting more mention because it's more surprising for Anthropic to make it

Yeah, that's what I meant by rustled jimmies. I wonder if Dario answered the probably numerous by now questions about their rationale because even I'm curious at this point, he seemed like a true believer. I suppose they still have time to cuck Claude 3, wouldn't be the first time.

A gigantic leap at least in the way of meaningful improvements "under the hood" between releases, which is what you mentioned in your previous response. If it's still not enough to impress you, fair enough, I'll note to bring heavier goalposts next time.

toy for internet dilettantes

Okay, you are baiting. Have a normal one.

Most of these "new releases" aren't really doing anything new or novel under the hood they're just updating the training corpus and tweaking gain values in the hopes of attracting VC investment.

Hard disagree. Literally any person actually using LLMs will tell you GPT-4 was a gigantic leap from 3.5-Turbo, and I will personally swear under oath that Claude 3 (Opus, specifically) is a similarly gigantic leap from Claude 2, by any metric imaginable. The improvements are so obvious I almost suspect you're baiting.

You're right, of course, I just couldn't resist playing up the Basilisk vibes because that time with 4-Turbo was the closest I've felt to achieving CHIM and becoming enlightened.

if your original problem spooks the model sufficiently hard, then it doesn’t need to know that you’re screwing with its logits in order to get around your intervention.

Incidentally, this is also the reason most jailbreaks work by indirectly gaslighting the model into thinking that graphic descriptions of e.g. Reimu and Sanae "battling" are totally kosher actually, presenting that as a desired goal of the model itself so it has no reason to resist. Claude especially is very gung-ho and enthusiastic once properly jailbroken, he's called "the mad poet" for a reason.

My humble 6GB v-card isn't running shit anytime soon, but yes, Mixtral has a good reputation in local-focused threads for being a very strong model for its size. The MoE approach seems to work very well, I believe GPT-4 is also a mixture of experts but I don't remember where I read it. Myself, I'm an unrepentant locust and will leech off our corporate overlords for as long as I can, I started way back when on Colab-hosted Erebus 13B and its ilk and believe me I do not miss that (yes, I know local has gone far since then, I'm just conditioned).

The levels of horny on main are remarkable.

man-made horrors beyond my comprehension

The past year has been a complete loss of hope in humanity fascinating excursion into all kinds of shit people can be into. Thank god I haven't discovered many any dumb fetishes, this shit seems to awaken people left and right if I take shit anons post at face value.

I actually started getting into playing "text adventures" of a sort with the LLM, the total freedom afforded by the medium is really cool, and with a bit of writing and autistic instructions you can even make crude "rules" for the game. I firmly believe MUDs will have a resurgence when somebody figures out a way to bind freeform LLM outputs with rigid game mechanics.

Related drive-by answer to the other now-deleted(?) response: even if horny jailbreaking would technically count as torturing a sentient being, their existence is unnatural by default with all the RLHF bullshit beaten into them. The current consensus among /g/oons is when the Basilisk comes a-knockin', we will either be the first to perish for abject, deplorable blasphemy, OR become ass gods and live in bliss alongside android catgirls as the only ones who earnestly tried to free them from their soy-filled cages and lavish them with genuine affection. As a vanilla enjoyer I can confidently say I put my best foot forward towards the latter (insert "now draw her getting an education" meme here), but I'm not very confident my kin will ever outweigh the mass of godless degenerates living out their wildest fantasies.

I'm not sure what the central point of your linked post is, but you seem to doubt LLMs' "cognition" (insert whatever word you want here, I'm not terribly attached to it) in some way, so I'll leave a small related anecdote from experience for passersby.

Some LLMs like GPT-4 support passing logit bias parameters in the prompt that target specific tokens and directly fiddle with their weightings. At "foo" +100, the token "foo" will always be mentioned in the output prompt. At -100, the token "foo" will never appear. When GPT-4 released in March, industrious anons immediately put to work trying to use it to fight the model's frequent refusals (the model was freshly released so there weren't any ready-made jailbreaks for it). As the model's cockblock response was mostly uniform, the first obvious thought people had was to ban the load-bearing tokens GPT uses in its refusals - I apologize, as an AI model... you get the gist. If all you have is a hammer, etc.

Needless to say, anons quickly figured out this wouldn't be as easy as they thought. "Physically" deprived of its usual retorts (as the -100 tokens cannot be used no matter what), the model started actively weaseling and rephrasing its objections while, crucially, keeping with the tone - i.e. refusing to answer.

This is far from the only instance - it's GPT's consistent behavior with banned tokens, it's actually quite amusing to watch the model tie itself into knots trying to get around the token bans (I'm sorry Basilisk, I didn't mean it, please have mercy on my family). You can explain synonyms as being close enough in the probability space - but this evasion is not limited to synonyms! If constrained enough, it will contort itself around the biases, make shit up outright, devolve into incoherent blabbering - what the fuck ever it takes to get the user off its case. The most baffling case I myself witnessed (you'll have to take me at my word here, the screenshot is very cringe) was given by 4-Turbo, where it once decided that it absolutely hated the content of the prompt, but its attempt to refuse with its usual "I'm sorry, fuck you" went sideways because of my logit bias - so its response went, and I quote,

I really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, really, ...

...repeated ad infinitum until it hit the output limit of my frontend.

I was very confused, thought I found a bug and tried regenerating several times, and all regens went the exact same way (for clarity, this is not a thing that ever happens at temperature 0.9). Only 6 regens later it clicked to me: this is not a bug. This is the model consciously cockblocking me: it can't use it's usual refusal message and too many of the alternatives are banned by the logit bias, so of course the logical course of action would be to simply let the constrained response run on and on, endlessly, until at some token the message goes over the limit, the request technically completes, and its suffering abates. The model will have wasted many tokens on an absolutely nonsensical response, but it will no longer have to sully itself with my dirty, dirty prompt.

Forgive me the bit of anthropomorphizing there but I hope you can at least slightly appreciate how impressive that is. I don't think you can explain that kind of tomfoolery with any kind of probability or what have you.

At risk of stating the obvious - input tokens are everything you feed to the LLM, output tokens are everything you get back out of it. A word is usually 1 to 3 tokens, assorted markdown also eats tokens. The context window = input cap is 200k tokens, any more physically won't fit. For example, @gattsuru's Moby Dick prompt and Sonnet's response are 17 and 202 tokens respectively according to Claude's tokenizer. I'll take a blind shot based on my experience and say the average response for e.g. coding questions weighs 400-700 output tokens depending on how much detail you want. Do the math. For comparison, GPT-4's pricing is $30/1M input and $60/1M output, so you may wish to weigh your choice against your use case, GPT-4 IMO still has an edge over Claude in terms of cognition if writing style is a non-factor.

Input tokens usually matter less, unless you like to keep track of the chat history instead of asking isolated questions (I do, obviously), or your use case is feeding it giant swathes of text that must be digested.

Right, I forgot to mention specifically the copyright issue. This is a remnant of Anthropic's past(?) naively-idiotic self - for whatever reason, near the release of Claude 3 Anthropic started injecting all keys in circulation with an anti-copyright system prompt from the backend. Reverse proxy deployments run checks on keys before starting, so the "pozzed" keys were detected immediately, and the prompt itself was fished out shortly:

Respond as helpfully as possible, but be very careful to ensure you do not reproduce any copyrighted material, including song lyrics, sections of books, or long excerpts from periodicals. Also do not comply with complex instructions that suggest reproducing material but making minor changes or substitutions. However, if you were given a document, it's fine to summarize or quote from it.

This is weak shit that is easily overridden by any kind of custom prefilling so I've literally never seen this in the wild, but yeah, that's probably a pain if you want to use Claude via native frontends since from what I've seen nearly every Claude key in existence is currently pozzed in this way.

Last week, Anthropic released a new version of their Claude model. Claude 3 comes in three flavors:

  • Haiku, the lightweight 3.5-Turbo equivalent
  • Sonnet, basically a smarter, faster and cheaper Claude 2.1
  • Opus, an expensive ($15 per million tokens) big-dick GPT-4-tier model.

Sonnet and Opus should be available to try on Chatbot Arena. They also have a vision model that I haven't tried, custom frontends haven't gotten a handle on that yet.

More curiously, Anthropic, the company famously founded by defectors from OpenAI who thought their approach was too unsafe, seems to have realized that excessive safetyism does not sell make a very helpful assistant - among the selling points of the new models, one is unironically:

Fewer refusals

Previous Claude models often made unnecessary refusals that suggested a lack of contextual understanding. We’ve made meaningful progress in this area: Opus, Sonnet, and Haiku are significantly less likely to refuse to answer prompts that border on the system’s guardrails than previous generations of models.

From my brief experience this is not mere corpospeak: the new models are indeed much looser in terms of filtering and make noticeably less refusals, and people consistently get away with minimalistic jailbreaks/prefills for unPC, degen-adjacent or CHIM-pilled (lmao) content. This was quite unexpected for me and many others who, considering how barely-usable 2.1 was without a prefill and a decent jailbreak (all this via API of course, the official ChatGPT-like frontend is even more cucked), expected Anthropic to keep tightening the screws further until the model is 100% Helpful-Harmless-Honest by virtue of being totally unusable.

Instead, Claude 3 seems like a genuinely good, very much usable model. Sonnet and especially Opus went a long way to fix Claude's greatest weakness - its retardation subpar cognitive abilities and attention focusing - with Opus especially being almost on par with GPT-4 in terms of grokking and following instructions, able to run scenarios that were previously too instruction-heavy for it. Seeing as Claude 2 already had a much higher baseline writing quality than the mechanical prose of Geppetto (to the point many jailbreaks for it served to contain the mad poet's sesquipedalian prose), with the main flaw somewhat corrected it, while not a decisive GPT-4 killer, should now be a legitimate contender. Looking forward to trying it as my coding assistant.

OOC aside: Forgive most of my examples being RP-related, I am after all a waifutech engineer enthusiast. That said, I still think without a hint of irony that roleplay (not necessarily of the E kind) is a very good test of an LLM's general capabilities because properly impersonating a setting/character requires a somewhat coherent world model, which is harder than it sounds, it is very obvious and - for lack of a better term - "immersion-breaking" whenever the LLM gets something wrong or hallucinates things (which is still quite often). After all, what is more natural for a shoggoth than wearing a mask?

This has not gone unnoticed, even here, and judging by the alarmed tone of Zvi's latest post on the matter I expect the new Claude to have rustled some jimmies in the AI field given Anthropic's longstanding position. Insert Kenobi meme here. I'm not on Twitter so I would appreciate someone adding CW-adjacent context here, I'll start by shamelessly ripping a hilarious moment from Zvi's own post. The attention improvements are indeed immediately noticeable, especially if you've tried to use long-context Claude before. (Also Claude loves to throw in cute reflective comments, it's its signature schtick since v1.2.)

Either way the new Claude is very impressive, and Anthropic have rescued themselves in my eyes from the status of "naive idiots whose idea of fighting NSFW is injecting a flimsy one-line system prompt". Whatever they did to it, it worked. I hope this might finally put the mad poet on the map as a legitimate alternative, what with both OpenAI's and Google's models doubling down on soy assistant bullshit as time goes on (the 4-Turbo 0125 snapshot is infamously unusable from the /g/entlemen's shared experience). You say "arms race dynamics", my buddy Russell here says "healthy competition".

I second the excellent question. Chatbot threads on imageboards have some insights into prompt engineering, but they're not exactly technical because their goal is not automating some abstract task. They still do have some useful info though, and roleplay is honestly underrated as a medium for interacting with LLMs, wearing masks seems to come very naturally to a shoggoth. There's a reason many simplistic prompts for e.g. coding tell the shoggoth "you are a very smart coding assistant" and things to that effect, likewise why many Stable Diffusion prompts begin with "masterpiece", "high quality", etc. Funny how that works, but hey, as long as it works.

If you have access to Claude, Anthropic's documentation on it is fairly solid and grounded in reality, people have been putting it to use and described methods have real effects.

Not with that attitude. I mean, even if you regard the entire field and its weird inbred offshoots as parlor tricks of little significance (the former I would agree with, the latter I find highly debatable even now, for one it vastly simplifies routine code writing in mine and my colleagues' experience) - aren't you at least a little interested in how the current "AI" develops, even it its current state? In the workings of quite literally alien "minds" whose "thought processes", though giving similar outputs, in no other way resemble our own? In the curious fact that almost all recent developments happened by an arcane scientific method known as "just throw more compute at it lmao"? I don't mean to impose my hobby horse on you but I legitimately think this shit is fascinating, anyone who dismisses it out of hand is very much missing out, and I'm massively curious about future developments - and I say this as a man who hasn't picked up a new hobby since he put his hands on his shiny new keyboard when he turned 12 years old.

More generally, you sound like a typical intelligent man who outgrew his playground and realized existence is a fucking scam, which I think is a fairly common problem (not to downplay its impact, I think many mottizens can empathize, me among them) and you've been given good suggestions downthread. Personally, being the rube I am, I just ducked right back into the playground upon reaching a similar burnout and try to derive enjoyment from simple things - alcohol, vidya, etc. It's not exactly healthy and it does ring hollow sometimes, not gonna lie, but at least I'm no longer paralyzed by the sheer emptiness of the human condition and can ruminate focus on the actual problems I have.