It's a bit of a distraction and cjet79's willing to accept the standard story, but I'll caveat that the evidence is a lot weaker than common knowledge suggests. A sizable number of countries have either brought in new firearms regulation or clamped down significantly harder in the post-1980 range, where we have pretty good statistics. That's part of why the Australia example always bugs me: the decrease in total suicides didn't actually persist, and there were offsetting increases in non-firearm suicides.
((Is this in contradiction to the one case of gas ovens? Sure! ... but exactly how sure are we that pre-1970s 'suicides' were all intentional suicide?))
As far as I know, Grok 4 isn't even public enough to try to run it on consumer hardware; Grok 1 was the last Grok model with publicly released weights, and those remain pretty hard to run locally even with hefty quantization.
You're definitely stuck with the 'not your weights, not your waifu muse' problem.
(Mostly) uncensored LLMs have been around for a while, but most of the first generations struggled very badly when writing more than a couple paragraphs at a time -- very prone to throwing in random new characters, looping events, physical inconsistencies, and so on. That's part of why so many early tools focused on character-based roleplay: LLaMA might go off the deep end, but in roleplay you're expecting to direct it back toward your goal anyway, and it's not too disruptive to have it 'reroll' if it goes completely off the rails.
More recent tools, including pretty much all of DeepSeek's models, can handle short fiction, but they're either censored, have an uncensored model whose web interfaces are mostly censored, and/or can't be run at reasonable speeds on consumer-level hardware. That's why your link spells out the steps to introduce a jailbreak. Those jailbreaks can usually break out of some censorship (until they're countered), but at the cost of often making the models increasingly unhinged or incoherent, and they're also just a pain because of token limits. And there's an argument that some of this censorship breaks the models in weird ways that might persist even when a prompt is jailbroken.
((Despite that, at least in furry circles more of the recommendations have just been to use DeepSeek through some of the less-heavily-censored providers.))
By contrast, Grok 3 and 4 will just do it. Upload a file with some setting, character, and tone information (cw: furry nsfw 'lore', in the magical realm sense, implications of M/M, M/F, and M/tM), give it a one-sentence description of the scene you want and some tags, and it'll quite happily throw out a thousand-plus words, following almost all the constraints I gave it, with a clear rising action and climax (hurr hurr). It can set up part of the scene at the start of a work and then call back to it a couple hundred words later without confusing details, and there are some obvious logical paradoxes that it handles reasonably well.
You can get output without em-dashes! It even managed a couple setting-appropriate turns of phrase that don't show up on Google and are surprisingly coherent for the characters it did make (eg "Survive? Sure. Thrive? That’s on you, pup" isn't anything to write home about, but aiming it at a male gray wolf working in an idealized service-sector job on the day of a rush is pretty fitting).
It's still not great or even good writing, even grading on the massive curve that is smut. It's unsurprising that it fails to stand up to real greats like Rukis Croax or Robert Baird, that it can't read my mind about what the characters 'should' be like, that it isn't anything like the story I did write for the same setting and prompt, or that it doesn't know specialized names for kinks. But the character tones are a little too samey; the pacing is entirely wrong for smut aimed at men, way too fast for anything aimed at women, and finishes too quickly; it keeps talking about eye colors in a way that comes across as Mary Sue for my demographic; the viewpoint is way too omniscient; and it either doesn't understand how to describe a character so as to demonstrate attraction from the viewpoint character, or doesn't realize that it should do so as part of written smut.
((It also can't count; I haven't had much luck getting more than 1.5k words per prompt, and Grok 4 will insist that it hit a requested three thousand words, even though it was definitely struggling more with pacing and paragraph formatting as it got toward that point. I haven't messed around much with having it write full stories before, though, and part of the weirdness is probably my style recommendations.))
I dunno how many of those problems are things I just need to prompt it better, and how many are things that it can't fix even if prompted, or that could be fixed with better prompting but I don't have the words to actually write down. But they're the sorta problems that weren't anywhere close to my 'showstopper list' just a couple years ago.
And they'll do it in a couple minutes.... For as long as you trust xAI.
See here for a SFW (or at least not-smut) example with and without em-dashes. Some NSFW outputs are available on request, but it's bi furry smut, so it's probably not going to be interesting or even readable to most people here.
((though I'll caveat that for fairly vanilla stuff chatGPT works, and actually does a bit better with character speech, and sometimes even offers to make it erotic, if not necessarily in line with the characters I gave it. But try to get to the smut part and it fades-to-black or drops a euphemism for the actual sex scene.))
Starting in 2000-2004, the furry fandom became visible to people outside the fandom to a much greater degree than before. This also coincided with the rise of a number of social media sites with a large focus on pointing out and sneering at people they thought were weird, the most famous being SomethingAwful.
This went about as well as a house on fire.
I don't know much about the SA-internal side of things, outside of there just being several purges of people suspected of being furs from their forums (tbf, SA purges people from the forums as a fundraising effort, or just because Lowtax thought it was funny). But from the furry side, it was pretty common for fairly small furry spaces to just randomly get swarmed by twenty trolls out of the blue.
Some of this was tongue-in-cheek or self-deprecating. But a lot of it was just point-and-look-at-the-weirdos, and sometimes surprisingly mainstream. There's a Daily Show skit called To Boldly Gay where furries were a good part of the punchline; I'm not going to link it because it didn't censor the smut sketch it was making fun of, and that was cable television. CSI's Fur and Loathing is probably the most infamous.
(Sexual politics of the time, given the broad gay-or-gay-adjacent bits of the fandom, probably had an impact, too.)
One of the joking-not-joking responses is that while the media reporting was probably just the standard Jerry Springer stuff, the trolls, at least, sure seemed to spend a lot of time and attention scrolling through art or writing that supposedly made them violently ill. And just as the then-prominent gay marriage debate proposed that the people most strongly opposed to gay marriage were really closeted, after a few high-profile (if not very-well-proven) examples, a lot of furries took some cases of user overlap between CrushYiffDestroy (a furry-self-critical forum) and the SomethingAwful forums as evidence that many of the others were really using the movement to summon their own army, or to litigate disagreements in a more favorable environment under an alt.
With the caveat of Rasmussen and N=1000.
Yeah. There's been a long-standing tension over AAQCs not needing to be correct so long as they're positive contributions to the community. This at least looks like a serious if flawed attempt to discuss a complicated topic rather than active trolling, so it's far from the worst version of that issue, but the lack of engagement with even the most overt criticism of the most central claims makes it really frustrating.
On one side, he says those things LLMs can do are only "tricks, interesting and impressive moves that fall short of the massive changes the biggest firms in Silicon Valley are promising"; on the other, he specifically challenges whether AI "can translate, diagnose, teach, write poetry, code, etc." (and then chess, and the claim that they reason).
Dissolve the definitions, and what's left? Are LLMs competent if they can only do tricks that cause no massive changes? Are they incompetent if they only get 95% of difficult test questions right and you sometimes have to swap models to deal with a new programming language? Would competence require 100% correctness on all possible questions in a field (literally, "The problem with hallucination is not the rate at which it happens but that it happens at all")?
I'm sure deBoer's trying to squeeze something out, but is there any space where Mounk could possibly agree with him, here? Not just on the question of what a specific real-world experiment's results would be, but even on what a real-world experiment would need to look like?
That's probably not perfectly charitable -- I'll admit I really don't like deBoer, and there's probably a better discussion I could have about how his "labor adaptation, regulatory structure, political economy" actually goes if I didn't think the man was lying. But I don't think it's a wrong claim, and I don't think it's an unfair criticism of the story he's trying to tell.
Raceplay is a little more specific than just liking specific race(s) in your porn -- it's usually a sub/dom sorta thing involving racial stereotypes or slurs at the low end, and the more controversial bits tend to get into things like racial slavery roleplay or people being 'corrupted' or 'converted' to a 'lesser' or 'superior' race.
((You'd think this isn't something that would have a furry equivalent, but surprise! ... still extremely marginal even over there, though.))
The Texas law hydroacetylene is mentioning is Texas HB1181, which puts some potentially high fines on commercial websites where more than one-third of the material is "harmful to minors" and that don't have age-verification processes (or that don't post certain notices, though that prong is still on hold and unlikely to survive legal scrutiny). While there's some vagueness to how the math happens, the actual definition of 'harmful to minors' is pretty explicitly limited to nudity and sexual acts.
I don't like the law, and I am skeptical both in the "I don't think a sixteen-year-old is going to be hurt by seeing a boob" sense and "I'm not willing to burn down the commons over it" sense. It's certainly driven some censorship. But I don't think it's responsible for the examples people are using here.
Itch.io re-added search and recommendations for NSFW games that had been deindexed (if they are set as free). As far as I can tell, only a small number of games were completely removed from the service, but they've stayed removed for new purchase (or download).
It's certainly possible; even the bit where the Texas government swears in court that it won't bring these charges against those companies runs into the trouble that the Texas government includes Paxton. But it's even more common for people to panic when a state government has been sending nice letters informing them of their legal requirements and mentioning civil fines and criminal penalties.
And I'm skeptical that nVidia lacks lawyers who can read.
((I will admit one silver lining; we might get fewer NordVPN ads. But as tempting as that is, I'd rather keep my principles.))
With that aside, I'm not sure how other people see LLMs tackling problems of this complexity and then claim they're not reasoning.
Both the complexity, and also just the novelty. These LLMs were definitely trained on some stack-based languages, both historic ones like Forth and modern assembly, but as similar as those might be conceptually, the implementation philosophy and even the simple names are drastically different. And while there's a tiny number of HexCasting examples on the open web or on Discord, they universally take different approaches, and some of them (like my screenshot above) simply can't be read by any parser that doesn't already have a good understanding of the language: a completed Jester's Gambit, Rotation Gambit, and Rotation Gambit II are visually identical. And, of course, you don't have to write a spell from top-left-going-right-then-down like an English paragraph.
That's a fun example because it's got an easy third-party evaluation, but you don't have to make up a programming language to do this sorta thing. These problems don't have to be hard; just by being new they undermine a lot of these arguments, and these writers could run them and just don't seem interested in it.
I'm not sure why you're (presumably) paying for Grok 4. Grok 3 was genuinely impressive, and somewhat noticeably better than the competition at launch. Not the case here, I'm afraid.
Yeah, I'm a little surprised. I didn't expect any of the LLMs to handle this great -- I'm kinda amazed that ChatGPT could do as well as it did, and even some of my gripes are probably downstream of the question being underspecified -- but the level of hallucination from Grok4 is disappointing, and I could see the arguments for dropping it.
Partly a politics and work-politics thing. My boss is a big booster of everything Musk, so there's been a lot of value, separate from the LLM's specific capabilities, in both knowing the thing and knowing the limits of Grok 4. And while I don't particularly trust xAI, I neither trust nor like OpenAI.
Some of it's use case. I do find even Grok 3 more effective for writing and reviewing writing than the ChatGPT equivalents. Compare this to this, or, at the risk of touching on erwgv3g34's topic today, this to this (cw: discussion of an excerpt from nsfw m/m/f text; there's no actual sex or even nudity, but it's very, very clearly smut).
They're all sycophantic, and even where 4o is better at catching spelling and grammar errors, on larger consistency or coherence or theme questions I've had a hell of a time getting any of the early ChatGPT models to really push back with anything deeper than the Your First Writing Advice that amadanb's criticized.
Of course, I also haven't experimented that hard with them, or with the newer paid ChatGPT models. I probably do need to do a deeper and more serious evaluation; I've also just been lazy about actual hard comparisons for fields with strict performance results. My work programming goes into stuff that I'm either unwilling to upload to an outside service or is large enough in scale that models have had problems maintaining logic (or both), while my hobby programming or teaching is mostly simple enough that Grok3 or 4mini can handle it.
The Texas law is bad, but it only applies where at least a third of the content on a site falls under the 'harmful to minors' banner. Even accepting how poorly that calculation is defined, there's little chance it'd apply to sites like itch.io or X/Twitter, and zero chance of it applying to Steam. It wouldn't apply to payment processors at all.
Outright conversion's edgy enough that I can get why it triggers a lot of censorship, but even fairly soft orientation play tends to run headfirst into problems with other models. I'm not sure whether that's downstream of the political side of things, or just a software problem with categories.
(caveat: I'm absolutely not clicking that link, so I don't know and don't want to know how edgy that Eva ai output is.)
Which is kinda funny. On one hand, it is a real outlier kink (eg, 1.8k submissions on e621, <500 on AO3)... when spelled out. As a mere implication, though, it's endemic everywhere from gay4pay to girl-on-girl-plus-cameraman, and some forms are such a cliche in fanfic spaces (cw: tvtropes) that it's baked into even pretty mainstream fandom-originated works.
Pratchett's an absolute blast. Hope you enjoy his books.
That's a bit weird an approach -- you're drawing 20 Hermes Gambits rather than having the code recurse, and the Gemini Decomposition → Reveal → Novice's Gambit could be simplified to just Reveal -- but it does work and fulfills the requirement. Can run it in this IDE if anyone wants, though you'll have to use the simplified version since Novice's Gambit (and Bookkeeper's Gambit) isn't supported there, but the exactly-as-chatGPT'd version does work in-game (albeit an absolute pain to draw without a Focus).
That's kinda impressive. Both Rotation Gambit II and Retrospection are things I'd expect LLMs to struggle with.
That's a fair point, and it does seem to work with Grok, as does giving it only one web page and requesting that it not use others. Still struggles, though.
That said, a lot of the logic 'thinking' steps are things like "The summary suggests list operations exist, but they're not fully listed due to cutoff.", getting confused by how Consideration/Introspection work (as start/end escape characters), or recommending Concat Distillation, which doesn't exist but is a reasonable (indeed, the code) name for Speaker's Distillation. So it's possible I'm more running into issues with the way I'm asking the question, such that Grok's research tooling is preventing it from seeing the necessary parts of the puzzle to find the answer.
Most of Azad's slums wouldn't be out of place in an Ayn Rand novel, but the treatment of medical care is one of the big tells, especially for when and where Player of Games was written, as is the drone informing Gurgeh that "it all boils down to ownership, possession; about taking and having." That's not fundamentally leftist, but it's still also not how the red tribe equivalent would put things, or even universal among the left side of the branch (contrast, for example, Pratchett's "Evil starts when you begin to treat people as things").
Agreed that it's still pretty subtle and a fairly reasonable extrapolation of the technical assumptions Banks is making for the world he wants to build.
And oh, boy, do I have a take on Moore.
HexCasting is fun, if not very balanced.
It has a stack-based programming-language system based on drawing Patterns onto your screen over a hex-style grid, where each Pattern either produces a single variable on the top of the stack, manipulates parts of the stack to perform certain operations, or acts as an escape character, with one off-stack register (called the Ravenmind). You can keep the state of the grid and stack while not actively casting, but because the screen grid has limited space and the grid is wiped whenever the stack is empty (or on shift-right-click), there are some really interesting early-game constraints where quining a spell or doing goofy recursion allows some surprisingly powerful spells to be made much earlier than normal.
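For anyone who hasn't played with stack machines, a minimal sketch of that evaluation model in Java may help; Pattern, State, and the escape-mode handling here are my own illustration of the concept, not the mod's actual internals:

import java.util.Deque;
import java.util.List;

// Rough sketch of a HexCasting-style evaluator; names and structure are
// illustrative guesses at the concept, not the mod's real code.
interface Pattern {
    void apply(Deque<Object> stack, State state);
    default boolean endsEscape() { return false; } // true for Retrospection-style patterns
}

class State {
    Object ravenmind;       // the single off-stack register
    boolean introspecting;  // set by Introspection, cleared by Retrospection
}

class Evaluator {
    void cast(List<Pattern> patterns, Deque<Object> stack, State state) {
        for (Pattern p : patterns) {
            if (state.introspecting && !p.endsEscape()) {
                stack.push(p); // escape mode: the pattern itself becomes data on the stack
            } else {
                p.apply(stack, state); // push a value or manipulate the stack
            }
        }
    }
}

The code/data duality -- patterns pushed as data can later be executed -- is what makes the quining tricks possible.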
Eventually, you can craft the Focus and Spellbook items that can store more variables from the stack even if you wipe the grid, and then things go off the rails very quickly, though there remain some limits since most Patterns cost amethyst from your inventory (or, if you're out of amethyst and hit a certain unlock, HP).
Like most stack-based programming, it tends to be a little prone to driving people crazy, which fits pretty well with the in-game lore for the magic.
That specific spell example just existed to show a bug in how the evaluator was calculating recursion limits. The dev intended a limit of 512 recursions, but had implemented two (normal) ways of recursive casting. Hermes' Gambit executes a single variable from the stack, and each Hermes' added one to the recursion count as it was executed. Thoth's Gambit executes each variable from one list over a second list, and didn't count those multiplicatively; I think it was only adding one to the count for each variable in the second list? Since a list only took 1 + ListCount slots out of the stack's 1024-entry limit, you could conceivably hit a quarter-million recursions without hitting the normal block from the limit.
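To make the miscount concrete, here's a hedged reconstruction in Java; the counter placement and helper names are my guesses from the described behavior, not the mod's actual source:

import java.util.List;

// Illustrative reconstruction of the recursion-limit bug; not the mod's real code.
class RecursionBug {
    static final int LIMIT = 512;
    static int recursions = 0;

    static void execute(Object iota) { /* run a single pattern or embedded list */ }

    // Hermes' Gambit: one executed iota, one tick of the counter -- counted correctly.
    static void hermesGambit(Object iota) {
        if (++recursions > LIMIT) throw new IllegalStateException("recursion limit");
        execute(iota);
    }

    // Thoth's Gambit: runs the whole program once per datum, but only ticks the
    // counter once per datum no matter how many iotas the program contains. A
    // ~500-iota program mapped over a ~500-element list executes a quarter-million
    // iotas while the counter only reads ~500.
    static void thothGambit(List<Object> program, List<Object> data) {
        for (Object datum : data) {
            if (++recursions > LIMIT) throw new IllegalStateException("recursion limit");
            for (Object iota : program) {
                execute(iota); // may itself invoke more Thoth's or Hermes' casts
            }
        }
    }
}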
In Java-flavored pseudocode, the spell itself is about equivalent to:
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class Spell {
    static double register = 1; // the Ravenmind

    static void function(double a) {
        double b = register;   // read the register
        System.out.println(b); // print the current value
        b += a;
        register = b;          // write it back
    }

    public static void main(String[] args) {
        int max = (int) Math.pow(10, 3); // 1000, written as 10^3 in-game
        double start = 1;
        List<Double> inputs = new ArrayList<>(Collections.nCopies(max, start));
        for (double val : inputs) {
            function(val);
        }
    }
}
Very ugly, but the language is intentionally constrained so you can't do a lot of easier approaches (eg, you have to declare 10^3 because the symbol for 1000 is so long it takes up most of the screen, you don't have normal for loops so that abomination of a list initialization is your new worst enemy best friend, every number is a double).
Not that big a deal when you're just printing to the screen, but since those could (more!) easily have been explosions or block/light placements or teleportations, it's a bit scary for server owners.
((In practice, even that simple counter would cause everyone to disconnect from a remote server. Go go manual forkbomb.))
For some other example spells, see a safe teleport, or a spell to place a series of temporary blocks the direction you're looking, or to mine five blocks from the face of the block you're looking at.
(Magical)PSI is a little easier to get into and served as the inspiration for HexCasting, but it has enough documentation on reddit that I can't confidently say it's LLM-training proof.
Huh. Uploading just the Patterns section of the HexBook webpage and disabling web search looks better even on Grok 3, though that's just a quick glance and I won't be able to test it for a bit.
EDIT: nope, several hallucinated patterns on Grok 3, including a number that break from the naming convention. And Grok4 can't have web search turned off. Bah.
I don't want to speak on 'intelligence' or genuine reasoning or heuristics and approximations, but when it comes to going outside the bounds of their training data, it's pretty trivially possible to take an LLM and give a problem related to a video game (or a mod for a video game) that was well outside of its knowledge cutoff or training date.
I can't test this right now, it's definitely not an optimal solution (see uploaded file for comparison), and I think it misinterpreted the Evanition operator, but it's a question that I'm pretty sure didn't have an equivalent on the public web anywhere until today. There's something damning in getting a trivial computer science problem either non-optimal or wrong, especially when given the total documentation, but there's also something interesting in getting one like this close at all with such a minimum of information.
VRChat (and most other social virtual reality worlds) allow people to choose an avatar. At the novice user level, these avatars just track the camera position and orientation, provide a walking animation, and have a limited number of preset emotes, but there's a small but growing industry for extending that connection. Multiple IMUs and/or camera tricks can track limbs, and there are tools used by more dedicated users for face, eye, and hand tracking. These can allow an avatar's general pose (and sometimes down to finger motions) to match that of the real-world person driving it, sometimes with complex modeling going on where an avatar might need to represent body parts that the person driving it doesn't have.
While you can go into third-person mode to evaluate how well these pose estimates are working in some circumstances, that's impractical for a lot of in-game use, both for motion-sickness reasons and because it's often disruptive. So most VRChat social worlds will have at least one virtual mirror, usually equivalent to at least an eight-foot-tall-by-sixteen-foot-wide space, very prominently placed to check things like IMU drift.
Some people like these mirrors. Really like them. Like spend hours in front of them and then go to sleep while in VR-level like them. This can sometimes be a social thing where groups will sit in front of a mirror and have discussions together, or sometimes they'll be the one constantly watching the mirror while everyone else is doing their own goofy stuff. But they're the mirror dwellers.
I'm not absolutely sure whatever's going on with them is bad, but it's definitely a break in behavior that was not really available ten years ago.
You used to get this sorta thing on ratsphere tumblr, where "rapture of the nerds" was so common as to be a cliche. I kinda wonder if deBoer's "imminent AI rupture" follows from that and he edited it, or if it's just a coincidence. There's a fun Bulverist analysis of why religion was the focus there and 'the primacy of material conditions' from deBoer, but that's even more of a distraction from the actual discussion matter.
There's a boring sense where it's kinda funny how bad deBoer is at this. I'll overlook the typos, because lord knows I make enough of those myself, but look at his actual central example, that he opens up his story around:
“The average age at diagnosis for Type II diabetes is 45 years. Will there still be people growing gradually older and getting Type II diabetes and taking insulin injections in 2070? If not, what are we even doing here?” That’s right folks: AI is coming so there’s no point in developing new medical technology. In less than a half-century, we may very well no longer be growing old.
There's a steelman of deBoer's argument, here. But the one he actually presented isn't engaging, in the very slightest, with what Scott is trying to bring up, or even with a strawman of what Scott was trying to bring up. What, exactly, does deBoer believe a cure to aging (or even just a treatment for diabetes, if we want to go all tech-hyper-optimism) would look like, if not new medical technology? What, exactly, does deBoer think of the actual problem of long-term commitment strategies in a rapidly changing environment?
Okay, deBoer doesn't care, and/or doesn't even recognize those things as questions. It's really just a springboard for I Hate Advocates For This Technology. Whatever extent he's engaging with the specific claims is just a tool to get to that point. Does he actually do his chores or eat his broccoli?
Well, no.
Mounk mocks the idea that AI is incompetent, noting that modern models can translate, diagnose, teach, write poetry, code, etc. For one thing, almost no one is arguing total LLM incompetence; there are some neat tricks that they can consistently pull off.
Ah, nobody makes that claim, r-
Whether AI can teach well has absolutely not been even meaningfully asked at necessary scale in the research record yet, let alone answered; five minutes of searching will reveal hundreds of coders lamenting AI’s shortcomings in real-world programming; machine translation is a challenge that has simply been asserted to be solved but which constantly falls apart in real-world communicative scenarios; I absolutely 100% dispute that AI poetry is any good, and anyway since it’s generated by a purely derivative process from human-written poetry, it isn’t creativity at all.
Okay, so 'nobody' includes the very person making this story.
It doesn’t matter what LLMs can do; the stochastic parrot critique is true because it accurately reflects how those systems work. LLMs don’t reason. There is no mental space in which reasoning could occur.
This isn't even a good technical understanding of how ChatGPT, as opposed to just the LLM, works, and even if I'm not willing to go as far as self_made_human for people raising the parrots critique here, I'm still pretty critical of it. But the more damning bit is where deBoer is either unfamiliar with or choosing to ignore the many domains in favor of One Study Rando With A Chess Game. Will he change his mind if someone presents a chess-focused LLM with a high Elo score?
I could break into his examples and values a lot deeper -- the hallucination problem is actually a lot more interesting and complicated, and questions of bias are usually just smuggling in 'doesn't agree with the writer's politics' but there are some genuine technical questions -- but if you locked the two of us in a room and only provided escape if we agreed, I still don't think either of us would find discussing it with each other more interesting than talking to the walls. It's not just that we have different understandings of what we're debating; it's whether we're even trying to debate something that can be changed by actual changes in the real world.
Okay, deBoer isn't debating honestly. His claim about the New York Times fact-checking everything is hilarious, but he links to a special issue about which he literally claims "not a single line of real skepticism appears" that has as its first headline "Everyone is Using AI for Everything. Is That Bad?" and includes the phrase "The mental model I sometimes have of these chatbots is as a very smart assistant who has a dozen Ph.D.s but is also high on ketamine like 30 percent of the time". He tries to portray Mounk as outraged by the "indifference of people like Tolentino (and me) to the LLM “revolution.”" But look at Mounk's or Tolentino's actual pieces, and there are actual factual claims that they're making, not just vague vibes they're bouncing off each other; the central criticism Mounk has is whether Tolentino's piece and its siblings are actually engaging with what LLMs can change rather than complaining about a litany of lizardman evils. (At least deBoer's not falsely calling anyone a rapist, this time.)
((Tbf, Mounk, in turn, is just using Tolentino as a springboard; her piece is actually about digital disassociation and the increasing power of AIgen technologies that she loathes. It's not really the sorta piece that's supposed to talk about how you grapple with things, for better or worse.))
But ultimately, that's just not the point. None of deBoer's readers are going to treat him any less seriously because of ChessLLM (or because many LLMs will, in fact, both say they reason and quod erat demonstrandum), or because deBoer turns "But in practice, I too find it hard to act on that knowledge." into “I too find it hard to act on that knowledge [of our forthcoming AI-driven species reorganization]” when commenting on an essay that does not use the word "species" at all, only uses "organization" twice in the same paragraph to talk about regulatory changes, and where "that knowledge" is actually just Mounk's (imo, wrong) claim that AI is under-hyped. That's not what his readers are paying him for, and that's not why anyone who links to him in even a slightly laudatory manner is doing so.
The question of Bulverism versus factual debate is an important one, but it's undermined when the facts don't matter, either.
There's the Dodo Bird Verdict take, where the precise practice of psychotherapy doesn't matter much so long as certain very broad bounds of conduct are followed. If an hour talking with a slightly sycophantic voice is all it takes to ground people, that'll be surprising to me, but it's not bad.
Of course, there are common factors to the common factors theory. Some of the behaviors that are outside of those bounds of conduct can definitely fuck someone up. Some of them aren't very likely for an LLM to do (I guess it's technically not impossible for an LLM to 'sleep with' a patient if we count ERP, but it's at least not a common failure mode), but others are things LLMs are more likely to do that human therapists won't even consider ('oh it's totally normal to send your ex three million texts at 2am, and if they aren't answering right away that's their problem').
I'm a little hesitant to take any numbers for chatGPT psychosis seriously. The extent to which reporting is always tied to the most recognizable LLM is a red flag, and self_made_human has made a pretty good argument that we wouldn't be able to distinguish the signal from the noise even presuming there were signal.
On the other hand, I know about mirror dwellers. People can and do use VR applications as a low-stress environment for developing social skills or overcoming certain stressors. But some portion do go wonky in a way I'm really skeptical they would have otherwise. Even if they were going to have problems anyway, I don't think they'd have been the same problems.
((On the flip side, I'll point out that Ani and Bad Rudi are still MIA from iOS. I would not be surprised to see large censorship efforts aimed at even all-ages-appropriate LLM actors, if they squick the wrong people out.))
I'll generally defend PEPFAR on its own merits, but the blackpill for PEPFAR-as-promoted is less about the effectiveness of the drugs themselves and more about what the actual provisioning of even very effective drugs looks like on the ground. This discussion is specific to PrEP (and the context that got me to write it up), but as far as I can tell it's pretty endemic to the program in the areas where it's most critical.
That might change literally overnight if a full cure, extremely long-lasting PrEP, or sufficiently easy and effective vaccine comes about and is accepted, but I'm not highly confident for even that.
There is a very well-known robust cellphone case manufacturer. There is also a gay furry porn website. One is known as Otterbox. One is known as Otterlocker. I'm pretty sure you can guess what happened, here. And boy was my face red.
I've also had to consider very carefully what image-edit tools to recommend, because I have a go-to, and it's pretty robust if not the best GUI. But I also can't tell randos to install GIMP on their home computers without ending up on a list.