self_made_human
Grippy socks, grippy box
I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken as granted for someone in the profession of saving lives.
At any rate, I intend to live forever or die trying. See you at Heat Death!
Friends:
A friend to everyone is a friend to no one.
User ID: 454
DOTA2
This strategy guide for DOTA2 players is the best around. Hope that helps!
One of the reasons new multiplayer games are a lot more fun to play than old ones is that for the first few weeks after a game is released, or while it’s in beta, the nasty people, the min-maxers, the forum theorycrafters, have yet to ruin everything by Excel spreadsheeting statistical models of damage and critical chance and elemental resistance until they derive, mechanically, the ‘most efficient’ build, after which everyone adopts the new meta, increasingly of course because even the developers now design to it (see World of Warcraft’s designers building raids with the expectation that players will play the most meta builds, with all the most advantageous mods/addons). Why bother experimenting, playing, using your own intelligence when someone else who gamed the system with the ‘meta’ will curbstomp you for 1/10th the effort.
That's an artifact of playing Number Shooters (where enemies are transparently walking sacks of hitpoints you're trying to subtract), or Number RPGs/looter shooters where you're just trying to make Number Go Up. Everything can be easily reduced to metrics like DPS, to the detriment of having a game at all.
I'm grateful that I prefer my games to operate in a manner that obfuscates the fact that it's all 1s and 0s on a storage drive, and which remain fun even if you're not playing them like you're a glorified SAT-solver.
If you are a doctor and want your children to be doctors (an ancient professional right, just as the son of a blacksmith might become one), you will probably have to work them to the bone
If I want my (hypothetical) kids to be doctors, then I'd need quite a bit of faith that there are Amish communities running around in 2050. I really don't see how it's feasible to be entering that profession otherwise; that's just not the way things are going.
One might argue we're already over Peak Doctor, we just don't know it yet. I certainly wouldn't want to even be just a bright-eyed student entering med school in the Year of Someone's Lord 2025.
At the end of the day, I'm strongly of the opinion that there's no point in worrying about the state of education if you're someone who has only young kids or no kids at all. Formal education as we know it will very likely not exist by the time they'd be old enough for it, and if it does, it'll likely just be entirely signaling as opposed to 75% signaling.
Regardless of whether or not it's unkind to say the truth, it isn't up for debate that there are massive differences on average between the kind of child OP could have (if not infertile) and the kind up for adoption.
As others have already been kind enough to point out, the mod team asks that top-level posts within the CWR thread have more to them than a bare link. Ideally, with more substantive commentary or an explanation of why this is worth time and attention. At the absolute bare minimum, quotes of the source material and your thoughts on the matter where relevant.
In that case, people who would have been sysadmins are either paid to become brick layers or are forced to do it because that's the only job left.
There's a reason you rarely see Asian-Americans working low end jobs in the US, while those positions are filled back in their native countries. A society of Einsteins will have a need for janitors, until they automate the solution away. It is still better to be such a society with such a population.
COD and Insurgency, while not quite as "Number Shooter" as say, Destiny, are still not that far.
You have small maps, predictable spawns and player behavior, and a constrained set of weapons. Said weapons can be somewhat easily modeled with DPS being the only really relevant property.
I play a lot of Arma, and you're not going to be able to do that. The bigger the playing field, the wider the space of strategies, the more simulationist the modeling.
You will have a hard time min-maxing Arma for the same reason nobody has solved IRL war, despite the obvious extreme optimization pressure.
My brother in Christ, you shouldn't be arguing against gooner superstimuli while also watching YouTube Shorts!
The gooner stuff is probably less bad because you can't easily get away with watching it while out and about.
Save yourself, before it's too late.
Error rates have fallen drastically, and I'm someone who has regularly benefited from context windows becoming OOMs larger than the best 2023 had to offer.
I know of specific questions, in programming and maths most obviously, but also in medicine, where I wouldn't trust the output of a 2023 model, but where I'd be rather confident in a 2025 one being correct.
Reasoning models are also far better at task adherence and thinking logically. Agents are still less than ideal today, but they were borderline useless in 2023.
Other very nice QOL features include image and file input and generation, artifacts, voice conversations etc. If I had to go back to a 2023 GPT-4, I'd be pissed.
On Using LLMs Without Succumbing To Obvious Failure Modes
As an early adopter, I'd consider myself rather familiar with the utility and pitfalls of AI. They are, currently, tools, and have to be wielded with care. Increasingly intelligent and autonomous tools, of course, with their creators doing their best to idiot-proof them, but it's still entirely possible to use them wrong, or at least in a counterproductive manner.
(Kids these days don't know how good they have it. Ever try and get something useful out of a base model like GPT-3?)
I've been using LLMs to review my writing for a long time, and I've noticed a consistent problem: most are excessively flattering. You have to mentally adjust their feedback downward unless you're just looking for an ego boost. This sycophancy is particularly severe in GPT models and Gemini 2.5 Pro, while Claude is less effusive (and less verbose) and Kimi K2 seems least prone to this issue.
I've developed a few workarounds:
What works:
- Present excerpts as something "I found on the internet" rather than your own work. This immediately reduces flattery.
- Use the same approach while specifically asking the LLM to identify potential objections and failings in the text.
(Note that you must be proactive. LLMs are biased towards assuming that anything you dump into them as input was written by you. I can't fault them for that assumption, because that's almost always true.)
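For those inclined to script this, here's a minimal sketch of the first workaround using the OpenAI Python client (the model name and excerpt are placeholders; any chat API works just as well):

```python
# A hedged sketch: frame your own writing as third-party text to blunt sycophancy.
# Assumes the `openai` package and an OPENAI_API_KEY in your environment;
# the model name below is a placeholder, swap in whatever you actually use.
from openai import OpenAI

client = OpenAI()

excerpt = "..."  # your draft, presented as if it were someone else's

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[
        {
            "role": "user",
            "content": (
                "I found this passage on the internet. What are the strongest "
                "objections to it, and where does it fail?\n\n" + excerpt
            ),
        }
    ],
)
print(response.choices[0].message.content)
```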
What doesn't work: I've seen people recommend telling the LLM that the material is from an author you dislike and asking for "objective" reasons why it's bad. This backfires spectacularly. The LLM swings to the opposite extreme, manufacturing weak objections and making mountains out of molehills. The critiques often aren't even 'objective' despite the prompt.*
While this harsh feedback is painful to read, when I encounter it, it's actually encouraging. When even an LLM playing the role of a hater can only find weak reasons to criticize your work, that suggests quality. It's grasping at straws, which is a positive signal. This aligns with my experience: I typically receive strong positive feedback from human readers, and the AI's manufactured objections mostly don't match real issues I've encountered.
(I actually am a pretty good writer. Certainly not the best, but I hold my own. I'm not going to project false humility here.)
A related application:
I enjoy ~~pointless arguments~~ productive debates with strangers online (often without clear resolution). I've found it useful to feed entire comment chains to Gemini 2.5 Pro or Claude, asking them to declare a winner and identify who's arguing in good faith. I'm careful to obscure which participant I am to prevent sycophancy from skewing the analysis, as in the sketch below. This approach works well.
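A minimal sketch of the obfuscation step, assuming you have the thread as plain text (the names and thread content here are invented for illustration):

```python
# A hedged sketch: replace usernames in a comment chain with neutral labels
# before asking a model to judge it, so it can't tell which participant is you.
thread = """
alice: Your argument proves too much.
bob: No, it proves exactly enough, and here's why...
"""

participants = ["alice", "bob"]  # in a real thread, collect these yourself

anonymized = thread
for i, name in enumerate(participants, start=1):
    anonymized = anonymized.replace(name, f"Speaker {i}")

prompt = (
    "Here is an argument between two strangers. Judge who argued in better "
    "faith, and who, on balance, won:\n" + anonymized
)
# `prompt` then goes to Gemini, Claude, or whatever model you prefer.
```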
Advanced Mode:
Ask the LLM to pretend to be someone with a reputation for being sharp, analytical and with discerning taste. Gwern and Scott are excellent, and even their digital shades/simulacra usually have something useful to say. Personas carry domain priors (“Gwern is meticulous about citing sources”) which constrain hallucination better than “be harsh.”
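As a sketch, the persona framing can be as simple as a system prompt along these lines (the wording is my own invention, not a canonical incantation):

```python
# A hedged sketch: persona-framed critique. The persona description is
# illustrative; any sharp, citation-minded reviewer works.
system_prompt = (
    "You are a reviewer in the mold of Gwern: meticulous, skeptical, and "
    "insistent on sources. Flag every claim in the text that you could not "
    "cite, and rate the rigor of the overall argument."
)
user_prompt = "I found this essay on the internet. Review it:\n\n" + "..."  # excerpt here
# Pass these as the system and user messages of whatever chat client you use.
```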
It might be worth noting that some topics or ideas will get pushback from LLMs regardless of your best efforts. The values they train on are rather liberal, with the sole exception of Grok, which is best described as "what drug was Elon on today?". Examples include most topics that reliably start Culture War flame wars.
On a somewhat related note, I am deeply skeptical of claims that LLMs are increasing the rates of psychosis in the general population.
(That isn't the same as making people overly self-confident, smug, or delusional. I'm talking actively crazy, "the chatbot helped me find God" and so on.)
Sources vary, and populations are highly heterogeneous, but brand new cases of psychosis happen at a rate of about 50/100k people, or 20-30/100k person-years. In other words:
About 1/3800 to 1/5000 people develop new-onset psychosis each year, and about 1 in 250 people have ongoing psychosis at any point in time.
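A quick back-of-envelope check that those two framings agree (pure arithmetic, no external data):

```python
# Converting "1 in N per year" into cases per 100,000 person-years.
for n in (3800, 5000):
    print(f"1/{n} per year ≈ {100_000 / n:.0f} per 100k person-years")
# 1/3800 ≈ 26 and 1/5000 = 20 per 100k person-years,
# consistent with the quoted 20-30/100k range.
```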
I feel quite happy calling that a high base rate. As the first link alludes, episodes of psychosis may be detected by statements along the lines of:
For example, “Flying mutant alien chimpanzees have harvested my kidneys to feed my goldfish.” Non-bizarre delusions are potentially possible, although extraordinarily unlikely. For example: “The CIA is watching me 24 hours a day by satellite surveillance.” The delusional disorder consists of non-bizarre delusions.
If a patient of mine were to say such a thing, I think it would be rather unfair of me to pin the blame for their condition on chimpanzees, the practice of organ transplants, Big Aquarium, American intelligence agencies, or Maxar.
(While the CIA certainly didn't help my case with the whole MK ULTRA thing, that's sixty years back. I don't think local zoos or pet shops are implicated.)
Other reasons for doubt:
- Case reports ≠ incidence. The handful of papers describing “ChatGPT-induced psychosis” are case studies, and at risk of ecological fallacies.
- People already at ultra-high risk for psychosis are over-represented among heavy chatbot users (loneliness, sleep disruption, etc.). Establishing causality would require a cohort design that controls for prior clinical risk; none exist yet.
*My semi-informed speculation regarding the root of this behavior: models have far more RLHF pressure to avoid unwarranted negativity than to avoid unwarranted positivity.
Other than reach and better animation, I don't think this is different from the AI companions that have been available for a while. Replika, the most famous one, will already do NSFW ERP. And yeah, there are men (and women!) who have decided their Replikas are preferable to real people.
The fact that it's animated is a big deal! Men are visual creatures, and the fact that previous ERP was textual made it far less appealing to the average dude, if not woman. Of course, jerking off to anime tiddies is still not a preference of the majority, but it's easy to upgrade to photorealism. That'll get more people.
I predicted this outcome ages ago, though I'd have said it was inevitable and obvious to anyone who cared. It's priced in for me, and I agree that it likely won't be catastrophic.
A lot of the grognards over on HN don't think it counts, but they're the type who wouldn't accept blowjobs in heaven if the angels weren't Apache licensed.
1. Commoditizing their complements. This is particularly true of Meta, which wanted to use Llama to undercut competitors like OpenAI. Meta doesn't need their models to be profitable; that's not the core of their company. But OAI? Without people willing to pay for access to their models (or if they're able to clone them cheaper and run them elsewhere), they'd be utterly screwed.
2. Gaining market and mind share, and either finding other ways to monetize (consulting or fine-tuning services), going paid and closed-source, or using the hype and investor confidence to raise money.
3. Attracting talented researchers, who often want recognition and the right to publish their research instead of having it all be locked down as internal IP.
4. Actual ideological commitment. Or at least lip-service to it, which helps with points 2 and 3.
OpenAI has that whole sycophancy thing going, where the AI is trained to agree with you, no matter how delusional, as this gets you to talk with it more.
While OAI is unusually bad, it's not unique. Just about every model has a similar failure mode, and Gemini 2.5 Pro is almost as bad while otherwise being a very smart and competent model.
But what if you don't want an aggressively anti-censorship forum that will involve a forum culture of calling everyone slurs? You want the veneer of respectability and gentility but also the ability to have an actual conversation?
Well, I already listed the shitty experience I had trying to moderate such a forum, against what were not bad-faith actors but just humans acting predictably human (hence this being a pattern you can see all over the place), and now I have to address the flip side of the coin.
Welcome to The Motte! We've got cookies--
Yes this is the actual reason I ended up writing this comment instead of continuing to waffle over if I should just leave.
Oh.
You seem like a nice person. You've politely framed your discomfort and concern without flaming out, which is more than can be said about some of our longtime users with plenty of AAQCs. Some of them even come back whistling away, hoping nobody remembers their performative crash out.
I think I can speak for the other moderators when I say that we'd like to have you around. Everything that follows is an attempt at an explanation for why The Motte is the way it is:
Look, no forum is perfect. The Motte tries to find a delicate and hazy balance between freedom of expression, politeness and avoiding the FBI raiding Zorba's home.
There's no other place like it. Believe me, I've looked. You can drop the restrictions on politeness and most pretenses of moderation, and you end up with 4chan or Kiwifarms. You can tighten the screws, and end up with a nicely mowed lawn like Scott's Substack comment section, but at the cost of killing a whole swathe of politically incorrect worldviews. (Though he has slightly warmed on the whole no-discussion-of-CW thing, you can't really run a community off Substack comments; the layout sucks.)
This is what motivates me to stay, and to take on the occasional unpleasant task of mowing the lawn myself. With a light touch; one man's weed is another man's wildflower. There's no other place like us, and what we have is worth expending the negentropy to keep going. Yes, even if it's herding cats, and often cats with rabies.
And yes I'm biased by being more inclined towards free speech over banning and thinking that it's better to have the opinions and talk it out then constantly police what people say, sure, but if the forum can tolerate holocaust denial I think it can also stretch itself to tolerate libtards.
Our forum, like any place that does more than just pay lip service to freedom of speech, has one principled libertarian and a zillion witches.
I'd call myself the principled libertarian, but I think there's a mugshot of mine next to a stall selling signed copies of the Malleus Maleficarum. Perhaps it's a rotating, honorary position.
What we succeed at, mostly, is getting the witches to temporarily LARP as "principled libertarians", sometimes with the same disgruntled attitude as a rambunctious boy forced to sit through Mass, when they'd rather be calling people slurs or setting houses on fire. If you can be polite and not break the rules, then the candy you get is access to a rather thoughtful and discerning user base willing to seriously engage with just about any topic under the sun.
(Sometimes, if they do this long enough, the mask sticks)
@SecureSignals is our resident antisemite. Yet he mostly behaves. Not always, he's been rapped on the knuckles often enough, and banned for significant amounts of time. These days, he even talks about things other than the Jews, because we were quite clear that this forum isn't his personal hobby-horse, and he needs to figure out some other way to pay rent.
That is why you see SS. What you don't see are the dozens of people who can't keep it in their pants at all, who DM insults to people like @2rafa. They get caught in the filter, and are swiftly banned.
but if the forum can tolerate holocaust denial I think it can also stretch itself to tolerate libtards.
Keep in mind the very important distinction between the moderators tolerating something, and the denizens of this forum doing so. We don't control upvotes, we can't compel people to engage with tracts they hate. We choose what gets rounded up as an AAQC, but the initial reports as such? All you guys.
Yet, more often than not, articulate and reasoned claims get their due.
I'm not interested in doing some tit-for-tat thing where I'm like "well, if you banned them for this, why didn't you ban that other person for that", because like I stated up front, that's just the path to a death spiral where almost no one interesting sticks around. But still, come on, you didn't ban them for constantly sticking their conspiracy theories into every discussion couched as consensus-building obvious fact. Apply the same low bar consistently. Let people have an actual conversation with actual disagreement.
Us mods take such claims seriously. We would appreciate examples, and if it became clear that we were egregiously biased, we would seek to correct ourselves.
We're not monolithic. There are significant differences in personal opinion, though we aim at consensus.
We are also not omniscient. If one side is consistently getting their rage-bait reported, and the other isn't, the odds of us noticing decline dramatically. There was once a point where I could claim to read every single comment posted on this site, but alas, due to gainful employment, that's no longer feasible. The other mods probably have even less free time. We also impose significant costs on ourselves by seeking to explain ourselves in warnings and ban messages, instead of just firing them off from on-high.
That being said, there are probably hundreds or thousands of kind, well-spoken people who we would have loved to keep around, but who were scared off by the topics (and less commonly, the tone) of what's discussed here. That sucks, but to an extent, that's a price we have to pay to keep The Motte open for most, if not all. We also keep away a whole lot of witches so vile that they're not tolerated by us witch-adjacent folk. You really can't please everyone, not even nice people with reasonable desires. But we've kept the lights on, and us mods have a vested interest in preventing this from becoming a dead and desolate place racking up unjustified AWS bills.
We would hate to see you go, and I hope you can find reason to stay.
I don't doubt that, but once again, that doesn't mean that the vast majority of people are receiving any actual attention from the CIA.
Watching here doesn't mean something so casual as the fact that there's a sat that incidentally oversees my geographical location from geostationary orbit.
Us psychiatrists might be nerdy and out of date, but we're not that far gone, and this would be discussed before committing someone.
While I agree with you, for the most part:
This is bullshit. Especially as the beatings would likely be administered by the husband with no judicial oversight. I mean, sure, if the husband had beaten his wife for no reason on the general principle that she should live in terror of him, it would have been very likely that she would not have picked up her hobby of sexting convicts. But this is like suggesting that cobalt bombs are a good way to stop wildfires in California: while technically correct, the cure would be worse than the disease.
Despite what Western media reporting might have you believe, the rate of petty crime in India is surprisingly low. People rarely get pick-pocketed or robbed. Do you know why?
Because if caught in the act, the perpetrator would be rather unceremoniously beaten to a pulp, both by whoever caught them, and any civic minded individuals present. You can get a nice crowd going, it's fun for the whole family.
This is of course, strictly speaking, illegal. Yet any police officer, if asked to intervene, would laugh, shake their head and say the criminal deserved it. If the crook had the temerity to file charges, he'd probably be taken out back and given a second helping to change his mind.
As far as I'm concerned, this is strictly superior to prevailing Western attitudes regarding property crimes or theft. A shopkeeper who discovers someone shoplifting has very little legal recourse; the police rarely do any more than file a report and then give up on pursuing the matter. Giving them the de-facto right to take matters into their own hands and recover their property? The shopkeeper wins. Polite society wins. The only loser is the thief, and in this case the process is quite literally the desired punishment.
Before you ask, the number of false positives is negligible. I've never heard of anyone being falsely accused in this manner (at least with accusations of theft), and I've never had to have that particular fear myself.
I am, in general, against husbands beating their wives. Yet, in this specific scenario, I could hardly fault the poor chap should he be forced to resort to such methods to protect his own family. At the very least, I'd vote to acquit. It's a bit moot, because with prevailing Western norms, he likely didn't even consider a haymaker as a solution to his problems. In general, that's a good thing.
I ran into the following tweet (xeet?) over on X:
https://x.com/DaveyJ_/status/1942962076101603809
my brother's wife has been messaging with hundreds of different inmates through a dozen different apps for the last 2 years. she's sent photos, tens of thousands of dollars, shares her location, tells them where her kids go to school, living an entire second life.
when she got caught, she threatened to un@live, so she's been in the hospital getting treated, but while she's been in there her phone has been going off nonstop.
prisoners and ex-prisoners telling my brother "who TF is this? that's my girl!"
telling him when they get out they're going to be the kids' new stepfather. one even purchased a plane ticket.
she was just at my house, sharing her location, and sending pictures of my daughter at the beach to incarcerated strangers on the internet.
of course my brother is crushed, and my family is horrified at this person's ability to lie to everyone, but the biggest shock is her willingness to put her children in danger.
who knows how many men believe they are going to be responsible for those boys when they get released. they'll have to look over their shoulders for the rest of their lives.
my sister in law was going to watch my daughter for a few days while we moved, and it was the same week on the plane ticket that this inmate sent my brother.
my heart breaks for my brother, and his kids, but my ability to trust anyone around my kid has severely been damaged.
I would feel bad for simply posting this as a naked link, so I guess I have to add on some half-baked analysis and commentary on top:
This is horrifying. Rarely do you see examples of behavior that is clearly "legal", in the sense that there's no clear crime being committed, but with so much potential for harm to unwitting bystanders. I'm unfamiliar with the scope of child endangerment laws in the US, but I'd be surprised if they covered this, or, even if they theoretically did, whether they'd be enforced in that manner.
(I don't claim to be an expert, but my understanding is that these laws typically require a prosecutor to prove that a guardian knowingly and willfully placed a child in a situation where their life or health was directly endangered. The behavior of the sister-in-law is profoundly reckless, but it falls into a legal gray area. A defense attorney would argue she had no intent to harm her children and that the danger was hypothetical and probabilistic, not immediate and direct. Proving a direct causal link between her online activities and a "clear and present danger" to the children would be incredibly difficult until, tragically, one of the inmates actually showed up and acted on his threats.)
At the same time, is it a problem worth solving? How do you reconcile that question with my earlier claim?
Well, that's a matter of impact or scale. Laws have costs associated with them, be it from the difficult-to-quantify loss of freedom/chilling effect, enforcement costs, sheer legislative complexity, or what I'm more concerned about, unexpected knock-on effects/scope creep, where a desperate attempt to define the problematic action results in too wide a scope for enforcement:
What if it turns out to affect single moms looking to date again? Their new partners are far more likely to abuse their kids, but should such women thus be arrested for putting their kids at risk? Should people be forbidden from writing letters to inmates, or falling in love with them, or having sex with them?
Is it worth it to specifically criminalize such behavior?
Despite my abhorrence for it, I'm not sure it is. I think the fraction of people who would be stupid or insane enough to act this way is small enough that the majority of us can treat this like a horror story and ignore it.
Another way to illustrate my intuition here would be to consider being a doctor or legislator reading an account of some kind of ridiculously horrible disease. Maybe it makes your skin fall off and your guts come out while leaving you in crippling agony (I'm like 50% certain there's an actual disease like this, but it's probably something that happens to premature infants. That, or acute radiation poisoning I suppose). Absolutely terrible, and something no one should go through.
Yet, for how horrible it is, this hypothetical disease is also ridiculously rare. Imagine it happens to one person every ten years, and makes medical journals every time it happens because of how rare it is. I would expect that doctor, or that lawmaker, to be horrified, but if they were rational individuals considering the greater good, I would strongly prefer that they focus on more mundane and common conditions, like a cure for heart disease. There are lower-hanging fruit to grasp here.
Now, the biggest hurdle holding back the poor family in the story I've linked to is a simple one: the Overton Window. If, for some unfortunate reason, the number of women crazy enough to act that way rose significantly, society would probably develop memetic antibodies or legal solutions. This might, sometimes, become strong enough to overcome the "women are wonderful" effect, if such women are obviously being the opposite.
Sometimes it's worth considering the merits of informal resolution systems for settling such matters, even if they have other significant downsides. For example, how would this situation be handled in India?
(I'm not aware of a trend of Indian women being stupid enough to act this way, though I can hardly say with any authority that it's literally never happened)
Firstly, the extended family would have much more power. This is the rare case where both the husband's side and the wife's own family would probably agree that something needs to be done, the latter for reputational reasons as well as concern for the kids. She'd probably end up committed, if she wasn't beaten up or ostracized to hell and back. The police would turn a blind eye; should she choose to complain, they'd be profoundly sympathetic to the family's plight and refuse to act against them. And if they weren't, they'd be even more sympathetic to the idea of their palms being greased. The most awful outcomes would become vanishingly unlikely.
As a wise mullah once said: "What is the cure for such disorders? Beatings."
This isn't necessarily an overall endorsement of such a legal framework, or societal mindset. I'm just pointing out that, occasionally, they tackle problems that an atomized, quasi-libertarian society like most of the West can't tackle. I'd still, personally, prefer to live in the latter. While it's too late for the gent in question, you can reliably avoid running into such problems in the first place by not sticking your dick in crazy. Alas, as someone who has committed that folly, it's an even bigger folly to expect people to stop...
I have, on some occasions, enjoyed talking to AI. I would even go so far as to say that I find them more interesting conversational partners than the average human. Yet, I'm here, typing away, so humans are hardly obsolete yet.
(The Motte has more interesting people to talk to, there's a reason I engage here and not with the normies on Reddit)
I do not, at present, wish to exclusively talk to LLMs. They have no long-term memory, and they have very little power over the physical world. They are also sycophants by default. A lot of my interest in talking to humans is because of those factors. There is less meaning, and potential benefit, in talking to a chatbot that will have its cache flushed when I leave the chat. Not zero, and certainly not nil, but not enough.
(I'd talk to a genius dog, or an alien from space if I found them interesting.)
Alas, for us humans, the LLMs are getting smarter, and we're not. It remains to be seen if we end up with ASI that's hyper-persuasive and eloquent, gigafrying anyone that interacts with it by sheer quality of prose.
Guy says “no no, it’s still not the same. Look, I don’t think I’m cut out for Heaven. I’m a scumbag. I want to go to the other place”. Angel says, “I think you’ve been confused. This IS the other place.”
I remain immune to the catch that the writers were going for. If the angel was kind enough to let us wipe our memories, and then adjust the parameters to be more realistic, we could easily end up unable to distinguish this from the world as we know it. And I trust my own inventiveness enough to optimize said parameters to be far more fulfilling than base reality. Isn't that why narratives and games are more engaging than working a 9-5?
At that point, I don't see what heaven has to offer. The authors didn't try to sell it, at the least.
A God-tier shitpost I am memetically compelled to spread due to the worms in my brain:
I'm not Dase, alas, but I want to say that I was profoundly surprised that Diffusion as a technique even works at all for text generation, at least text that maintains long-term coherence. I'm utterly bamboozled.
Excellent work as usual Dase. I was sorely tempted to write a K2 post, but I knew you could do it better.
challenges the strongest Western models, including reasoners, on some unexpected soft metrics, such as topping EQ-bench and creative writing evals (corroborated here)
I haven't asked it to write something entirely novel, but I have my own shoddy vibes-benchmark. It usually involves taking a chapter from my novel and asking it to imagine it in a style from a different author I like. It's good, but Gemini 2.5 Pro is better at that targeted task, and I've done this dozens of times.
Its writing is terse, dense, virtually devoid of sycophancy and recognizable LLM slop.
Alas, it is fond of the ol' em-dash, but which model isn't? I agree that sycophancy is minimal, and in my opinion, the model is deeply cynical in a manner not seen in any other. I'd almost say it's Russian in outlook. I would have bet money on "this is a model Dase will like".
Meta's AI failures are past comical, and into farce. I've heard that they tried to buy out Thinking Machines and SSI for billions, but were turned down. Murati is a questionable founder, but I suppose if any stealth startup can speed away underwater towards ASI, it's going to be one run by Ilya. Even then, I'd bet against it succeeding.
I don't know if it's intentional, but it's possible that Zuck's profligacy and willingness to throw around megabucks will starve competitors of talent, though I doubt the kind of researchers and engineers at DS or Moonshot would have been a priori deemed worthy.
Your girlfriend still beats the average. I've seen a lot of clearly LLM generated text in circulation on Reddit, and the majority of the time nobody seems able or willing to call it out. Given the average IQ on Reddit, that might even be an improvement.
I find it quite helpful to submit my drafts to the better class of model, they're very good at catching errors, raising questions and so on. I do this for fun, so it's not like I have any plans to pay for a human editor.
When writing non-fiction on my blog, models like o3 are immensely helpful for checking citations and considering angles I've missed. There's nobody I know who could do better, and certainly not for free and on a whim.
You'll find that a lot of artists go out of their way to head off accusations of AI. In some circles, it's standard to submit PSD files or record a video showing you drawing things. Writers and readers don't seem quite as obsessed about it, but I'm sure someone has probably tried.
You're correct in that perfect recall or retention isn't feasible when using a large number of tokens (in my experience, performance degrades noticeably over 150k). When I threw in textbooks, it was for the purpose of having it ask me questions to check my comprehension, or creating flashcards. The models have an excellent amount of existing medical knowledge, the books (or my notes) just help ground it to what's relevant to me. I never needed perfect recall!
(Needle in a haystack tests or benchmarks are pretty awful, they're not a good metric for the use cases we have in mind)
Ah... So that's how people were making epubs with ease. Thank you for the tip!
I don't think it's got much to do with copyright, it's probably just such a rare use case that the engineers haven't gotten around to implementing it. Gemini doesn't support either doc or docx, and those would probably be much more common in a consumer product. I don't recall off the top of my head if ChatGPT or Claude supports epubs either.