TheAntipopulist
Formerly Ben___Garrison
Monarchies were just the dictatorships of old.
I think needing to have "meaning in your life" is largely overrated. Life is largely something you just get through -- nature loved using the stick much more than the carrot. Modern society is extremely cushy in most ways, sanding off the edges of the stick. This is why I see populists as a natural enemy -- they want "burn it all down" for stupid reasons based largely on hallucinations, and they'd take my comfy pillows away in the process.
If I have a life goal, it would be to build something, probably a video game or maybe something with AI. I've made essentially zero progress toward that goal, but I have no illusions that the fault lies with anyone other than myself for being excessively lazy.
while offering much higher stability and avoiding the dumber mistakes
Strong disagree on both of those statements. Democracies are the most stable form of government in existence since they allow for peaceful transfers of power. Hybrid regimes like those in the Sahel or Central America are notoriously unstable and chain coups like they're going out of style. More totalitarian states like Russia and China are more stable than the hybrids, and can even seem more stable than democracies... until they aren't. They're brittle and tend to shatter rather than undergo painful reforms. The biggest threat to democracies is rarely a big civil war, but rather descending into Orbanism.
And autocracies make stupid moves all the time. Zero Covid? Also, the whole Communist flavor of autocracies from 1945-1991 was a major unforced screwup.
Democracy simply does not work.
I agree with the other guy: It's the least awful form of government we have. The only real alternative is dictatorship, which is almost always worse overall for human flourishing. Every time I think the voters are too terminally stupid to be trusted to do much of anything, I watch a video like this and feel even worse about any alternative.
This is a good critique of the blackpiller mindset.
For what it's worth I asked ChatGPT if there was a more well-known term for the "traction" you were talking about, and it said "self-efficacy", which I think is pretty close but maybe not entirely aligned with the vibes you were going for.
The main problem I have with blackpilled monk types (and this post is pretty archetypal blackpill despite claiming otherwise) is that it can work while you're younger but it has an expiration date. Eventually you'll have a crisis and medical expenses. What then? If you have no savings then you'll either need to forgo medical care or do the leech thing where you receive medical care and then simply don't pay for it. What happens when you're 60 or 70 and too old to work? If you've calculated everything and know Social Security will get you through it, then OK, that seems fine to me. You do you.
I'd still somewhat worry about people's (really just men's) inherent existentialism. Modern generations grow up on Disney movies that tell them life should be wonderful and meaningful, and that'll largely not be true for blackpillers. It won't be horrible overall, but they'll lack a lot of the self-actualization they think they deserve. If they're fine with that, then again, that's OK, but a lot of them eventually start screeching about how "the system has failed them" and how we need to "burn it all down" just because they were too foolish to make different life choices.
Because the average voter is intensely stupid about these types of things. On the left you have fools cheering for images of burning Waymos and waving the Mexican flag in US cities. On the right, the average Republican is at the level of Catturd, and they evaluate things based on what they see on TikTok and Fox News. If they don't see armored goons manhandling immigrants then they think it's not happening at all. Trying to explain things like "employment incentives" to them will go in one ear and out the other.
For me the voting patterns were very consistent: anything I posted that was pro-right was upvoted even if it was devoid of logic. Anything I posted that was neutral (like on AI stuff) was generally upvoted if it was high-effort. Anything I posted that was anti-MAGA was highly contentious or net-downvoted, with poorly thought-out responses from others on the level of "have you ever considered that maybe you're too retarded to understand Trump's brilliant 4D chess move?????" getting broadly upvoted. In other words, the upvotes and downvotes are mostly just an inverse mirror of /r/politics.
I'm actually fine with long-time quality posters getting a bit more slack than randos, although I have some problems with how quality is determined, as there are a fair few AAQCs every month that are just swipes at the (leftist) outgroup in eloquent language.
One tip that can help: block the upvote/downvote numbers with uBlock Origin. I'm using this filter:
www.themotte.org##button.m-0.p-0.nobackground.caction.btn > .score
You can also create the rule by right clicking, clicking "block element", then hovering over the upvote numbers.
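If you'd rather paste the filter in by hand instead of using the element picker, it goes into uBlock Origin's dashboard under "My filters" as a plain cosmetic filter. The line starting with ! is just a comment, and the selector is the same one as above; it may need tweaking if the site's markup ever changes:

```
! Hide vote scores on The Motte (cosmetic filter)
www.themotte.org##button.m-0.p-0.nobackground.caction.btn > .score
```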
Upvotes and downvotes really have no place on a political discussion site like this, as all they do is add unnecessary heat and a "boo outgroup" button for partisans to click. I found it very annoying when some MAGA clown would post low-effort sneers to my posts and get tons of upvotes since this site leans heavily right, and I found it'd cause me to react in ways that weren't helpful. Forcefully ignoring the upvotes has made the site much more tranquil in my eyes.
Another tip that can help: make concrete rules around the discussions you want to have, stick to them, and be willing to block people who break them. The mods on this site, while better than on many sites, are still pretty arbitrary and capricious. It's not uncommon for them to modhat leftists or centrists for things right-leaning commenters get away with all the time. The solution: block people who violate the rules. For me, I've started drawing a line at personal attacks and ad hominems. I (almost) never do those things to other posters here, and if anyone does it to me I block them in short order. What I've noticed is that a lot of the people who do that (like zeke and SlowBoy and FirmWeird) post low-effort partisan swipes almost exclusively, so you don't really lose much by blocking them. I did block Gattsuru when he was making personal attacks against me and refused to stop, which was somewhat sad since he posts a combination of low-effort partisan swipes along with higher-quality partisan swipes, so blocking isn't completely costless, but it's still good overall.
As for what arguments are actually for, I've found them quite useful for surfacing the strongest arguments the other side has in short order. If you make a few arguments and they have no clear response to something, you can be pretty sure that what you're saying is right on the money. For example, I had an argument with JarJarJedi about allegations that Joe Biden was accepting bribes, and although he talked a big game about how I was delusional if I disagreed with him on this point, it became clear he just had no evidence for it. I'm now much more confident in my assertion that anyone saying Joe Biden took bribes is just spouting nonsense.
This is a good mod action -- excessive caps lock is annoying and not helpful for a site like this.
She definitely already knew in an abstract sense. But there's a big difference between knowing about it in the abstract and seeing a parade of strangers recite how much they hate her and think she's a vile piece of trash. This is doubly true if it seems like nobody is defending her, like it's somehow a consensus that she's vile trash.
I feel really bad for her. She produces some genuinely novel insights into the workings of sex and relationships that I doubt academia could ever match. You have to have a thick skin to post on the internet, and that's doubly true for women, who 1) tend to get attacks that are far more personal and nasty, and 2) tend to have higher neuroticism scores than men, so the attacks wound more deeply.
Your mod action didn't make it clear that you only objected to that part; it made it seem like you thought the entire message was AI-generated.
I agree having that part at the end is sloppy... but it's sloppy to the level of "a few spelling mistakes". That shouldn't be worth modding someone over unless it becomes egregious.
This is like asking people why they like talking to friends or therapists about their life. That's what LLMs are to a lot of people -- an easy-to-access, albeit somewhat low-quality, friend or therapist. As someone who has friends and doesn't need therapy, I don't do that much of it either, but I can understand why some might.
Also, LLMs are actually really good for generating NSFW if you're into that. Janitor AI with a Deepseek API hookup is excellent and quite novel.
Huh, I didn't know uBlock Origin was that granular. I use it to remove upvote numbers on this forum already, but didn't know it could be used to block YT recs. Thanks for the tip.
It would be better to have a quality filter then.
If there's one place I doubt AI will improve much in the near future, it's stakeholder management. That's why I think that even if AI becomes an astronomically better coder than the average SWE, SWEs could just rebrand as AI whisperers and translate the nuances of a manager's human-speak into AI prompts. Maybe it'll get there eventually, but we're still a good ways off from non-technical people being able to use AI to get any software they want without massive issues arising. The higher up in the org you are, the bigger a share of your job stakeholder management becomes. I think we agree on this point overall.
On less well-known systems and APIs, I think the hallucination issue is more of a skill issue (within reason; I'm not making an accusation here). I'm translating a bunch of SQR (a niche language you've probably never heard of) queries to an antiquated version of TSQL right now, and the AI indeed hallucinates every now and then, but in predictable ways that can be solved with the right system prompts. E.g. sometimes it will put semicolons at the end of every line thinking it's in a more modern version of SQL, and having to tell it not to do that is somewhat annoying, but simply writing a system prompt with that information cuts the issue down by 99%. It's similar for unknown APIs -- if the AI is struggling, giving it a bit of context usually resolves those problems from what I've seen. Perhaps if you're working in a large org with mountains of bespoke stuff, then giving an AI all that context would just overwhelm it, but aside from that issue I've still found AI to be very helpful even on more niche topics.
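For the curious, here's roughly what that looks like in practice. This is just a minimal sketch assuming an OpenAI-style chat API -- the model name, prompt wording, and helper function are made up for illustration, not my exact setup:

```python
# Sketch of front-loading known hallucination fixes into the system prompt.
# Assumes the official openai Python client; model name and wording are placeholders.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = """You are translating SQR report code into T-SQL for an older SQL Server dialect.
Rules:
- Do not terminate statements with semicolons.
- Only use functions that exist in this older dialect; if you are unsure, say so instead of guessing.
"""

def translate_sqr(sqr_snippet: str) -> str:
    # One call per snippet; the dialect constraints live in the system prompt
    # so the same mistakes don't have to be corrected after every response.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Translate this SQR to T-SQL:\n{sqr_snippet}"},
        ],
    )
    return response.choices[0].message.content
```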
On the time saved, you might want to be on the lookout for the dark leisure theory with some folks, while for others the time savings from using AI might be partly eaten up by learning to use the AI in the first place. I agree that the productivity boost hasn't been astronomical like some people claim, but I think it will increase over time as models improve, people become more skilled with AI, and people using AI to slack off get found out.
I agree that this stuff is becoming more and more difficult to tell apart. We even had one of our own posters get falsely accused by the mods of using AI recently. People are going to claim many things are "obviously AI" when they actually aren't, and the mania of false accusations is going to tick a lot of people off. When you're accused of using AI, not only are people saying you're committing artistic fraud, they're also implying that even if you aren't, your output is still generic trash to some extent.
I wish the Luddites would go away and we could all just judge things by quality rather than trying to read tea leaves on whether AI had a hand in creating something.
This also 100% applies to this forum's rule effectively banning AI. It's a bad rule overall.
That could still have been illegal, but the Republicans investigating it mostly dismissed it, since the eyewitnesses consistently said Joe just made chitchat and never even discussed official topics, let alone agreed to do specific things. They went down the path of checking for bribes instead, since they thought that would be a more fruitful endeavor, although that too uncovered nothing.
it also seems like something that (for some people) can feel like more of a productivity boost than it is due to time being spent differently
I also wonder about this. I think in particularly bad cases it can be true, since if something doesn't work it becomes very tempting to just reprompt the AI with the error and see what comes back. Sometimes that works on the second attempt, and other times I'll go back and forth for a dozen prompts or so. Whoops, there went an entire hour of my time! I'm explicitly trying not to fall into that habit more than I already have.
Overall I'd say it's a moderate productivity boost even factoring that in, and it's slowly getting better as both the AI models and my skill in using them improve.
I dont think kickbacks to Joe personally are especially relevant?
They were the only thing that was really relevant, because kickbacks would have been flatly illegal and thus an easily impeachable offense. Having family members sell access isn't illegal assuming there's no quid pro quo; it's just an optics problem.
And judging by votes
It still baffles me how people think popular = correct in terms of political arguments. Is it not well known that conservative opinions posted on a leftist-dominated forum like /r/politics will almost certainly be buried in downvotes, and vice versa for lefty opinions on a conservative forum?
Yeah, mental deterioration is something I also fear. I'm mostly fine living in solitude, but I do fear the tail risks: medical episodes that could be fixed by just having someone around to call an ambulance, or to tell me I've lost it.
These are strong words to say when you're young, and I've heard this sentiment from many people, but I've seen very few actually follow through.