TheAntipopulist
Formerly Ben___Garrison
Because the average voter is intensely stupid about these types of things. On the left you have fools cheering for images of burning Waymos and waving the Mexican flag in US cities. On the right, the average Republican is at the level of Catturd, evaluating things based on what they see on TikTok and Fox News. If they don't see armored goons manhandling immigrants, they think it's not happening at all. Trying to explain things like "employment incentives" to them goes in one ear and out the other.
For me the voting patterns were very consistent: anything I posted that was pro-right was upvoted even if it was devoid of logic. Anything I posted that was neutral (like on AI stuff) was generally upvoted if it was high-effort. Anything I posted that was anti-MAGA was highly contentious or net-downvoted, with poorly-thought-out responses on the level of "have you ever considered that maybe you're too retarded to understand Trump's brilliant 4D chess move?????" getting broadly upvoted. In other words, the upvotes and downvotes are mostly just an inverse mirror of /r/politics.
I'm actually fine with long-time quality posters getting a bit more slack than randos, although I have some problems with how quality is determined, as there are a fair few AAQCs every month that are just swipes at the (leftist) outgroup in eloquent language.
One tip that can help: block the upvote/downvote numbers with uBlock Origin. I'm using this filter:
www.themotte.org##button.m-0.p-0.nobackground.caction.btn > .score
You can also create the rule by right-clicking, choosing "Block element", then hovering over the upvote numbers.
Upvotes and downvotes really have no place on a political discussion site like this; all they do is add unnecessary heat and give partisans a "boo outgroup" button to click. I found it very annoying when some MAGA clown would post low-effort sneers at my posts and get tons of upvotes since this site leans heavily right, and it caused me to react in ways that weren't helpful. Forcibly ignoring the upvotes has made the site much more tranquil in my eyes.
Another tip that can help: make concrete rules around the discussions you want to have, stick to them, and be willing to block people who break them. The mods on this site, while better than on many sites, are still pretty arbitrary and capricious. It's not uncommon for them to modhat leftists or centrists for things right-leaning commenters get away with all the time. The solution: block people who violate the rules. For me, I've started drawing a line at personal attacks and ad hominems. I (almost) never do those things to other posters here, and if anyone does it to me I block them in short order. What I've noticed is that a lot of the people who do that (like zeke and SlowBoy and FirmWeird) post low-effort partisan swipes almost exclusively, so you don't really lose much by blocking them. I did block Gattsuru when he was making personal attacks against me and refused to stop, which was somewhat sad since he posts a mix of low-effort partisan swipes and higher-quality ones, so blocking isn't completely costless, but it's still a net positive overall.
As for what arguments are actually for, I've found them quite useful for seeing the strongest arguments the other side can muster in short order. If you make a few arguments and they have no clear response to one of them, you can be pretty sure that what you're saying is right on the money. For example, I had an argument with JarJarJedi over allegations that Joe Biden was accepting bribes, and although he talked a big game about how I was delusional for disagreeing with him, it became clear he just had no evidence on the point. I'm now much more confident in my assertion that anyone saying Joe Biden took bribes is just spouting nonsense.
This is a good mod action -- excessive caps lock is annoying and not helpful for a site like this.
She definitely already knew in an abstract sense. But there's a big difference between knowing about it in an abstract sense, and seeing a parade of strangers recite how much they hate a person and think she's a vile piece of trash. This is doubly true if it seems like nobody is defending her, like it's somehow a consensus that she's vile trash.
I feel really bad for her. She produces some genuinely novel insights into the workings of sex and relationships that I doubt academia could ever produce. You have to have a thick skin to post on the internet, and that's doubly true for women, who 1) tend to get attacks that are far more personal and nasty, and 2) tend to have higher neuroticism scores than men, so the attacks wound more deeply.
Your mod action didn't make the distinction that you were only against that part; it made it seem like you thought the entire message was AI-generated.
I agree having that part at the end is sloppy... but it's sloppy to the level of "a few spelling mistakes". That shouldn't be worth modding someone over unless it becomes egregious.
This is like asking people why they like talking to friends or therapists about their lives. That's what LLMs are to a lot of people: an easy-to-access, albeit somewhat low-quality, friend or therapist. As someone who has friends and doesn't need therapy, I don't do that much myself, but I can understand why some might.
Also, LLMs are actually really good for generating NSFW content if you're into that. Janitor AI with a DeepSeek API hookup is excellent and quite novel.
Huh, I didn't know uBlock Origin was that granular. I already use it to remove upvote numbers on this forum, but didn't know it could block YouTube recommendations too. Thanks for the tip.
It would be better to have a quality filter then.
If there's one place I doubt AI will improve much in the near future, it's stakeholder management. That's why I think even if AI becomes an astronomically better coder than the average SWE, SWEs could just rebrand as AI whisperers, translating the nuances of a manager's human-speak into AI prompts. Maybe it'll get there eventually, but we're still a good ways off from non-technical people being able to use AI to get any software they want without massive issues arising. The higher up in the org you are, the bigger a percentage of your job stakeholder management becomes. I think we agree on this point overall.
On less well-known systems and APIs, I think the hallucination issue is more of a skill issue (within reason; I'm not making an accusation here). I'm currently translating a bunch of SQR (a niche language you've probably never heard of) queries to an antiquated version of T-SQL, and the AI does hallucinate every now and then, but in predictable ways that can be solved with the right system prompts. E.g. sometimes it will put semicolons at the end of every line, thinking it's in a more modern version of SQL, and having to tell it not to do that is somewhat annoying, but simply writing a system prompt with that information cuts the issue down by 99%. It's similar for unknown APIs: if the AI is struggling, giving it a bit of context usually resolves the problem from what I've seen. Perhaps if you're working in a large org with mountains of bespoke stuff, giving an AI all that context would just overwhelm it, but aside from that I've still found AI to be very helpful even on niche topics.
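To make that concrete, here's a minimal sketch of how you might pin those dialect quirks in a system prompt via the OpenAI Python SDK. The model name and the specific rules are placeholders for illustration, not my actual setup:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical rules -- the real prompt would encode whatever quirks your
# legacy dialect actually has.
SYSTEM_PROMPT = """You translate SQR queries into T-SQL for a pre-2005 SQL Server.
Rules:
- Never end statements with semicolons; this dialect predates them.
- Do not use CTEs, TRY/CATCH, or any feature added after 2005.
- Preserve the original column names and aliases exactly."""

def translate(sqr_query: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever recent model you have
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": sqr_query},
        ],
    )
    return resp.choices[0].message.content
```

The point is that the rules live in one place, so every translation request inherits them instead of you re-explaining the dialect each time.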
On the time saved, you might want to be on the lookout for the dark leisure theory applying to some folks, while for others the time savings from using AI might be eaten up by learning to use the AI in the first place. I agree the productivity boost hasn't been astronomical like some people claim, but I think it will grow over time as models improve, people get more skilled with AI, and people using AI to slack off get found out.
I agree that this stuff is becoming more and more difficult to tell apart. We even had one of our own posters falsely accused of using AI by the mods recently. People are going to claim many things are "obviously AI" when they actually aren't, and the mania of false accusations is going to tick a lot of people off. When you're accused of using AI, not only are people saying you're committing artistic fraud, they're also implying that even if you aren't, your output is still generic trash to some extent.
I wish the Luddites would go away and we could all just judge things by quality rather than trying to read tea leaves on whether AI had a hand in creating something.
This also 100% applies to this forum's rule effectively banning AI. It's a bad rule overall.
That still could have been illegal, but the Republicans investigating it mostly dismissed the idea, since the eyewitnesses consistently said Joe just made chitchat and never even discussed official topics, let alone agreed to do specific things. They moved on to checking for bribes instead, thinking that would be a more fruitful endeavor, although that too uncovered nothing.
it also seems like something that (for some people) can feel like more of a productivity boost than it is due to time being spent differently
I also wonder about this. I think in particularly bad cases it can be true, since if something doesn't work it becomes very tempting to just reprompt the AI with the error and see what comes back. Sometimes that works on the second attempt; other times I'll go back and forth for a dozen prompts or so. Whoops, there went an entire hour of my time! I'm trying explicitly not to fall into that habit more than I already have.
Overall I'd say it's a moderate productivity boost even factoring that in, and it's slowly getting better as both the AI models improve and my skill in using them improves.
I dont think kickbacks to Joe personally are especially relevant?
They were the only thing that was really relevant, because that would have been fully illegal and thus an easily impeachable offense. Having family members sell access isn't illegal assuming no quid-pro-quo, it's just an optics problem.
And judging by votes
It still baffles me how people think popular = correct in political arguments. Is it not well known that posting conservative opinions on a leftist-dominated forum like /r/politics will almost certainly get you buried in downvotes, and vice versa for lefty opinions on a conservative forum?
I'd like to think I'm reasonably good at coding, considering it's my job. However, it's somewhat hard to measure how effective a programmer or SWE is (LeetCode-style questions are broadly known to be awful at this, yet they're what most interviewers use to judge candidates).
Code is pretty easy to evaluate at a baseline. The biggest questions, "does it compile?" and "does it give you the result you want?", can be evaluated in like 10 seconds for most prompts, and that's like 90% of programming right there. There's not a lot of room for BS'ing. There are of course other questions that take longer to answer, like "will this break on weird edge cases?", "is this reasonably performant?", and "is this well documented?". However, those have always been tougher questions to answer, even for code that's 100% written by professional devs.
and they simply are not good at programming
At @self_made_human's request, I'm answering this. I strongly believe LLMs to be a powerful force-multiplier for SWEs and programmers. I'm relatively new in my latest position, and most of the devs there were pessimistic about AI until I started showing them what I was doing with it, and how to use it properly. Some notes:
- LLMs will be best where you know the least. If you're working on a 100k-line codebase you've been dealing with for 10+ years, in a language you've known for 20+ years, then the alpha on LLMs might be genuinely small. But if you have to deal with a new framework or language that's at least somewhat popular, then LLMs will speed you up massively. At the very least they can rapidly generate discrete chunks of code to build a toolbelt, like a Super StackOverflow.
- Using LLMs is a skill, and if you don't prompt them correctly the output can veer towards garbage. Setting up a system prompt and initial messages, chaining queries from higher-level design decisions down to smaller tasks, and especially managing context are all things you'll want to learn (see the sketch after this list). One of the devs at my workplace tried to raw-dog the LLM by dumping in a massive codebase with no further instruction while asking for like 10 different things simultaneously, then claimed AI was worthless when the result didn't compile on the first attempt. Stuff like that is just a skill issue.
- Use recent models, not stuff like 4o-mini. A lot of the devs at my current workplace experimented with LLMs when they first blew up in early 2023, but those models were quite rudimentary compared to what we have today. Yet a lot of tools like Roo Cline or whatever default to old, crappy models to keep costs down, and that just results in bad code. You should be using one of 1) Claude Opus, 2) ChatGPT o3, or 3) Google Gemini 2.5 Pro.
What do you do to get AI help with a large code base rather than a toy problem?
Two things mainly:
- Have a good system prompt that captures the nuances of the crappy, antiquated setup my work uses for its legacy systems. I have to refine it whenever the AI runs into the same sorts of errors over and over (e.g. assuming we're on a more recent version of SQL when we're actually on one that was deprecated in 2005).
- Play context manager and break problems up into smaller chunks, as sketched below. The larger the task you give the AI, the greater the chance it breaks down at some point. Each LLM has a maximum output length, and if you get even close to it the model can stop doing chain-of-thought to budget its output tokens, which makes its intelligence tank. The recent Apple paper on the Tower of Hanoi demonstrated that pretty clearly.
Twitter and Reddit both allow you to sort chronologically. I've just naturally stopped using most of the platforms that don't have an option like that, such as Facebook and TikTok (I never got into TikTok in the first place; I bounced off hard). I also don't think "the algorithm" is necessarily always bad -- YouTube's recommended videos have exposed me to some truly excellent creators like Montemayor over the years. Sometimes I'll watch lower-quality stuff like whatifalthist and my recommendations will be populated with garbage for a bit, but that resolves itself after a week or so, and I could probably speed it up by marking those videos as things I don't want.
uBlock Origin blocks basically all ads, and is quite effective. I haven't noticed shills posing as users being much of a problem outside of stuff like porn.
I agree that the bubble will almost certainly burst at some point, and lots of people will get burned. I strongly disagree that it's all just hype though, or that LLMs are a "scam". They're already highly useful as a Super Google, and that'll never go away now. They're generating billions in revenue already -- it's not nearly enough to sustain their current burn rates, but there's lots of genuine value there. I'm a professional software engineer, and AI is extremely helpful for my job; anyone who says it isn't is probably just using it wrong (skill issue).
You can just... not engage with most of that? There are places like Substack and this site that don't sort by popularity. You can also curate your feeds to make the algorithmic sites useful. I use Twitter/X to keep up with bloggers I know, and Reddit is useful for AI updates and video game discussions. YouTube can be almost anything you want it to be, as long as you subscribe to the things you like and don't subscribe to things you don't. Just get off /r/all and TikTok.
The main problem I have with blackpilled monk types (and this post is pretty archetypal blackpill despite claiming otherwise) is that it can work while you're younger but it has an expiration date. Eventually you'll have a crisis and medical expenses. What then? If you have no savings then you'll either need to forgo medical care or do the leech thing where you receive medical care and then simply don't pay for it. What happens when you're 60 or 70 and too old to work? If you've calculated everything and know Social Security will get you through it, then OK, that seems fine to me. You do you.
I'd still somewhat worry about people's (really just men's) inherent existentialism. Modern generations grow up on Disney movies telling them life should be wonderful and meaningful, and that will largely not be true for blackpillers. It won't be horrible overall, but they'll lack a lot of the self-actualization they think they deserve. If they're fine with that, then again that's OK, but a lot of them eventually start screeching about how "the system has failed them" and how we need to "burn it all down" just because they were too foolish to make different life choices.