TheAntipopulist
Formerly Ben___Garrison
This is a good mod action -- excessive caps lock is annoying and not helpful for a site like this.
She definitely already knew in an abstract sense. But there's a big difference between knowing about it in an abstract sense, and seeing a parade of strangers recite how much they hate a person and think she's a vile piece of trash. This is doubly true if it seems like nobody is defending her, like it's somehow a consensus that she's vile trash.
I feel really bad for her. She produces some genuinely novel insights into the workings of sex and relationships that I doubt academia could ever do. You have to have a thick skin to post on the internet, and that's doubly true for women who 1) tend to get attacks that are far more personal and nasty, and 2) tend to have higher neuroticism scores than men, so the attacks wound more deeply.
Your mod action didn't make the distinction that you were only against that part, and made it seem like you thought the entire message was AI generated.
I agree having that part at the end is sloppy... but it's sloppy to the level of "a few spelling mistakes". That shouldn't be worth modding someone over unless it becomes egregious.
This is like asking people why they like talking to friends or therapists about their life. That's what LLMs are to a lot of people -- an easy-to-access albeit somewhat low quality friend or therapist. As someone who has friends and doesn't need therapy, I also don't do that much, but I can understand why some might.
Also, LLMs are actually really good for generating NSFW if you're into that. Janitor AI with a Deepseek API hookup is excellent and quite novel.
Huh, I didn't know Ublock Origin was that granular. I use it to remove upvote numbers on this forum already, but didn't know it could be used to block YT recs. Thanks for the tip.
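For reference, uBlock Origin's "My filters" pane accepts cosmetic filters of the form `domain##selector`. Something like the following hides YouTube's watch-page sidebar recommendations and home-feed grid (the selectors are my best guess at YouTube's current element names and will drift as the site's markup changes; confirm them with uBlock's element picker):

```
! Hide the watch-page sidebar recommendations
www.youtube.com##ytd-watch-next-secondary-results-renderer
! Hide the home-page recommendation grid
www.youtube.com##ytd-rich-grid-renderer
```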
It would be better to have a quality filter then.
If there's one place I doubt AI will improve much in the near future, it's stakeholder management. That's why I think even if AI becomes an astronomically better coder than the average SWE, SWEs could just rebrand as AI whisperers and translate the nuances of a manager's human-speak into AI prompts. Maybe it'll get there eventually, but we're still a good ways off from non-technical people being able to use AI to get any software they want without massive issues arising. The higher up in the org you are, the bigger a share of your job stakeholder management becomes. I think we agree on this point overall.
On less well-known systems and APIs, I think the hallucination issue is more of a skill issue (within reason, I'm not making an accusation here). I'm translating a bunch of SQR (a niche language you've probably never heard of) queries to an antiquated version of TSQL right now, and the AI indeed hallucinates every now and then, but it's in predictable ways that can be solved with the right system prompts. E.g. sometimes it will put semicolons at the end of every line, thinking it's in a more modern version of SQL, and I have to tell it not to do that, which is somewhat annoying, but simply writing a system prompt with that information cuts the issue down by 99%. It's similar for unknown APIs -- if the AI is struggling, giving it a bit of context usually resolves those problems from what I've seen. Perhaps if you're working in a large org with mountains of bespoke stuff, then giving an AI all that context would just overwhelm it, but aside from that issue I've still found AI to be very helpful even in more niche topics.
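As an illustration, the dialect-pinning can be a few blunt lines in the system prompt. The wording below is mine, not what I actually run at work, and it assumes a SQL Server 2000-era target (CTEs, TRY/CATCH, and VARCHAR(MAX) all arrived in 2005):

```
You are translating SQR queries into T-SQL as it existed in SQL Server 2000.
- Never terminate statements with semicolons.
- Do not use CTEs (WITH ...), TRY/CATCH, or VARCHAR(MAX); they do not exist in this dialect.
- If a construct has no equivalent here, say so explicitly instead of guessing.
```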
On the time saved, you might want to be on the lookout for the dark leisure theory for some folks, while for others the time savings of using AI might be eaten up somewhat by learning to use the AI in the first place. I agree that the productivity boost hasn't been astronomical like some people claim, but I think it will increase over time as models improve, people become more skilled at AI, and people using AI to slack off get found out.
I agree that this stuff is becoming more and more difficult to tell apart. We even had one of our own posters get falsely accused by the mods of using AI recently. People are going to claim many things are "obviously AI" when they actually aren't, and the mania of false accusations is going to tick a lot of people off. When you're accused of using AI, not only are people saying you're committing artistic fraud, they're also implying that even if you aren't then your output is still generic trash to some extent.
I wish the Luddites would go away and we could all just judge things by quality rather than trying to read tea leaves on whether AI had a hand in creating something.
This also 100% applies to this forum's rule effectively banning AI. It's a bad rule overall.
That could have been illegal still, but the Republicans investigating it mostly dismissed it since the eyewitnesses consistently said Joe just made chitchat and never even discussed official topics, let alone agreed to do specific things. They went down checking for bribes instead since they thought that would be a more fruitful endeavor, although that too uncovered nothing.
it also seems like something that (for some people) can feel like more of a productivity boost than it is due to time being spent differently
I also wonder about this. I think in particularly bad cases it can be true, since if something doesn't work it becomes very tempting to just reprompt the AI with the error and see what comes back. Sometimes that works on a second attempt, and other times I'll go back and forth for a dozen prompts or so. Whoops, there went an entire hour of my time! I'm trying to explicitly not fall into that habit more than I already have.
Overall I'd say it's a moderate productivity boost even factoring that in, and it's slowly getting better as both the AI models and my skill in using them improve.
I don't think kickbacks to Joe personally are especially relevant?
They were the only thing that was really relevant, because that would have been fully illegal and thus an easily impeachable offense. Having family members sell access isn't illegal assuming no quid-pro-quo, it's just an optics problem.
And judging by votes
It still baffles me how people think popular = correct in terms of political arguments. Is it not well known that posting conservative opinions on a leftist-dominated forum like /r/politics would almost certainly get overwhelmed by downvotes, or vice-versa for lefty opinions on a conservative forum?
I'd like to think I'm reasonably good at coding considering it's my job. However, it's somewhat hard to measure how effective a programmer or SWE is (Leetcode style questions are broadly known to be awful at this, yet it's what most interviewers ask for and judge candidates by).
Code is pretty easy to evaluate at a baseline. The biggest questions, "does it compile" and "does it give you the result you want", can be answered in like 10 seconds for most prompts, and that's like 90% of programming done right there. There's not a lot of room for BS'ing. There are of course other questions that take longer to answer, like "will this be prone to breaking due to weird edge cases", "is this reasonably performant", and "is this well documented". However, those have always been tougher questions to answer, even for things that are 100% done by professional devs.
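Those two baseline checks are cheap enough to script. Here's a minimal sketch in Python (the `check_snippet` helper and the sample snippet are mine, purely illustrative):

```python
import subprocess
import sys

def check_snippet(src: str, expected_stdout: str) -> dict:
    """Run the two cheap baseline checks on a piece of Python code:
    1) does it compile, 2) does it print the result we want."""
    result = {"compiles": False, "gives_expected_result": False}
    try:
        compile(src, "<snippet>", "exec")  # syntax check only; nothing executes yet
        result["compiles"] = True
    except SyntaxError:
        return result
    proc = subprocess.run([sys.executable, "-c", src],
                          capture_output=True, text=True, timeout=10)
    result["gives_expected_result"] = (proc.stdout.strip() == expected_stdout)
    return result

# A snippet the model produced, and the output we asked it for.
print(check_snippet("print(sum(range(10)))", "45"))
# → {'compiles': True, 'gives_expected_result': True}
```

The longer-horizon questions (edge cases, performance, documentation) obviously don't reduce to a ten-line script, which is the point of the comment above.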
and they simply are not good at programming
At @self_made_human's request, I'm answering this. I strongly believe LLMs to be a powerful force-multiplier for SWEs and programmers. I'm relatively new in my latest position, and most of the devs there were pessimistic about AI until I started showing them what I was doing with it, and how to use it properly. Some notes:
- LLMs will be best where you know the least. If you're working on a 100k codebase that you've been dealing with for 10+ years in a language you've known for 20+ years, then the alpha on LLMs might be genuinely small. But if you have to deal with a new framework or language that's at least somewhat popular, then LLMs will speed you up massively. At the very least it will be able to rapidly generate discrete chunks of code to build a toolbelt like a Super StackOverflow.
- Using LLMs is a skill, and if you don't prompt them correctly the output can veer towards garbage. Setting up a system prompt and initial messages, chaining queries from higher-level design decisions down to smaller tasks, and especially managing context are all things worth learning. One of the devs at my workplace tried to raw-dog the LLM by dumping in a massive codebase with no further instruction while asking for like 10 different things simultaneously, and claimed AI was worthless when the result didn't compile after one attempt. Stuff like that is just a skill issue.
- Use recent models, not stuff like 4o-mini. A lot of the devs at my current workplace tried experimenting with LLMs when they first blew up in early 2023, but those models were quite rudimentary compared to what we have today. A lot of tools like Roo Cline have defaulted to old, crappy models to keep costs down, but that just results in bad code. You should be using one of 1) Claude Opus, 2) ChatGPT o3, or 3) Google Gemini 2.5 Pro.
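The "chaining queries" point can be sketched in a few lines of Python. Everything here is illustrative: `call_llm` is a stub standing in for whichever API client you actually use, and the task names would really be parsed out of the model's plan:

```python
def call_llm(system_prompt: str, user_prompt: str) -> str:
    # Stub: in real use this would call Claude / o3 / Gemini via your client library.
    return f"[model reply to: {user_prompt[:40]}]"

SYSTEM = "You are a senior engineer. Target: Python 3.9, standard library only."

def chained_design(feature_request: str) -> list:
    """Ask for a high-level plan first, then request each piece separately,
    keeping every prompt small so the context stays manageable."""
    plan = call_llm(SYSTEM, f"Break this into 3-5 small tasks: {feature_request}")
    replies = [plan]
    for task in ["data model", "core logic", "tests"]:  # parsed from plan in real use
        replies.append(call_llm(SYSTEM, f"Write only the {task} for: {feature_request}"))
    return replies

print(len(chained_design("CSV de-duplication tool")))  # → 4
```

The design choice is just that each call has one narrow job, instead of one giant prompt asking for ten things at once.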
What do you do to get AI help with a large code base rather than a toy problem?
Two things mainly:
- Have a good prompt that has the nuances of the crappy, antiquated setup my work is using for their legacy systems. I have to refine this when it runs into the same sorts of errors over and over (e.g. thinking we're using a more updated version of SQL when we're actually using one that was deprecated in 2005).
- Play context manager, and break up problems into smaller chunks. The larger the problem you're getting AI to do, the greater the chance that it will break down at some point. Each LLM has a certain max output length, and if you get even close to that then it can stop doing chain-of-thought to budget its output tokens, which makes its intelligence tank. The recent Apple paper on the Tower of Hanoi demonstrated that pretty clearly.
Twitter and Reddit both allow you to sort chronologically. I've just naturally stopped using most of the ones that don't have an option like that, such as Facebook and TikTok (I never got into TikTok in the first place, I bounced off hard). I also don't think "the algorithm" is necessarily always bad -- Youtube's recommended videos have exposed me to some truly excellent creators like Montemayor over the years. Sometimes I'll watch lower quality stuff like whatifalthist and my recommended will be populated by garbage for a bit, but that resolves itself after a week or so, and I could probably speed it up by marking those videos as things I don't want.
Ublock Origin blocks basically all ads, and is quite effective. I haven't noticed shills posing as users to be that much of a problem outside of stuff like porn.
I agree that the bubble will almost certainly burst at some point, and lots of people will get burned. I strongly disagree that it's all just hype though, or that LLMs are a "scam". They're already highly useful as a Super Google, and that'll never go away now. They're generating billions in revenue already -- it's not nearly enough to sustain their current burn rates, but there's lots of genuine value there. I'm a professional software engineer, and AI is extremely helpful for my job; anyone who says it isn't is probably just using it wrong (skill issue).
You can just... not engage with most of that? There are places like Substack and this site that don't sort by popularity. You can also curate your feed to make the algorithmic sites useful. I use Twitter/X to keep up with bloggers I know, and Reddit is useful for AI updates and video game discussions. Youtube can be almost anything you want it to be as long as you subscribe to the things you like and don't subscribe to things you don't like. Just get off /r/all and Tiktok.
there's nothing to be done about it.
This wasn't true for the first several hundred years of the country's existence. Heck, it wasn't even true in the 90s.
Nihilism like this is just a demotivational DDOS.
Saying "denial" is something that has gotten me warned by the mods in the past, and I was only using it in a vague general sense. You're using it as a personal attack. The moderators on this site are heavily tilted towards conservatives so I doubt anything will happen to you on that front. Still, personal attacks make me just not want to respond to people who make them.
widely known facts
I'm not sure which specific "widely known facts" you think I'm disputing, but the overall "Joe took bribes" story is disputed not only by Dems, it was completely abandoned by Republican House members since there was just nothing there despite all their fishing and their dozens of subpoenas. Filling in that hole, that there's just no evidence, with unfalsifiable claims that Joe was crafty enough to evade all detection, then claiming "it's obvious" while making personal attacks that people who disagree are naive and "in denial" is one way to go about it I suppose. Did you know Dem partisans made similar attacks when the Russia investigation failed to show much in regards to Trump's collusion? Flip the valence of what you said, how it's ludicrous to expect any sort of evidence, that Trump would never be so stupid to sign a big contract saying "I, President Trump, agree to sell out the USA to Russia", and it would sound very much like something a never-Trumper would say.
In any case I doubt we'll change each other's minds, so I'm going to drop this conversation.
Despite the uptick in political violence that the US has seen recently, political assassinations really haven't been much of a thing.
Also, people have a relatively short memory. Dems will never totally forgive Musk, but a sufficient number would probably be willing enough to tolerate his existence.
What a wonderful development. Get the popcorn.
Hopefully Musk learns what the majority of the rest of the grey tribe learned long ago: that Trump, while being useful for trashing wokeness, is broadly a thuggish buffoon. In a perfect world Musk would become an abundance Democrat, or give up on politics altogether and go back to making rockets.
I found this to be an interesting chart.