This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

The value in the text/images/media/any content that forms the feedback comes from how modifying the software in a way guided by that feedback improves the software, as judged by the people who gave the feedback (and people like them) - not from the fact that the content was generated by humans using the software and expressing their opinions. Generating feedback through actual humans who used the software is a great way to ensure the feedback is valuable in this sense, but I don't see why a sufficiently advanced LLM (or LLM-based tool) couldn't generate feedback with just as much value (i.e. modifying the software in a way guided by that LLM-generated feedback improves the software as judged by the people who would have given the feedback, i.e. the target audience), just by predicting the next word. You could then modify the software through iterations until the feedback crosses some threshold of asking for small enough changes, or something like that. I don't think this would be considered a "one-shot," but it seems like it would require almost as little investment in human effort. It's just that current LLM-based tools don't seem sufficiently advanced (or perhaps they're not sufficiently fast?).
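The loop described above can be sketched in a few lines. This is only a minimal sketch of the idea, not a real tool: `generate_feedback` stands in for an LLM predicting the feedback a target user would give, and `apply_feedback` stands in for a code-modification step guided by it - both are hypothetical placeholders, stubbed here so the shape of the loop is runnable.

```python
def generate_feedback(software: str) -> list[str]:
    """Stub for an LLM predicting the feedback a target user would give.

    For demonstration, pretend each round of fixes leaves one fewer issue.
    """
    return ["an issue a user would report"] * max(0, 3 - software.count("+fix"))


def apply_feedback(software: str, feedback: list[str]) -> str:
    """Stub for modifying the software in a way guided by the feedback."""
    return software + "+fix"


def iterate_until_small(software: str, threshold: int = 0, max_rounds: int = 10) -> str:
    """Iterate until the feedback asks for a small enough set of changes."""
    for _ in range(max_rounds):
        feedback = generate_feedback(software)
        if len(feedback) <= threshold:  # feedback has crossed the threshold
            break
        software = apply_feedback(software, feedback)
    return software


result = iterate_until_small("app-v1")  # converges once no issues remain
```

With the stubs above, the loop applies three rounds of fixes and then stops, since the simulated feedback shrinks to nothing; a real version would hinge entirely on whether the LLM's predicted feedback actually tracks what the target users would say.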
I don't think LLMs can generate meaningful human-like feedback about what it feels like to use the software. They just don't see the UI the way that humans do. And it's not clear that increasing their capabilities can ever fix this.
Still, I do expect that they'll get better and better at iterating quickly and nondestructively based on your feedback, so while it won't be a fully automated dev cycle, I wouldn't be surprised if bespoke AI software replaces giant professional products eventually.
I don't see why LLMs would need to "see" the UI the way humans do in order to generate meaningfully useful feedback for improving the UI (or any other element of the software) as judged by humans. It's not like the LLM would need to reason out "this UI element here gets in the way of this process due to that issue," or "in my experience of trying to use this software in my workflow, this UI element could be improved by moving it here," or whatever. It'd be doing naked, dumb pattern matching: predicting words based on the prompt (which would include the sequence of 1s and 0s that make up the software, along with instructions to produce the text that a helpful human tester would provide, or the like) and its weights. There's no proof that this would work, but I also see no reason why simply scaling up current techniques and/or making them faster wouldn't allow LLMs to generate feedback like this that is just as useful as human user feedback.
Because it's really hard to predict how the software is going to be used, and it's not something that can be reasoned out. If it could be, software companies with full UI teams wouldn't still be responding to user suggestions 50 years into the industry's history. Watch some of Tantacrul's videos on music notation software. He's a software developer by trade and a composer by hobby, so he has tried pretty much every major program on the market, and his video on MuseScore a few years ago resulted in him becoming the head of the development team. Music notation software is particularly ripe for this kind of criticism because it's all notoriously difficult to use, and people such as myself who occasionally dabble in music have tried pretty much all of the available programs in a desperate attempt to find something that isn't going to piss us off. Highlights from the comments:
- Sibelius
- Finale
- Dorico
- MuseScore
Watch the videos. They're long, but highly entertaining. And keep in mind that he's only scratching the surface with respect to the problems he describes, and they're all either deliberate design choices or the result of being bound by the limitations of the existing codebase. I don't think you can just get an LLM to figure this stuff out.
I have my doubts, but you make a good point. A lot of the other emergent capabilities have been quite surprising, so there's no guarantee that this is out of the question, either.