
Culture War Roundup for the week of June 10, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Vibe check on whether current AI architectures are plateauing?

Recently a few insiders have started backing away from the apocalyptic singularity talk, e.g. Francois Chollet saying that LLMs are an offramp on the path to AGI.

OpenAI's CTO recently said "the AI models that OpenAI have in their labs are not much more advanced than those which are publicly available". She tries to spin this as a positive thing - the average person off the street is able to use the same cutting-edge tech that's being used in our top research labs! But this is obviously a concerning thing to say to the optimists who have been convinced that "AGI has been achieved internally" for a while now. Of course, you can interpret this statement as not including GPT-5, because it doesn't exist yet - and once GPT-5 is finished training, they will have a model that's significantly more advanced than anything currently available. So we'll have to wait and see.

Based on posts at /r/stablediffusion, the newest version of Stable Diffusion 3 appears to be a regression in many ways. Perhaps the model has latent potential that will be unlocked by community finetunes, but if we were experiencing exponential progress, you would expect the models to get better, not worse.

Ok, just for funsies I shoved this entire thread (sans this reply) into an AI and asked it to summarize the discussion, give the balance of what it says about the expected future utility of AI, and pull out the key points.

Here's what I got from Llama 3:

The conversation discusses the current state of AI development, particularly large language models (LLMs) and their potential applications. Some points made include:

  • LLMs have immense economic potential, but may not lead to exponential progress or human-like intelligence.
  • Current limitations include lack of true learning ability and reliance on fine-tuning for specific tasks.
  • Specialized AI models for specific industries or tasks will emerge, rather than a single general-purpose AI.
  • Integrating multiple models to create a more comprehensive AI system is a promising approach.
  • Regulatory forces and public opinion may impact AI development.
  • Big labs may capture a significant share of the wealth created by AI, especially in enterprise markets.
  • CRUD apps (create, read, update, delete) and similar software solutions will be created using AI, opening up new markets.
  • Centralized providers will likely play a significant role in AI development and deployment.

Here is the result from OpenAI:

The thread delves into several topics, including the plateauing of current AI architectures, concerns about censorship and its impact on AI products, and the evolution of AI models like Stable Diffusion 3. There's discussion about the potential effects of political correctness on AI, with some arguing it hampers development. The thread also touches on the future economic impact of AI, potential market disruptions, and the dominance of big labs in AI development. Throughout, there's a mix of technical insights, speculation, and social commentary.

I did a few recursive loops and some derivative testing, but generally there was no improvement over the baseline: the AI's output is the most generic thing possible, to the point where it's difficult to tell whether it actually summarized the thread or was just scraping and populating from broadly similar keyword terms. With further prodding I could get it to tell me what specific people said and what their points were, but the added verbosity made the result less useful than just searching and parsing the thread myself. OpenAI did give a better estimate of which specific users were being agreed with, based on subsequent statements downthread, so there's definitely SOMETHING there, but it required very specific keyword abuse to get a non-generic answer, and with that level of prompt engineering I'm doing more heavy lifting than the AI.
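For the curious, the setup was nothing fancy, roughly the sketch below using the OpenAI Python client. The model name, prompt wording, and the thread_text variable are illustrative stand-ins, not my exact setup.

    # Rough sketch of the summarization experiment described above.
    # Assumes the OpenAI Python client (openai >= 1.0) and an OPENAI_API_KEY
    # in the environment; model, prompt, and thread_text are illustrative only.
    from openai import OpenAI

    client = OpenAI()

    def summarize_thread(thread_text: str, model: str = "gpt-4o") -> str:
        """Ask the model for a balanced summary of the thread's key points."""
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system",
                 "content": "Summarize this discussion thread and give the "
                            "balance of opinion on the expected future "
                            "utility of AI."},
                {"role": "user", "content": thread_text},
            ],
        )
        return response.choices[0].message.content

    # print(summarize_thread(open("thread.txt").read()))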

Which brings me, ultimately, to my main point: what are you all using AI for?

Do I want porn pics? Everyone on Pixiv is using NovelAI, and I think there was a guy here a while back who talked about using CharAI for porn stories, with certain dungeons on the internet clearly being AI-generated (or rather just script-vomited). And there was excitement about Stable Diffusion for a few months, when the internet had a shitload of Africans making plastic-bottle Jesus sculptures... and then it all died.

Which brings me back to the issue: what are we using AI for, and therefore what's the economic impact?

LLMs are basically a context-approximation and text-generation tool, not an organic information generation system. They present as a knowing, wise voice, but in reality they cannot, on their own, assess the likelihood or feasibility of something like a greenfield problem, and the amount of prompt engineering required means the user must already be a subject matter expert just to ask the right questions, let alone derive useful information from the AI.

I don't need to create a video of a balloon taking its kid on a walk or a dog eating a car, so image generation AI is just a toy to me. I would NEVER trust AI to buy stuff for me, because Amazon is already shitted up thanks to algorithmically optimized SEO garbage. I don't need AI to write me stories. I don't trust AI to give accurate information in the first place. Coding is the best use case right now, but I defer to my coder friends, who tell me they can rush a dev build really quickly with AI but then need extensive (though not exhaustive) fine-tuning to get it to production.

So, what's the total delta? In the end, I think the limit on AI will be enthusiasm and commercialization. We don't have flying cars, and we're not gonna get Her.

Which brings me, ultimately, to my main point: what are you all using AI for?

For me, it's significantly better than Google or Stack Overflow on programming topics where I don't need to deeply understand what I'm doing; I just need to fix some misconfiguration, use some library, find the right function, or whatever. It's also good for asking questions about long documents when I don't want to read them. And I don't use AI that much relative to other people.
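There's nothing clever about the long-document part either; it's basically just stuffing the text into the prompt. A minimal sketch of the kind of thing I mean, again with the model name, chunk size, and helper name as arbitrary illustrative choices rather than my actual setup:

    # Naive long-document Q&A: split the text into chunks, ask the question
    # against each chunk, and keep whatever answers look relevant. Model name,
    # chunk size, and the ask_document helper are illustrative choices only.
    from openai import OpenAI

    client = OpenAI()

    def ask_document(doc: str, question: str, chunk_chars: int = 12000) -> str:
        """Ask a question against each chunk of a long document."""
        answers = []
        for start in range(0, len(doc), chunk_chars):
            chunk = doc[start:start + chunk_chars]
            resp = client.chat.completions.create(
                model="gpt-4o",
                messages=[
                    {"role": "system",
                     "content": "Answer using only the excerpt provided. "
                                "Reply 'not found' if the excerpt is irrelevant."},
                    {"role": "user",
                     "content": f"Excerpt:\n{chunk}\n\nQuestion: {question}"},
                ],
            )
            answers.append(resp.choices[0].message.content)
        return "\n".join(a for a in answers if "not found" not in a.lower())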

LLMs are basically a context-approximation and text-generation tool, not an organic information generation system.

They can do many college-level math problems that are beyond the capacity of the average person! Sure, they're doing it with more 'memorization' and less generalization than the average person who can do them, but that's still a huge step up from what computers could do ten years ago. Why will AI stop improving?

With regard to the math problem, couldn’t Google do that as well?

No, Google will send me to a site like WolframAlpha, which can solve it because the techniques for solving these problems have been manually programmed in for each type of problem. LLMs learned how to solve them by, more or less, being trained on a lot of text that included a lot of math problems. The latter is clearly a lot more like how humans learn than the former.