
Culture War Roundup for the week of May 8, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I know some of you may be sick of "wow, look at this cool thing AI can do, how eerie is that?" posts, but I can't help but be blown away by some of the applications of this technology.

I play in a band, and we upload our music to streaming platforms using a company called DistroKid. When you're uploading music, they strongly encourage you to submit the lyrics for the song at the same time for SEO reasons (and so that the lyrics will appear onscreen if someone shares the song on Instagram). It's a little textbox where you paste your lyrics in, then you're done.

I went to upload a song today, and found that they've added a new feature. After uploading the audio file, you can get an AI to try to transcribe the lyrics rather than typing them out manually.

It nailed it. The song is 215 words long, and the AI made only about six mistakes, i.e. roughly 97% accuracy. For reference, this is a noisy post-hardcore song with layers of shoegazey guitars, feedback and pounding drums. What's more, it achieved that accuracy in a matter of seconds, maybe 5% of the runtime of the song itself.
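For anyone curious how that 97% figure shakes out: word-level accuracy is just one minus the word error rate (edit distance over words, divided by the reference length). A throwaway sketch — the lyric strings here are invented for illustration:

```python
def word_accuracy(reference: str, hypothesis: str) -> float:
    """1 - word error rate: edit distance over words / reference length."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # Word-level Levenshtein distance (substitutions + insertions + deletions).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 1 - d[-1][-1] / len(ref)

# 6 word errors in a 215-word lyric: 1 - 6/215 ≈ 0.972, i.e. ~97%.
```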

A few years ago I used a voice transcription service called Happy Scribe, which achieved comparable levels of accuracy - but only for plain speech recorded in a quiet environment with no background ambience. For speech recorded without a directional mic in an environment with a lot of background noise (e.g. a café), Happy Scribe was pretty much useless and I had to transcribe everything the old-fashioned way.

But now AI can transcribe lyrics near-perfectly through a full musical accompaniment? That's insanity. I can't imagine how anyone will be able to find work as a transcriber for any major language a year from now.

Checked three random songs with whisper.cpp (ggml-medium.bin -l en): Grimes' 4ÆM, Kanye's Champion, and Juno Reactor's Angels and Men. Kanye got parsed ≈perfectly; the other two gave me lines of [MUSIC] [MUSIC] [MUSIC] and (dramatic music) (dramatic music) (dramatic music), which is fair enough but not very helpful. On this note, it often interprets silence at the end of my voice notes as nonsense like «subtitle editor Lena Yeterenko, corrector T. Ilyin» or «[popular streamer] is done for today, like & subscribe», which I vaingloriously attribute to my professional diction and sexy voice despite it really being just an archetypal case of overfitting. Nevertheless, I think with a little tinkering this is all doable with basic Whisper, and that's probably what runs on the backend of your service. We're 99% of the way to solving transcription.
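For the «little tinkering»: the bracketed non-speech tags at least can be stripped with a trivial post-processing pass over Whisper's output. A sketch — the sample lines below are invented, and real end-of-audio hallucinations like the phantom subtitle credits would need smarter heuristics (e.g. thresholding on the model's no-speech probability), which the regex deliberately does not pretend to solve:

```python
import re

# Hypothetical raw output lines from Whisper on a noisy track:
raw = [
    "[MUSIC]",
    "(dramatic music)",
    "we were never meant to stay",
    "[MUSIC]",
    "like and subscribe",  # trailing-silence hallucination: NOT caught here
]

# Lines that are entirely a bracketed/parenthesized non-speech tag.
NOISE = re.compile(r"^[\[\(].*[\]\)]$")

def clean(lines):
    """Drop non-speech tags and exact consecutive duplicates."""
    out = []
    for ln in lines:
        stripped = ln.strip()
        if NOISE.match(stripped):
            continue
        if out and out[-1] == stripped:
            continue
        out.append(stripped)
    return out
```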

And a bunch of other tasks, to put it mildly. Ethan Mollick recently wrote a nice piece on ChatGPT+Code Interpreter: It is starting to get strange. (My somewhat-related earlier comment).

Code Interpreter is GPT-4 with three new capabilities: the AI can read files you upload (up to 100MB), it can produce files for you to download, and it can write and run its own Python code. This may not seem like a huge advance, but, in practice, it is pretty stunning.

Let's take an example. I asked: "I am writing a blog post about how amazing ChatGPT is at working with code right now. I would like you to create the perfect illustration, a GIF using Python, that represents this ability. Decide what an appropriate amazing GIF would be, then figure out how to create it and let me download it." After its first attempt, I encouraged it to do something even more creative. It decided on a strategy, wrote software to enact that strategy given the constraints of its tools, executed the code, and gave me a download link to a GIF.

…So the AI shows genuine creativity in problem solving. That seems like a big deal, but not actually the big deal I want to discuss. I want to show you that Code Interpreter has turned GPT into a first-rate data analyst. Not a data analysis tool, but a data analyst. It is capable of independently looking at a dataset, figuring out what is interesting, developing an analytical strategy, cleaning data, testing its strategy, adjusting to errors, and offering advice based on its results.

An example: I uploaded an Excel file, without providing any context, and asked three questions: "Can you do visualizations & descriptive analyses to help me understand the data?" "Can you try regressions and look for patterns?" "Can you run regression diagnostics?" It did it all, interpreting the data and doing all of the work - a small sample of which is below.
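To make concrete what that first pass amounts to mechanically, here is a sketch in plain numpy on synthetic data (standing in for the uploaded spreadsheet, which we don't have): descriptive stats, an OLS fit, and a basic R² diagnostic - the skeleton Code Interpreter automates and then narrates.

```python
import numpy as np

# Synthetic stand-in for the uploaded dataset.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 2.0 + rng.normal(0, 1, 200)

# Descriptive analysis.
print(f"mean(y)={y.mean():.2f}, std(y)={y.std():.2f}")

# OLS regression y ≈ a*x + b via least squares.
A = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

# Diagnostic: R^2 from residual variance.
resid = y - (a * x + b)
r2 = 1 - resid.var() / y.var()
print(f"slope={a:.2f}, intercept={b:.2f}, R^2={r2:.3f}")
```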

etc. I'm sure that a great deal of white-collar work of this kind, well-paid as it is, is actually not very valuable, if not outright bullshit in the style of that Bernanke & Krugman joke: routine extraction of so-what observations, woven into platitudinous narratives for non-actionable PowerPoint presentations, for people who make presentations of their own to drive up the company valuation by justifying the foundation of the whole house of cards.

But some of it is valuable – immensely so. Markets have not yet recognized that we have an alien artifact on our hands. It may create unemployment, though that's not a given; it is definitely able to add value comparable to the bigger breakthroughs of the 20th century. And this is just GPT-4-tier systems.

Yud of course prophesied that AIs won't be very useful until very late in the game, where they rapidly become deadly (I can't be bothered right now to hunt for specific quotes). I think it's in clear sight that even just the continuation of this line of tools will be sufficient to accelerate basic research to the point that we get either controlled nanotechnology or a proof of its unfeasibility before evil agentic AI pulls itself up by its tentacles and kills everyone. Not to mention the possibility of maintaining decently efficient local supply chains and other infrastructure even for small off-grid communities, obviating major value propositions of the nation state.

But the more likely scenario, IMO, is that «evidence» from recent and upcoming Hollywood flicks, Yud's podcasts/TED talks and obsessive fearmongering of mentally unstable people will be used to justify severe «compute governance». The Elders seem to be of the mind that useful artificial intelligence is no toy for children. You've got your guns, you've got your trucks, you've got your free press and your flags; and be happy with those inconsequential obsolete freedoms. LARP as a Butlerian Jihadist (while swiping vigorously on your personal data harvester device) or WH40K's loyal citizen (temporarily embarrassed by an onslaught of Slaanesh's forces) if that's your flavor of cope – just do not aim above your station. No «digital god» for you, little man; pray to your paper gods.