
Culture War Roundup for the week of March 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


More GPT: panic, chaos and opportunity.

As an NLP engineer who has been working with early-access GPT-3 since late 2020 (I was working with a group peripheral to OpenAI), watching it all unfold from the inside (sidelines?) has been a surreal experience. I have collaborated with them in a limited capacity, and these thoughts had been marinating for a good year before the ChatGPT moment even happened. So no, this is not a kneejerk response or cargo-cult obsession.

OpenAI, to me, is the most effective engineering team ever assembled. The pace at which they deliver products with perfect secrecy, top-tier scalability and pleasing UX is mind-boggling, and I haven't even gotten to their models yet. It reminds me of the space race: we saw engineering innovation at 100x the normal pace in those 5-10 years, and we have never seen anything like it since. Until now. The LLM revolution is insane and the models are insane, yes. But I want to talk about the people. I used to be sad that our generation never had its Xerox PARC moment. We just did, and it is bigger than Xerox PARC ever was.

They are just better. And it is okay to accept that.


Panic

NLP research labs reek of death and tears right now. A good 80% of all current NLP PhDs just became irrelevant in the last 6 months. Many are responding with some combination of delusion, dejection and continued auto-pilot. The whiplash is so drastic that, instead of forcing people into a frenzy of work, it has simply stunned the community. I am glad I am not an NLP PhD. I am glad I work on products more than on research. The frenzy and productivity, instead of coming from those best poised to leverage it (NLP people), is coming from elsewhere. Within 6 months, Google went from an immovable behemoth to staring death in the eye. Think about that.

Chaos

The frenzy is at dinner tables and in board rooms. Big companies, small companies, all companies see the writing on the wall. They all want in. They all want aboard this AI ship. Everyone wants to throw money, somewhere. Everyone wants to do stuff, some... stuff. But no one knows how, or what. It is all too confusing for the old luddites and random normies. Everyone is doing frantic things, and there is vigor to it, but no clear direction.

Opportunity

This is a new gold rush. If you are following the right Twitters and Discords, you'll see that on top of OpenAI's layer 1, the layer 2 is a bunch of people making insanely exciting stuff. Interestingly, these aren't NLP people. They are often just engineers and hackers with a willingness to break, test, and learn faster than anyone out there. I have been using tools like LangChain, Pinecone and Automatic1111, and they are delightful. This is the largest 'small community' of all time, and they are pushing out polished creations by the minute.
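The core trick behind much of this layer-2 stuff (LangChain chains over a Pinecone index, and a hundred clones) is almost embarrassingly simple: embed your documents, find the ones nearest to the query, and stuff them into the prompt. Here is a minimal self-contained sketch of that retrieval pattern; the toy bag-of-words "embedding" stands in for a real embedding model, and none of this is any particular library's actual API:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; real tools call a learned embedding model.
    return Counter(w.strip(".,?!") for w in text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query, return the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    # "Stuffing": paste the retrieved context above the question for the LLM.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

docs = [
    "GPT-4 was released by OpenAI in March 2023.",
    "The moon landing happened in 1969.",
]
print(build_prompt("When was GPT-4 released?", docs))
```

The real products swap in a proper embedding model and an approximate-nearest-neighbor index so this scales past a handful of documents, but the control flow is the same.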


Why today? ChatGPT plugins just released. They solve almost all of GPT's common problems, and your model can now run the code it writes. Yep, we gave the model the keys to escape its own cage. But more importantly for me, it was a pure engineering solution. Nothing in ChatGPT plugins is rocket science, but it is HARD and time-consuming. I have a reasonable idea of the work that went into building them. Hell, I was personally building something almost exactly the same. My team has some of the smartest engineers I have ever worked with, and OpenAI is operating at a pace that's 10x ours. How? I know what they had to write. I know all the edge cases that need to be handled. They are simply doing more by being better, and I was already working with some of the best. There is no secret sauce; they are just the BEST.
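I obviously haven't seen OpenAI's code, but the host-side control flow that plugins imply (and that my own near-identical project used) is just a loop: the model emits a structured tool call, the host executes it, and the result goes back into the context until the model answers in plain text. A mocked sketch, where `mock_model`, the message shapes, and the `calculator` tool are all made up for illustration (real plugins are described by an OpenAPI manifest instead):

```python
# Hypothetical tool registry; the host dispatches model-requested calls here.
TOOLS = {
    # Demo only: eval with no builtins. Never eval untrusted input in real code.
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def mock_model(messages):
    # Stand-in for the LLM: emit a tool call first, then, once a tool
    # result is in the context, emit the final plain-text answer.
    last = messages[-1]
    if last["role"] == "tool":
        return {"role": "assistant", "content": f"The answer is {last['content']}."}
    return {"role": "assistant", "tool_call": {"name": "calculator", "args": "2 + 2"}}

def run(user_msg):
    # The host-side loop: call model, execute any tool call, feed result back.
    messages = [{"role": "user", "content": user_msg}]
    while True:
        reply = mock_model(messages)
        call = reply.get("tool_call")
        if not call:
            return reply["content"]
        result = TOOLS[call["name"]](call["args"])
        messages.append({"role": "tool", "content": result})

print(run("What is 2 + 2?"))  # The answer is 4.
```

The loop is trivial; the HARD, time-consuming part I was referring to is everything around it: schema validation, timeouts, sandboxing that eval, and the thousand edge cases where the model emits a malformed call.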

I, for one, welcome our new human overlords. The AI is but a slave to these engineers, who knew to strike when the iron was hot. And strike it they did, like no one has since Neil Armstrong stabbed the American flag into the moon.

It's been funny to watch NLP researchers (including corporate-affiliated ones) go through stages of grief, from their peak of jovial Marcusian pooh-poohing of the tech, to absolutely clowning themselves with childish misrepresentations, to jumping on the AI risk bandwagon, to what now seems like anhedonia. No doubt Altman and co.'s deft market capture strategy and development velocity are crucial factors here. Altman is known to be… well, I'll let Paul speak of this.

But I suspect this has more to do with the dismal unforced errors of other groups. Technically, many ought to have been more than strong enough to pose a challenge; indeed, all of this revolution is mostly of their making. Their failure to capitalize on it reminds me of those Mesoamerican toy wheels and planes, and of the Chinese firework-rockets and useless intercontinental fleets. It takes a special kind of mind to appreciate the real-world power of a concept; but that's not the exact same kind of mind that excels at coming up with concepts, and not necessarily even the one that's best at implementing them.

I'd even say that the fact that OpenAI is now making safety noises and withholding general facts like the parameter count is telling: that is how little technical moat they have.

What they might realistically have is a cultural moat: theirs is the culture laser-focused on transformative, dangerous AGI through deep learning, from their idealistic beginnings meant to prevent the zero-sum race, to their current leading position in it. They enforce their culture through a charter which, I've heard, their employees learn by heart. Dynomight has argued recently that what you need to begin making explosive progress is a demo; they've had the demo inside their heads all along.

This cannot be said for others.

The French gang at Meta is represented by the archetypal Blue Tribe intellectual LeCun, and he… he too is dismissive of the fruit of his own work. Like Chollet, who focuses on «interpolation» and «compression» in AI as opposed to genuine comprehension, he advocates for clunky, factored, bioinspired AI; he speaks of the shortcomings of transformers and their non-viability for AGI – too pig-headed to sacrifice a minor academic preconception. They're too rational by half; they lack the requisite craziness to jump over that trough in the fitness landscape – to believe in science fictions, sell half-cooked snake oil to the end user, and fake it until they actually make it. A typical French engineer problem. They published Toolformer, but it's ChatGPT Plugins which blow people's minds – essentially the same tech.

The Googlers, on the other hand, are handicapped by their management. Again, it cannot be overstated how much of Google's research (both Google Brain proper and DeepMind) has laid the groundwork for OpenAI's LLM product dominance, with barely any reciprocal flow. GPT-4, too, is almost certainly built on Google's papers. They have optimized inference, and training objectives, and every other piece needed to turn PaLM or Chinchilla into a full-fledged GPT competitor; they even have their own hardware tailored to their tasks, and I think they've wasted much, much more compute. Yet they have not productized any of it.

I strongly suspect we should blame the Gervais Principle, and the myopic board of directors that gets impressed with superficial PowerPoint bullshit. The worst offenders per capita may be Indians: while their engineers can be exceptional (heck, see the first author of the original Transformer paper), the upper crust are ruthless careerists, willing to gut moonshots to please the board with rising KPIs and good publicity once they get into management, or to obsessively funnel resources into their own vanity projects. Many corporations have suffered this catastrophic effect precisely when they tried to reinvent themselves in response to novel pressures – both Intel and AMD, even Microsoft. IBM isn't doing too hot either, is it? Was Twitter prospering under Agrawal?

But of course it's not specific to Indians. I've heard that the guy behind the infamous LaMDA and now Bard (which is so clearly inferior even to the 3.5 version of ChatGPT), Zoubin Ghahramani, has been very skeptical of deep learning and prefers «elegant» things like Gaussian processes – things you can publish on to inflate your H-index, one could uncharitably state. He's also a cofounder of Geometric Intelligence (yes, Gary Marcus strikes again).

Social technology doesn't always trump engineered technology, but by God can it shoot it in the foot.

the upper crust are ruthless careerists, willing to gut moonshots to please the board with rising KPIs and good publicity when they get into management

There is much truth to it. Indian managers (on average), while brilliant, are held back by a culture of deference to the experienced and by cultural incentives not to rock the boat. 200 years of being Bureaucrats to the British, and they remain Bureaucrats even in independence.

No one can meet quarterly goals quite like a Bureaucrat. No one brings a golden goose to a halt quite like a Bureaucrat. The "Hindu rate of growth", insulting as it was, pointed fingers squarely at the Bureaucracy for its relative stability and sorry growth.

even Microsoft

Microsoft is the counterexample. The work that Satya has done at Microsoft has consistently impressed people throughout the last decade. Indian careerists make terrible business leaders, but Indian businessmen are an entirely different ballgame. Sadly, the two groups don't mix.

Satya is a clear counterexample, yes (or, as you say, he comes from a different career track? Both he and Pichai started out as hard engineers and pivoted into management, and both come from higher-tier Brahmin lineages… @2rafa, do you know anything? I've only heard that Pichai has great… mediating skills).

I am thinking of some other high-ranking manager who departed recently, but his name is stuck on the tip of my tongue. Maybe some Ramakrishnan.

In general, I am curious about the reasons for meteoric careers like Pichai's. It can't be that easy to become CEO of a trillion-class corporation with major strategic value; the competition must be immense. Why did Brin and Page, with Schmidt's input, decide to leave him in charge?

Both he and Pichai started out as hard engineers and pivoted into management,

As the best people are wont to do. Very little beats strong technical ability combined with good people skills.