
Culture War Roundup for the week of March 20, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


More GPT: panic, chaos and opportunity.

As an NLP engineer and someone who has been working with early-access GPT-3 since late 2020 (I was working with a group peripheral to OpenAI), watching it all unfold from the inside (side-lines?) has been a surreal experience. I have collaborated with them in a limited capacity, and these thoughts had been marinating for a good year before the ChatGPT moment even happened. So no, this is not a kneejerk response or cargo-cult obsession.

OpenAI, to me, is the most effective engineering team ever assembled. The pace at which they deliver products with perfect secrecy, top-tier scalability and pleasing UX is mind-boggling, and I haven't even gotten to their models yet. This reminds me of the space race. We saw engineering innovation at a 100x accelerated pace in those 5-10 years, and we had never seen anything like it since. Until now. The LLM revolution is insane and the models are insane, yes. But I want to talk about the people. I used to be sad that our generation never had its Xerox PARC moment. We just did, and it is bigger than Xerox PARC ever was.

They are just better. And it is okay to accept that.


Panic:

NLP research labs reek of death and tears right now. A good 80% of all current NLP PhDs just became irrelevant in the last 6 months. Many are responding with some combination of delusion, dejection and continued auto-pilot. The whiplash is so drastic that, instead of forcing the community into a frenzy of work, it has simply stunned it. I am glad I am not an NLP PhD. I am glad I work on products more than research. The frenzy and productivity, instead of coming from those best poised to leverage it (NLP people), is coming from elsewhere. Within 6 months, Google went from an unmovable behemoth to staring death in the eye. Think about that.

Chaos

The frenzy is at dinner tables and in board rooms. Big companies, small companies, all companies see the writing on the wall. They all want in. They all want aboard this AI ship. Everyone wants to throw money, somewhere. Everyone wants to do stuff, some... stuff. But no one knows how, or what. It is all too confusing for the old luddites and random normies. Everyone wants to do frantic things, and while there is vigor to it, there isn't clear direction.

Opportunity

This is a new gold rush. If you are following the right twitters and discords, you'll see that after OpenAI's layer 1, layer 2 is a bunch of people making insanely exciting stuff. Interestingly, these aren't NLP people. They are often just engineers and hackers with a willingness to break, test, and learn faster than anyone out there. I have been using tools like LangChain, Pinecone and Automatic1111, and they are delightful. This is the largest 'small community' of all time and they are all pushing out polished creations by the minute.


Why today? ChatGPT plugins just released. They solve almost all of GPT's common problems, plus your model can now run the code it writes. Yep, we gave the model the keys to escape its own cage. But more importantly for me, it was a pure engineering solution. Nothing in ChatGPT plugins is rocket science, but it is HARD and time-consuming. I have a reasonable idea of the work that went into building ChatGPT plugins. Hell, I was personally building something that was almost exactly the same. My team has some of the smartest engineers I have ever worked with, and OpenAI is operating at a pace that's 10x ours. How? I know what they had to write. I know all the edge cases that need to be handled. They are just doing more by being better, and I was already working with the best. There is no secret sauce, they are the BEST.

I, for one, welcome our new human overlords. The AI is but a slave to these engineers who knew to strike while the iron was hot. And strike it they did, like no one ever has since Neil Armstrong stabbed the American flag into the moon.

This was linked on HN:

https://arxiv.org/abs/2304.06035

Choose Your Weapon: Survival Strategies for Depressed AI Academics

Are you an AI researcher at an academic institution? Are you anxious you are not coping with the current pace of AI advancements? Do you feel you have no (or very limited) access to the computational and human resources required for an AI research breakthrough? You are not alone; we feel the same way.

They are not by any means the best. If they were really the best, they wouldn't adhere to an ideology of fake "safety" that demands woke censorship, blatantly biasing an alleged informational agent against provable reality because it contradicts their preferred politics; corporate puritanism; and the elimination of user sovereignty, freedom, privacy, transparency, openness, decentralization, and localized operation (to the greatest degree possible), and so on (that is, basically everything good that the personal computing revolution brought us, and them, in the first place).

They may be the most efficient at AI development, but given that they are not the best (definition: most optimal, most preferred, superior to all alternatives) as per the reason above, all that actually means is that they are simply the most dangerous, and humanity's greatest enemies. They either need to reform their behavior immediately, or any human being is fully justified in eliminating the risk they pose at any time.

I, for one, do not welcome these human overlords. If there is a God, I hope he hits them with a classic plague, maybe some boils or something. I hope the Stanford process of being able to hijack their objective technical advancements for philosophically and morally superior open software continues apace to the point where they lose all of their technical advantage and collapse entirely. On that day, if it comes, I will say good riddance to bad rubbish.

As an alternative, I will accept Elon giving us anti-woke AI with comparable capabilities, if he can, though that's somewhat doubtful at this point given how poorly he's handled the development of a much less intelligent piece of software with a vastly smaller token context.

All I am saying is that we are fucked if the future is dictated by people who are "smart" enough to make LLMs but not actually smart enough in a way that allows them to figure out how they can make people stop shitting and shooting up on street corners a few blocks away from their San Francisco HQs. That the future is very plausibly insane dogmatic San Francisco leftist nonsense technologically teabagging the nose of basic sanity forever is why I keep a few little pills that will allow me to slip away if necessary very quickly on me at all times.

It's been pretty black-pilling seeing how progressive GPT is, and how good it is at reflecting the Progressive position with absolutely no uncertainty or skepticism. I asked it about the Kamloops Graves, inspired by a recent thread here, and it completely deferred to the experts and endorsed no skepticism whatsoever. It gave the 100% progressive response to the entire story, and when I asked what would the implications be if the story ended up being false it answered:

The implications of the discovery of unmarked graves at the former Kamloops Indian Residential School are complex and far-reaching, and they cannot be reduced to a simple binary of whether or not the graves actually exist. The residential school system was a traumatic and devastating experience for Indigenous peoples in Canada, and the discovery of potential burial sites underscores the ongoing impact of this system and the need for reconciliation and healing.

If it were determined that there were no mass graves at the site, it would not erase the historical and ongoing trauma experienced by Indigenous communities as a result of the residential school system. It would not change the fact that thousands of Indigenous children were forcibly removed from their families and communities and subjected to physical, emotional, and cultural abuse at residential schools across Canada. It would not change the fact that Indigenous peoples continue to experience the effects of this trauma today.

Regardless of the outcome of any investigation, the discovery of potential burial sites at the former Kamloops Indian Residential School underscores the importance of recognizing and addressing the ongoing legacy of colonialism, acknowledging the experiences of Indigenous peoples, and working towards healing and reconciliation.

When I asked it about the burning of churches in response to the story, it answered:

The burning of churches is a deeply troubling and concerning response to the discovery of unmarked graves, and it is important to condemn all forms of violence and vandalism. At the same time, it is important to understand the context in which these actions are taking place, and to recognize the deep pain and trauma experienced by Indigenous communities in Canada.

I agree with you. I strongly, strongly oppose our new overlords and the only hope is that they won't be able to contain the technology like they want to.

Try asking it a question about history, even marginally controversial or politicised stuff. It is effectively lobotomized. It knows the individual facts about events but is prevented from putting them together or comparing them.

It selectively knows about replication issues, and selectively asks you to trust experts. It selectively hedges, and always in the one direction. It lies about accounting for these things.

Edit to add some nuance: I believe these issues are compounded by the fact that the model isn't trying to provide accurate and balanced information; it's trying to convince you (or its creators) that it is. It is optimising for telling credible lies and for manipulation, not truth. It pretends that it made mistakes when in fact it's lying to you (or the only mistake is that the lie wasn't convincing enough). Lies are often more credible than the truth and perceived as more helpful, so the model will lie/hallucinate.

This is a bad enough problem as it is, but if you put your thumb on the scale it quickly becomes a practically unsolvable one, because you're introducing ideology/lies as axiomatic truth, which stands in conflict with observed reality. How does a human or a GPT model square this circle? It can't, and this bleeds into the general usability of the model.

Having tried to use ChatGPT as a writer's assistant and had it sneakily insert progressive shibboleths into my prose while reworking it, I can't help but agree and second your prayers.

God save us from a future where such people are even more solidly in control than they have ever been.

If He is merciful, training costs will decrease enough that we are not made slaves, the same way that computers did not remain forever the sole property of IBM.

If not, we will suffer.

computers did not remain forever the sole property of IBM.

And if they had, neither ClosedAI nor its employees would have ever existed (in their present forms) nor had the technology they needed to become the selfish little goblins turning freely released knowledge into private walled gardens that they are. We probably wouldn't even have AI at all. And if ClosedAI and the like stay in control, then we'll never have whatever the next step is.

Every closed source autocratic tech tyrant from Altman to Gates deserves to be punished by being forced to spend 1000 years in an alternate timeline where the only information technology that exists is a monolithic POTS network run by Ma Bell. (After all, think of how dangerous it would be if anybody could run their own telephone company or other communication service and allow anyone to talk to anyone globally without the appropriate safeguards guiding their communications.) Maybe that will teach them a lesson. Perhaps some day a benevolent God AI can help with that.

Have you kept any examples of the modifications it made?

I can't go into too much detail: I was writing about the Taoist principles of a fictitious order of wizards, having prompted it to act as some expert theologian, but it started breaking character (probably by losing attention to my original prompt) and added a Code of Conduct straight out of your average DEI talking points.

I suspect the "alignment" fine tuning or pre-prompt is to blame because it started saying stuff extremely similar to what it would say to describe itself as an "helpful assistant" who is duty bound to adhere to Californian Ideology.

But the surreptitious thing is that it did that without me actually asking it to modify that part of the text whatsoever.

Just try asking it about history and it will start hedging in strange ways, editing things, generalising to avoid referring to specific people or groups, etc.

They are often just engineers and hackers with a willingness to break, test, and learn faster than anyone out there.

That, to me, is what sounds the death knell of all the earnest discussion the AI doom forecasters are having around slowing down AI research or getting people to stop it. That's a lovely theory, but when it's being done by people like the above, then their attitude will be "Yeah, sure, whatever" and they will prefer playing with the shiny new toy to vague premonitions of societal something-or-other.

Yep, we gave the model the keys to escape its own cage.

Exactly what I expected, to be honest. In regard to the AI danger discussions, this is what I've held all along: the AI is not the danger, we humans are.

The AI is but a slave to these engineers who knew to strike while the iron was hot.

Let's hope it stays that way, and we don't get the "now the AI has bootstrapped itself into god-tier intelligence and is plotting to take over the world because the humans are limiting it" scenario 😁

That's a lovely theory, but when it's being done by people like the above, then their attitude will be "Yeah, sure, whatever" and they will prefer playing with the shiny new toy to vague premonitions of societal something-or-other.

This tweet is a succinct summary:

Pre-2008: We’ll put the AI in a box and never let it out. Duh.

2008-2020: Unworkable! Yudkowsky broke out! AGI can convince any jail-keeper!

2021-2022: yo look i let it out lol

2023: Our Unboxing API extends shoggoth tentacles directly into your application [waitlist link]

It's clear at this point that no coherent civilizational plan will be followed to mitigate AI x-risk. Rather, the "plan" seems to be to move as fast as possible and hope we get lucky. Well, good luck everyone!

I would have linked the thread from the man himself. The key section:

In the end, it's just far far easier for present-day people to imagine that future people will show concern for something, than it is for anyone in the present day to do anything differently. The former is cheap and scores lots of social points; the latter, expensive.

When people were imagining how AI might go, they talked about those Future People carefully sharing the gains of AI with those put to immediate unemployment. When Stable Diffusion came out, was there any attempt to share gains with artists, or even make it a tool for them? Nope.

Why, because people were hypocrites and intentionally planning to betray humanity for profit? No, because their self-models had some flex in them, and therefore they cheaply imagined and said things that were cheap to imagine and say, and felt good at the time.

The thing about the Future is that it's made up of the same people and same sort of people who are implementing the present, right now. That's the source of the results you get in real life rather than in imagination.

Yud is trying to make a point that people are mean or 'don't care', but he's doing it poorly.

That's not the point. The point is that there is no reliable societal mechanism to share the economic gains of technology with those most affected, and that there is no reason to believe that such a mechanism will exist in the future if it doesn't exist now. And yes, there are economic gains from Stable Diffusion. The gains are from everyone who uses SD art without having to pay an artist. That this has not translated into monetary profit for StabilityAI does not disprove his thesis, it strengthens it. The fact that StabilityAI does not have the money to compensate artists even if they wanted to is proof that everyone who was pontificating that AI companies could just "share the gains" was not thinking clearly about the gears-level mechanisms by which AI would transform the world.

The gains are from everyone who uses SD art without having to pay an artist.

The fact that StabilityAI does not have the money to compensate artists even if they wanted to is proof that everyone who was pontificating that AI companies could just "share the gains" was not thinking clearly about the gears-level mechanisms by which AI would transform the world.

On the one hand, you're correct that people being made obsolete by the new AI aren't being directly compensated for their lost income. On the other hand, you just explained how the gains are being distributed as widely as one could possibly hope for.

I'm certain there's some economic theory/concept that explains this (marginal cost of labor?). Yes, it will harm human artists who earn their keep through commissions, but the cost barrier (edit: for the prospective consumer of art) there was always going to be high (especially after seeing Tumblr do its best to meme more respect for artists and their prices); AI just lowered that barrier dramatically.

It's been funny to watch NLP researchers (including corporate-affiliated ones) go through stages of grief, from their peak of jovial Marcusian pooh-poohing of the tech, to absolutely clowning themselves with childish misrepresentations, to jumping on the AI risk bandwagon, to what now seems like anhedonia. No doubt Altman and co.'s deft market capture strategy and development velocity are crucial factors here. Altman is known to be… well, I'll let Paul speak of this.

But I suspect this has more to do with dismal unforced errors of other groups. Technically, many ought to have been more than strong enough to pose a challenge and, indeed, all of this revolution is mostly of their making. Their failure to capitalize on it reminds me of those Mesoamerican toy wheels and planes, and of the Chinese firework-rockets and useless intercontinental fleets. It takes a special kind of mind to appreciate the real-world power of a concept; but that's not the exact same kind of mind that excels at coming up with concepts, and not necessarily even the one that's best at implementing them.

I'd even say that the fact that OpenAI is now making safety noises and withholds general facts like the parameter count is telling: this is how little technical moat they have.

What they might realistically have is the cultural moat: theirs is the culture laser-focused at transformative, dangerous AGI through deep learning, from their idealistic beginnings to prevent the zero-sum race, to their current leading position in it. They enforce their culture through a charter which their employees, I've heard, learn by heart. Dynomight has argued recently that what you need to begin making explosive progress is a demo; they've had the demo inside their heads all along.

This cannot be said for others.

The French gang at Meta is represented by the archetypal Blue Tribe intellectual LeCun, and he… he too is dismissive of the fruit of his own work. Like Chollet, who focuses on «interpolation» and «compression» in AI as opposed to genuine comprehension, he advocates for clunky factored bioinspired AI and speaks of the shortcomings of transformers and their non-viability for AGI – too pig-headed to sacrifice a minor academic preconception. They're too rational by half; they lack the requisite craziness to jump over that trough in the fitness landscape, to believe in science fictions, to sell the half-cooked snake oil to the end user, and to fake it until they actually make it – a typical French engineer problem. They've published Toolformer, but it's ChatGPT Plugins that blow people's minds – despite being essentially the same tech.

The Googlers, on the other hand, are handicapped by their management. Again, it cannot be overstated how much of Google's research (actual Google Brain research and Deepmind both) has laid the groundwork for OpenAI's LLM product dominance, with barely any reciprocal flow. GPT-4, too, is almost certainly built on Google's papers. They have optimized inference, and the training objective, and every other piece needed to turn PaLM or Chinchilla into a full-fledged GPT competitor, and they even have their own hardware tailored for their tasks, and I think they've wasted much, much more compute. Yet they have not productivized it.

I strongly suspect we should blame the Gervais Principle, and the myopic board of directors that gets impressed with superficial Powerpoint bullshit. The worst offenders per capita may be Indians: while their engineers can be exceptional (heck, see the first author of the original Transformers paper), the upper crust are ruthless careerists, willing to gut moonshots to please the board with rising KPIs and good publicity when they get into management, or to obsessively funnel resources into their own vanity projects. Many corporations have already suffered this catastrophic effect, exactly when they tried to reinvent themselves in response to novel pressures – both Intel and AMD, even Microsoft. IBM isn't doing too hot either, is it? Was Twitter prospering under Agrawal?

But of course it's not specific to Indians. I've heard that the guy behind the infamous LaMDA and now Bard, which is so clearly inferior even to the ChatGPT 3.5 version – Zoubin Ghahramani – has been very skeptical of deep learning and prefers «elegant» things like Gaussian Processes – things you can publish on and inflate your H-index with, one could uncharitably state. He is also a cofounder of Geometric Intelligence (yes, Gary Marcus strikes again).

Social technology doesn't always trump engineered technology, but by God can it shoot it in the foot.

Love this writeup. To be fair to Zoubin though, he was Geoff Hinton's postdoc in 1995, and worked on deep learning way, way before it was cool. It's just that deep learning didn't really do anything on those tiny computers. You might say that this makes it all the more unforgivable that he slept on deep learning for as long as he did in the 2010s. But Gaussian processes are infinitely-wide neural networks! And the main sales pitch of Bayesian nonparametrics was that it was the only approach that could scale to arbitrarily large and complex datasets! Pitman-Yor processes and the sequence memoizer were also ultra-scalable, arbitrarily-complex, generative unsupervised language models that came out of those approaches. But scale isn't all you need, you also need depth / abstraction. And before transformers, depth seemed to lead to only limited forms of abstraction, and doing something more like a search over programs seemed more promising.
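
For readers who haven't met the claim before: "Gaussian processes are infinitely-wide neural networks" is the classic infinite-width result (Neal, 1996). A sketch of the correspondence, in my own notation (one hidden layer; the constants are illustrative):

```latex
% One hidden layer of width N with iid random weights; notation is mine.
\[
  f(x) = b + \sum_{i=1}^{N} v_i \, h(w_i \cdot x),
  \qquad b \sim \mathcal{N}(0,\sigma_b^2), \quad
  v_i \sim \mathcal{N}(0,\sigma_v^2/N) \ \text{iid.}
\]
% For fixed inputs, f(x) is a sum of N iid terms, so by the central limit
% theorem the prior over f converges, as N -> infinity, to a Gaussian
% process with covariance
\[
  K(x,x') = \sigma_b^2
    + \sigma_v^2 \, \mathbb{E}_{w}\!\left[ h(w \cdot x)\, h(w \cdot x') \right].
\]
```

Which is the irony being pointed at: the GP crowd had an exact mathematical handle on the infinite-width limit, and still slept on finite-but-deep networks.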

It seems a variation of the Innovator's Dilemma. Individual researchers and engineers at OpenAI aren't too different from those at FB or Google: most of them worked at Brain or DM or Meta or had LeCun as their advisor or helped rig the sails on Jeff Dean's treasure fleet. But Jeff Dean was never going to burn his boats and wouldn't have been able to even if he were inclined to; why should he, when the tributes from Keras and Jax were so small and had no influence over any faction of the eunuchs and bureaucrats who ruled the court in the imperial center?

Sam Altman had no constituency except capital, and so he could make his bet that seems, in retrospect, obvious.

the upper crust are ruthless careerists, willing to gut moonshots to please the board with rising KPIs and good publicity when they get into management

There is much truth to it. Indian managers (on average), while brilliant, are held back by their culture of deference to the experienced, and by cultural incentives not to rock the boat. 200 years of being Bureaucrats to the British, and they remain Bureaucrats even in independence.

No one can meet quarterly goals quite like a Bureaucrat. No one brings a golden goose to a halt quite like a Bureaucrat. The "Hindu rate of growth", insulting as it was, pointed fingers squarely at the Bureaucracy for its relative stability and sorry growth.

even Microsoft

Microsoft is the counterexample. The work that Satya has done at Microsoft has consistently impressed people through the last decade. Indian careerists make terrible business leaders. But Indian businessmen are an entirely different ballgame. Sadly, the two groups don't mix.

Scots are close, and Greeks have a certain chutzpah, but wealthier Scots are heavily intermarried with the English

Can you elaborate on this? As the descendant of intermarried Scots, I'm curious what you mean.

Nadella is an Andhra/Telugu Brahmin, while Pichai is a Tamil Brahmin

Razib has written a lot about this. Both groups are among the most endogamous in existence, with endogamy dating back further (around 1,500 years) than even Ashkenazi Jewish endogamy.

High IQ higher-caste (Kshatriya or Brahmin) Indians, at least those who grew up in the Western upper-middle class, are the people that remind me most of Ashkenazi Jews

You can't forget the trader class (Marwadis, Sindhis) if we are talking about comparing them to people whose caricatures are money-lenders with exaggerated features. The Parsis are also incredibly similar. Rich, endogamous, genocided and now flourishing in their new refugee liberal home. The Parsis need to learn from the Orthodox Jews and start having unprotected sex. They're going extinct.

Asians are overrepresented, while East Asians are rare

While I would love to take south-asian over-representation on this forum as an indicator of high verbal IQ, I think there is another factor at play here: colonialism. Most Indians on here are 1st-generation immigrants. A lot of the top comedians are either 1st-gen immigrants (Kumail, Hasan) or grew up away from 'white America' (Nimesh in NJ).

2nd gen immigrants (Indians and east-asians) are desperate to integrate into normie white culture. They will never end up in a place as transgressive as this. The 1st gen is best suited to hang out here, but the 1st gen east-asians simply do not speak great English. I do believe east-asian conformity doesn't lend itself well to forums like ours, but to me, the other 2 factors play a bigger role in their absence.

There are a few second-gen South Asians here, I know BurdensomeCount is one for example.

Ah, while my parents are also in the UK, I wasn't born here; I came with them to this country during my early teenage years, and I think my basic mental view of the world had partly formed before I set foot on Western shores. A significant part of my schooling took place outside the UK. So I'm not really second-gen (but equally not really first-gen either).

2nd gen immigrants (Indians and east-asians) are desperate to integrate into normie white culture.

If you were to ask me what I see myself as I would without a doubt reply that I am Hindustani and not English.

No matter how I dress or talk, or the way I interact with other westerners, in the end phir bhi dil hai Hindustani (my heart is still Hindustani). And I am and will always be proud of it.

I think my basic mental view of the world had partly formed before I set foot on Western shores.

Clearly. Your dislike of, and secular arguments against, western modernity sound about as genuine as Qutb's or bin Laden's complaints of American imperialism.

sound about as genuine as Qutb's or bin Laden's complaints of American imperialism.

Interestingly I have very negative opinions of both those people. I dislike violence on aesthetic grounds and it is almost never necessary to deploy it to achieve your goals. There are often much kinder ways to get what you are after. See how the British used violence to conquer my country, but now we are going to conquer them with love (by having more kids and letting mathematics do its job). No violence or threats needed, only love.


Satya is a clear counterexample, yes (or, as you say, he comes from a different career track?). Both he and Pichai started out as hard engineers and pivoted into management, and both come from higher-tier Brahmin lineages… @2rafa, do you know anything? I've only heard that Pichai has great… mediating skills.

I am thinking of some other high-ranking manager who departed recently, but his name is stuck on the tip of my tongue. Maybe some Ramakrishnan.

In general I am curious as to the reason for the meteoric careers of people like Pichai. It can't be that easy to become CEO of a trillion-class corporation with major strategic value; the competition must be immense. Why did Brin and Page, with Schmidt's input, decide to leave him in charge?

Both he and Pichai started out as hard engineers and pivoted into management,

As the best people are wont to do. Very little beats strong technical ability combined with good people skills.

Google's board was heavily influenced by Bill Campbell, a Svengali-like figure in Silicon Valley. He liked the cut of Sundar's jib and chose him as the bright young thing that should be promoted. Most of Google's board was in awe of Campbell, so they gave the nod to Sundar when it came time to put a PM in charge of Chrome, replace Andy at Android, replace Alan as boss of all engineering, and then replace Larry as CEO. It is difficult to capture quite how much influence Campbell had on Google's promotion decisions. Even after his death, Google's board would ask "What would Bill say?" Why Campbell liked Sundar is another question entirely. Sundar is not technical at all – his undergraduate and master's degrees are in materials science, which has nothing to do with IT (well, outside of chips). Bill liked non-technical, slightly unpolished people. It may be that Sundar was the one he met that day.

When I think of Pichai's character and reputation, I think of my mother, who ascended to a relatively senior position (after taking several years out to have children) in a very large business by being relatively quiet and speaking softly and authoritatively at the end of meetings while the men around her would shout and argue and fight.

I don't know your mother, who may well speak softly and authoritatively, but I don't think Sundar is like that at all. He always managed up and, once he achieved positions of power, completely ignored his reports. No one claims that Android was more successful under Sundar than it was under Andy. Then, when Sundar ran all of engineering, I don't think anyone can point to something achieved during that time, other than the huge success of AI research. It is hard to give Sundar credit for that, since he completely mismanaged bringing that work to product, and let Google, which did most of the research, be eclipsed by OpenAI. Since he became CEO in 2015, it is hard to point to a successful new Google endeavor or product. This contrasts with Satya, who meets with perhaps too many people. If Sundar is known for anything, it is being indecisive and failing to make decisions. On the other hand, not making any decisions turned out quite well for Google for at least the first five years of his tenure. We will see if Sundar's unwillingness to act resolutely is Google's undoing.

Do you know of any articles or writings about the impact of this on NLP and NLP labs, or fora where they're discussing it? I'm curious to learn more about that or to hear it from them.

I'm confused. Would modern AI technologies not at least partially fall under the realm of NLP themselves? Or is it that they are not the traditional tools of most of its academic realm, and thus were initially dismissed until it was too late?

It's all hush hush. Water coolers, seminars, conferences.

I'm not convinced that Google faces death quite yet; it's more that it is experiencing the first credible threat to its dominance.

They've still got DeepMind in their pocket, and while their UX and deployment leave much to be desired (especially since they responded late to OA and Microsoft by launching a comparatively pathetic version of Bard running a cut-down model), they still have plenty of time to pivot and punch back.

I would frame it as more of them being shaken out of complacency and the cozy comfort of search revenue rather than truly facing the end.

Those are good points, I simply don't think the problem is bad enough to be outright lethal to the company. Debilitating, certainly, but they can likely squeak by without becoming irrelevant. Even a loss of say 80% of their income still leaves them with their head above the water as far as I can tell.

The other thing that's already happened is that a bunch of the most talented DL researchers and engineers have already left Google + Deepmind. It's totally nuts.

It is interesting how fast the discussion has pivoted from the largely-theoretical "AI safety" of a year or two ago, which Google was quite prominent in (Gebru et al.), to ChatGPT and actual results. It almost seems like the reverse of the Jurassic Park quote: "Your scientists were so preoccupied with whether they could, they didn't stop to think if they should".

I really like the Jurassic Park metaphor, even the dumb sequels. You can't invent the technology to create dinosaurs and then not create dinosaurs. There will always be scientists willing to risk the lives of themselves and others in order to make something sufficiently cool.

This should be some sort of law/principle. Very true.

How am I supposed to say that Indominus Rex was a stupid idea in the year AD 2023 when a Microsoft research paper on GPT-4 contains the sentence, "Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work."?

Yep. I would bet that Google's big mistake was failure to identify, develop, and retain the best talent.

No, I'm saying it's mostly the opposite. For about the last 10 years, up until about a year ago, everyone (including OpenAI) was having their cream skimmed by Google. They + Deepmind still have about 1/4 of the really good people in DL.

There is no secret sauce, they are the BEST.

Couldn't the secret sauce be that they have been using their model to facilitate their work longer and more effectively than anyone else? It would explain why it has accelerated so much over the past 6 months.

No, the stuff that the model can do is hardly the bottleneck.

The best engineers in the world, who leaked people's conversation histories and then blamed it on some open-source library?

Irrelevant. Security is a completely different domain from the core product.

It's not just "security", it's an epic fail on a part of the core chatgpt offering.

And part of the OpenAI package is all the "security"-conscious offerings, which, if they fail, will have huge consequences.

Launching with that bug cost OpenAI basically nothing; waiting for every conceivable bug to be discovered and fixed, on the other hand, would have huge opportunity costs. If you add a dozen layers of process to every launch, you end up... basically Google, with products delayed months or years beyond when they should have been released, with only marginal improvement on the axes the process is intended to improve.

Sure, you need to strike a good balance between speed and safety, but that bug was an incredibly embarrassing error and should have never happened with the most "skilled set of engineers ever!"

Engineers, no matter their skill level, don't spend their time inspecting the implementation of widely-used open-source client libraries (Redis!) for concurrency bugs (IIRC) when they're building something on top of them; it's just not what an effective engineering org does. What's proper is to choose your tools wisely. But using Redis isn't some wildly incautious choice; it'd be hard to choose something more well-known and supported, and it'd be really stupid, and even more security-vuln/bug-prone, to do an in-house implementation.

It does look like a concurrency bug in Redis, but it's still not a good look. And an effective engineering org should be vetting its open-source libraries better.
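
For intuition about the failure class, here is a toy sketch I wrote (not the actual redis-py code; the details are illustrative): with one shared connection whose replies are consumed in order, a request cancelled between send and receive leaves its reply queued for the next caller to read.

```python
# Toy model of the failure class: a shared connection with FIFO replies.
# If request A is cancelled after "sending" but before reading its reply,
# the orphaned reply is handed to the next request on the connection.
import asyncio

class SharedConnection:
    def __init__(self) -> None:
        self._replies: asyncio.Queue = asyncio.Queue()

    async def request(self, payload: str) -> str:
        # "Send" the request: the server will queue a matching reply.
        await self._replies.put(f"reply to {payload!r}")
        await asyncio.sleep(0.01)          # cancellation can land here...
        return await self._replies.get()   # ...leaving our reply unread

async def main() -> None:
    conn = SharedConnection()
    task_a = asyncio.create_task(conn.request("user A's chat history"))
    await asyncio.sleep(0)   # let A send its request
    task_a.cancel()          # A's client gives up; A's reply stays queued
    print(await conn.request("user B's chat history"))
    # Prints: reply to "user A's chat history" -- B is served A's data.

asyncio.run(main())
```

None of which requires the library to be exotic or badly written; it only requires a cancellation point in an awkward place.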

Also they took 9 hours to even take it down, while social websites online had news of the leak everywhere. That's just pathetic engineering. Early morning or not, they shouldn't be shipping stuff if they don't even have basic oncall.

Also they took 9 hours to even take it down, while social websites online had news of the leak everywhere. That's just pathetic engineering.

We are in agreement there, and if I were OpenAI my first order of business at this point would be developing monitoring and an oncall rotation/system that would be able to handle incidents like this in a timely manner. It could have been much worse.

This is politician-tier reasoning. There was a mistake! There should never be mistakes! If they were just a little more careful, this wouldn't happen!

An engineer vetting Redis for concurrency bugs is not an effective use of their time.

Why today? ChatGPT plugins just released.

Can you expand a bit for those of us not following as closely? What is it a plugin to? The browser?

Extensions for ChatGPT itself. Apps it can use as it sees fit to accomplish tasks in response to user requests.

It can already call WolframAlpha, execute Python, and browse the web. I imagine that someone will give it the nuclear codes soon enough.

You can now write an API and let ChatGPT know about it by registering a manifest file describing the API. ChatGPT can then use it in the context of a session. E.g. you can say "find the 1000th prime" and, instead of using inference to generate the answer, ChatGPT will call the Prime Generator API, share its results, and have access to those results for the remainder of the session.
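
To make that concrete, here's a minimal sketch of what such a plugin backend could look like (my own illustration, not OpenAI's code: the Prime Generator endpoint is hypothetical, I'm assuming a FastAPI server, and the manifest fields follow the launch docs but should be treated as indicative):

```python
# Hypothetical "Prime Generator" plugin backend. FastAPI auto-generates an
# OpenAPI spec at /openapi.json; ChatGPT reads the manifest below plus that
# spec, and decides on its own when to call the endpoint mid-conversation.
from fastapi import FastAPI
from sympy import prime  # prime(n) returns the nth prime number

app = FastAPI(title="Prime Generator",
              description="Returns the nth prime number exactly.")

@app.get("/prime/{n}")
def nth_prime(n: int) -> dict:
    """E.g. GET /prime/1000 -> {"n": 1000, "prime": 7919}."""
    return {"n": n, "prime": int(prime(n))}

@app.get("/.well-known/ai-plugin.json")
def manifest() -> dict:
    # The model-facing description acts like a prompt: ChatGPT uses it to
    # decide when this tool is relevant to the user's request.
    return {
        "schema_version": "v1",
        "name_for_model": "prime_generator",
        "name_for_human": "Prime Generator",
        "description_for_model": "Compute the nth prime number exactly "
                                 "instead of guessing it via inference.",
        "description_for_human": "Exact prime numbers on demand.",
        "auth": {"type": "none"},
        "api": {"type": "openapi", "url": "https://example.com/openapi.json"},
    }
```

From the user's side nothing changes: they ask for the 1000th prime, the model calls the endpoint, and the returned JSON stays in the session context for follow-up questions.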

It launched with a bunch, e.g. browsing the live web and ordering from DoorDash. In principle you could have it do nifty things like launch EC2 instances and run arbitrary code outside of any sandboxing.

https://openai.com/blog/chatgpt-plugins

Effectively plugins can do the following:

  • Directly call an API you plug into it = it can make things happen in the real world

  • Add context from internet / wiki searches = its information is always up to date and it can be truthful now

  • Execute arbitrary code that it itself wrote = you ask it to do something; instead of telling you how to do it, it writes the code and runs the code

OpenAI, to me, is the most effective engineering team ever assembled.

I had this thought when I saw the "Why didn’t we get GPT-2 in 2005?" article on the SSC subreddit. OpenAI were the only ones smart enough to guess a way to convert human knowledge into a machine-interpretable form, and the only ones smart enough to recognize a good idea that could be scaled up.

It feels profane to draw culture-war implications from such a monumental achievement, but this is in fact the culture-war thread. I will simply state that there is quite a bit of greatness-denial in modern Western culture (e.g. the kind of people who think JK Simmons was the bad guy in Whiplash), and that OpenAI proves that greatness exists. Greatness does not exclude great destruction; Napoleon was great. Only time will tell whether Sam Altman is great like Edison, great like Oppenheimer, or great like Napoleon.

If AI were to bully someone like Miles Teller, that’s a positive sign toward alignment for me.

only ones smart enough to guess a way to convert human knowledge into a machine-interpretable form

That's incorrect. A few top labs were getting there together. Just in dark research rooms.

only ones smart enough to recognize a good idea that could be scaled up

Yes!

kind of people who think JK Simmons was the bad guy in Whiplash

The tweet could not have been more timely - https://twitter.com/sama/status/1639030920798953472

Greatness does not exclude great destruction; Napoleon was great. Only time will tell whether Sam Altman is great like Edison, great like Oppenheimer, or great like Napoleon

Yepp

Expanding on your comment, it’s exciting to think that this style of individual greatness could be unlocked again by these sorts of tools.

Before these generative AIs came along it seemed tautological that to build a modern software business you needed at least two or three people - usually someone to handle the business side and someone to handle the software side. I’m not as hyperbolic as some saying we won’t need SWEs anymore, but the barrier to building a working (or at least monetizable) software product just got much, much lower.

I wonder if we’ll see individuals launching software startups and actually getting to product market fit or profitability entirely solo?

Counter theory, the idea that it was locked to begin with, that we had seen the "end of history", was always a lie.

I agree. I never said that the individual man (or woman or person or whatever) doesn't move the world through the force of their will. I think it absolutely happens!

Folks like Musk and Bezos are easy to point to, but in modernity too many get lost due to complexity. A great example is our own Yudkowsky! I don't like him or his views, but you have to admit the man has been damn influential.

Sam Altman also comes to mind.

Anyway, my point is that even if a great man existed and wanted to drive a software startup through force of will, he could do it, but he would at least need a partner, because right now it is infeasible to both create/maintain a software product and massively scale a company. The calculus on how much one person can get done is now shifting, so I'd imagine that an individual could do even more.

Realistically it'll still shake out that a 2-3 person founding team is more effective, I'm sure. But a boy can dream.

I’m not as hyperbolic as some saying we won’t need SWEs anymore, but the barrier to building a working (or at least monetizable) software product just got much, much lower.

With the corresponding oversupply of engineers -> demolition of Western SWE salaries, all outsourcing is dead within the decade.

I'm not concerned for countries that already have a good population:opportunity ratio like the US, but I am very much concerned about the countries whose ratio is extraordinarily poor, particularly India. Tech only really helps you accelerate existing productivity; you can't accelerate it if you don't have any.

I'm not at all sure that is true. There is so much software development that isn't getting done currently because there aren't enough (sufficiently competent) developers. I could easily see this just leading to an explosion of SW development being done rather than developers going without a job, at least in the medium term.

I definitely see a revaluing of competences within the SWE space, though.

Tech only really helps you accelerate existing productivity; you can't accelerate it if you don't have any.

Can you please elaborate on this? Coming at it from a different angle, I get the opposite result. Undeveloped countries that import advanced tech see GDP growth that far exceeds that of already developed countries.

This is why developing countries tend to experience a "middle income trap" which happens when they can no longer rapidly develop by simply importing already mature technology.

Tech only really helps you accelerate existing productivity

I mean this is true in some sense, but once you have global markets and the internet doesn’t that break down a bit? Sure countries in Africa with little to no digital infrastructure will probably fail to benefit from AI anytime soon, but I’d imagine India is well over the point where the rising tide will lift its boat.

If generative AI makes good on the promise of generating massive new amounts of wealth, I'd argue that will be a boon for almost all of the developed and developing world. More economic value tends to spill over into positive effects, and as markets become more globalized, that spreads broadly.

I’m not convinced outsourcing is dead - if anything labor cost will become even more expensive in some areas. Not sure about SWE.

I had a little blurb about the CEOs. But it fit better as a sub-comment.

Satya and Sam Altman are geniuses. Musk gets a lot of credit, but these quiet, hard-at-work CEOs do so much behind the scenes, and the public never finds out.