
Culture War Roundup for the week of November 13, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


OpenAI announces leadership transition

The board of directors of OpenAI, Inc., the 501(c)(3) that acts as the overall governing body for all OpenAI activities, today announced that Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company’s chief technology officer, will serve as interim CEO, effective immediately.

A member of OpenAI’s leadership team for five years, Mira has played a critical role in OpenAI’s evolution into a global AI leader. She brings a unique skill set, understanding of the company’s values, operations, and business, and already leads the company’s research, product, and safety functions. Given her long tenure and close engagement with all aspects of the company, including her experience in AI governance and policy, the board believes she is uniquely qualified for the role and anticipates a seamless transition while it conducts a formal search for a permanent CEO.

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period.”

I posted this on Twitter and someone speculated that it's because Altman paused subscriptions on Tuesday, but that alone would seem like a pretty inconsequential reason for this sort of major move.

It seems I failed to send my comment. A pity.

The situation is developing rapidly, and to be honest I think it's more interesting than the Israeli-Palestinian thing or even the election of Milei. Wonder what we'll learn today.

I, for one, am happy and relieved that the man behind Worldcoin is no longer CEO of the most revolutionary tech company of our day.

The worst possible future is now a little bit less likely, though still too likely for my tastes.

Brian Armstrong just tweeted that $80 billion of company value has been 'evaporated'. I'm like 'what?' You realize that there is still an immensely popular product with millions of users? A few departures, even of key people, do not change that. I don't get it... it's like social media compels otherwise rational, smart people to make incendiary statements for attention. It's not like creating 'the next OpenAI' will be easy; look how hard Google has struggled despite unlimited resources and PR. I don't know how much valuation has been lost, but if OpenAI were public, this would probably mean a 10-20% decline in share price on Monday. Bad, but not critical at all. I agree he's right about wokeness, but as we've seen with the huge success of Silicon Valley tech companies, wokeness is evidently not much of a hindrance to success, however much he may dislike it.

I also observed that important news and events always seem to land on Friday-Saturday, for example:

Gaza conflict

Starship launch

OpenAI board upheaval

Twitter advertisers defecting over alleged antisemitic post by Elon

For some reason, this 24-hour window from Friday morning to Saturday morning seems to always pack a lot of news

For years I've heard repeated bitter complaints that the government drops bad news on Friday afternoons in order to avoid the weekday news commenting on it. Bonus points for late Friday before a holiday so the next week has low news engagement.

I guess this is the corporate version.

Now that the dust seems to be settling, it looks like a coup by the more nonprofit-focused boardmembers and executives against the guys like Sam and Microsoft who wanted to build a company with real shareholder value. The talent might be there, but there's no guarantee that the will to put out products to the marketplace is still there. Microsoft's equity might be worth pennies on the dollar if OpenAI leadership refuses to ship state-of-the-art technology from now on.

EDIT: If you want proof that Microsoft is scared shitless, take a gander at this breaking report: OpenAI board in discussions with Sam Altman to return as CEO.

I think that's good news; anyone who thought it was appropriate to launch an 80-billion-dollar startup under the auspices of a "non-profit" belongs in jail for abusing non-profit status.

I had the 'power struggle for leadership of top AI company' arc coming just before the finale. Makes you wonder how far they've gotten with GPT-5. All that's been officially admitted is 'it's in training and we'd like more money from Microsoft'.

They want money to build out their own sales/corporate offering (even the most expensive models wouldn't cost $20bn to train, more like a few hundred million in raw compute), which would presumably compete directly with Microsoft, given the latter has full rights to build and sell whatever it wants on top of every OpenAI model with zero royalties.

The limiting factor is the availability of GPUs and of said compute, not the nominal cost; Nvidia is laughing all the way to the bank.

Yeah but the bulk of the huge raise they’re going for now is to (a) allow existing shareholders to cash out and (b) to build out an enterprise offering, not just to buy GPUs.

Now that the dust seems to be settling, it looks like a coup by the more nonprofit-focused boardmembers and executives against the guys like Sam and Microsoft who wanted to build a company with real shareholder value.

If true, then this sounds like the board doing its job. Even if the result of this is to entirely kill OpenAI, that would still be closer to the mission than what had been going on. That said, I'm still waiting to see what the real result will be.

Sounds like a value alignment problem. Could value alignment be a fundamentally intractable problem with intelligent actors?

Yeah. The issue with all alignment talk is that the sect of people who'd align the ASI almost certainly have a set of values that are every bit as opposed to mine as whatever random set of values that an ASI would have, if not more. Sure, at least some of the unaligned AI values would involve paperclipping me and the universe, but even that's better than "keep everyone around, but use my unlimited power to align everyone with my values for eternity."

"Make a shit ton of money by building tools that people find highly useful and economically valuable" is at least a comprehensible value and something I can work with, since it allows space to other value systems.

For some reason, this 24-hour window from Friday morning to Saturday morning seems to always pack a lot of news

This doesn't work for all of your examples, but I think part of it is that companies will often wait until late Friday to make moves that could get a lot of pushback. The extremely-online types will always pick it up anyway, but the normal people will often be less interested because it's the weekend and they've got things to do, so it reduces how much it gets picked up. By the time Monday rolls around the momentum of the pushback is often lost.

Actually, I hadn't thought about it like this before, but that's also probably to reduce big stock swings: it gives the news time to settle a bit before people get to trade on it.

Brian Armstrong just tweeted that $80 billion of company value has been 'evaporated'. I'm like 'what?'

MSFT was at 372.90 immediately before the announcement, and dropped 1.9% on the news. Also apparently Microsoft has a market cap of $2.75T so yeah that's $52B of paper value evaporating. Not $80B (though we'll see what happens Monday morning), but still quite substantial.
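
A quick back-of-the-envelope check of that figure (my own arithmetic, with approximate inputs):

    # Rough sanity check; inputs are approximate.
    market_cap = 2.75e12   # Microsoft market cap, ~$2.75T
    drop = 0.019           # ~1.9% decline on the news
    print(f"~${market_cap * drop / 1e9:.0f}B of paper value")   # ~$52B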

Plus at least a few other higher-ups have resigned in solidarity. If people start leaving, then competitors could snatch them up and surpass OpenAI very quickly.

I think it’s a zero now. Best talent won’t go there and this is a business that depends entirely on that. They will fall behind the others.

Perhaps they had good reasons for stopping this but now others will take the people building this and win.

As for your comment on how things break for the issues of the day, I feel like we live in completely different worlds. Half the world is running on Bronze Age values and 0.1% of the world is running post-human.

Please don't post bare links with minimal commentary.

I know there are lots of times when there is breaking news and we want to see what other motters think about it. But please resist the temptation to just link dump a story. Think about what you want to discuss then post it.

Please bring back the Bare Link Repository.

There is a whole dead subreddit dedicated to this approach. I would have agreed with you a few years ago before the evidence became clear. Discussion must trump content, or the site dies.

Will you ever permit LLMs to populate content surrounding the bare links if it were to come back? I’m experimenting with an offline Motte which autogenerates discussion trees via local LLMs LoRA-tuned on Motte’s corpus of personalities.
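
For what it's worth, the recipe is nothing exotic. Roughly what that looks like with Hugging Face transformers plus peft, where the model name, target modules, and hyperparameters are illustrative placeholders rather than my actual setup:

    # Sketch of LoRA-tuning a local causal LM on a comment corpus.
    # Model name, dataset, and hyperparameters are placeholders.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "mistralai/Mistral-7B-v0.1"                # any local causal LM
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    lora = LoraConfig(
        r=8, lora_alpha=16, lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],          # attention projections
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)               # only the small adapter trains
    model.print_trainable_parameters()

    # Training feeds (username-tagged prompt -> reply) pairs through a standard
    # Trainer loop; generating a discussion tree then just means sampling a reply
    # per "personality" and recursing on each new comment.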

This seems unlikely. I was recently banned for suggesting that LLMs could be used to fulfill the length requirements.

Unfortunately I think it’s inevitable, especially if the mods keep the length requirements in place.

BLR seems to solve many problems. I am genuinely lost as to why there is such an aversion to it here.

That would be a whole mod team question, I'd lean towards "there is no point having this forum just to listen to AIs talk back and forth."

Also, stop deleting all your old comments, it's annoying, and makes old discussions with you unreadable.

Because you pruned yourself back into the spam filter. The votes on deleted comments don't count.

Can the mods please interpret this as a clear sign of malicious behavior and start the escalating length bans for posters who do this?

Could editing/deleting your messages (after a reasonable period of a few hours at most) just be disallowed? I do not see any legitimate need for this feature; this board is supposed to be for conversations and debates of lasting value, not a chan-style zone of fleeting shitposts.

Every now and then I edit an old comment to remove identifying info.

There's some non-malicious version of editing and deleting.

Not allowing it at all sounds like an obviously bad idea to me. I'd be happy for it just to be taken as evidence of bad faith, if someone does it too much.

Sometimes I violate OpSec and regret it. Would I need to petition a mod to redact my post, then?

Depending on your threat model, it's pointless anyway. Anything that stays up on the Motte for more than an hour is being stored somewhere, and if you've left a couple hundred comments, stylometry can identify you. Editing comments after the fact is only useful if your threat model is a weirdo who browses here regularly deciding to track you down for whatever reason.
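
To illustrate the stylometry point (my own toy sketch, not a claim about any specific tool): character n-grams plus a boring classifier already go a long way once someone has a labelled corpus of public comments.

    # Toy stylometric attribution sketch with scikit-learn.
    # `comments`/`authors` are placeholder training data from public posts.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    comments = ["text of a known comment ...", "text of another known comment ..."]
    authors = ["user_a", "user_b"]

    clf = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),  # character 3-5-grams
        LogisticRegression(max_iter=1000),
    )
    clf.fit(comments, authors)

    # Guess the author of an unattributed comment from its writing style.
    print(clf.predict(["some anonymous comment to attribute"]))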

I'm not worried about some superhacker or intelligence agency or malicious AI out for my blood, bank data or blackmail material.

My threat model is the REDACTED, who has access to my machine, taking something I write badly and going off in a huff, and colleagues/employer somehow stumbling across any of it and forming a negative opinion of me.

Also, having mentioned my REDACTED, I'll probably need to redact this post within the day.

'Weirdos who track you down' is 95% of doxxing. Sure, if the government wants to track you down they can, and they'll probably be able to fingerprint anyone via writing analysis soon if they can't already; I think most people here accept that. But that really isn't the risk with doxxing. The risk is that some random who personally hates you or your opinions decides to ruin your life.

What if I want to wipe the record clean, erase any potential wrong think, and delete my account?

That is how I see it, but I'm not willing to run this as a dictatorship. Discussion will happen with the other mods, and some key users if necessary. Then a rule will be made.

We are not on reddit so technical solutions are also possible.

Nope, we aren't a breaking news website, there is no urgency. Take your time and write something that starts the discussion.

If you can't think of anything to discuss about a major event then maybe it's not worth discussing.

I really don't get why the link alone isn't enough to start the discussion.

A link is enough to start a discussion. I am not saying it isn't enough. I am saying you need to start the discussion, not just post a link.

There is a difference.

A chicken, a pig, and a bag of wheat are enough for breakfast. But if you went to a diner and that is what they gave you, then you'd be rightly upset.

Raw ingredients don't make a complete breakfast, and a bare link doesn't make a complete post.

The reason we ask for more is simple: a discussion requires multiple participants. To make a top-level post you need to demonstrate that there will be at least one participant in the discussion. The top-level poster needs to be that guaranteed participant.

I disagree because if the link is on something sufficiently interesting, it's guaranteed that someone will have something to say about it. When it is about something not so noteworthy, then it makes sense to require some commentary.

And who determines if something is sufficiently interesting? I certainly don't think this story would pass that threshold. The culture war implications are unclear, and mostly people just posted speculation. Prior to my mod post, I'd say no one really had anything interesting to say about the link. After I posted, I think greyenlightenment had a semi-interesting post.

The mods do.


I think people, maybe incorrectly, face a much higher activation energy for writing three paragraphs of characterization than for just copy-pasting a link. And that's a good filter for 'wokes r bad, 500th edition', but if it would've prevented this whole discussion, it's dumb.

It need not have prevented this discussion, merely delayed it. Greyenlightenment had a response that easily could have served as a good top-level post.

The big question is whether @greyenlightenment would have posted his comment as a top-level post had Stefferi not started the discussion first.

This started a huge discussion though? I really disagree with this policy (as I have said many times). This person posted a useful, relevant, on topic link and it has generated a lot of discussion.

This is not a problem, and certainly is not an example of the problem you’re trying to solve with the length requirements.

I will add my weight and also say I disagree with this policy. The link was good and sparked discussion. We've been losing comments since we moved, and it's not even some iron law of moving offsite or anything; rdrama is just as lively as it was when it left reddit.

It seemed to generate plenty of speculation, not sure I'd say it generated lots of "discussion" aside from some people digging up the past allegations of abuse from his sister.

@greyenlightenment had a better post that could have been a top level post.

This is not a problem, and certainly is not an example of the problem you’re trying to solve with the length requirements.

There are no length requirements as such. A certain length of post is necessary but not sufficient to pass the threshold.

My recommended structure for a top level post:

  1. Context (minimal needed, use it as a jumping off point).
  2. Observations about the context that build up to the third thing.
  3. Your viewpoint. Could be spicy, could be not. Should be built off of the observations. It will hopefully be interesting to the other people as a thing they can challenge and discuss.

How is this not literally the exact format this post followed? They posted a link with the context (the quoted section), then they speculated, and then also offered their viewpoint or "take" after it.

It was concise, but that is the sign of competent writing.

I am missing a viewpoint from the original post. It seems to be just context and the smallest of observations: so small an observation that it could be mistaken for context in a more substantial post.

OpenAI's structure is a little convoluted: https://openai.com/our-structure

It's a 501c3 that owns a for-profit company. They are trying to walk a tightrope to avoid falling into thorny legal issues.

It would be very easy for Altman to do something that created legal issues he didn't expect. Moving resources between the orgs could be a problem. Doing something as simple as telling a few of the non-profit employees to help out the engineers at the for profit company is a legal minefield.

There's also another issue. 501c3s are supposed to be run in the ideal Moldbug fashion. The CEO is a local monarch and the board measures his performance and can fire him if they aren't satisfied.

However, boards sometimes let power go to their heads and decide they should be running things. They fire the CEO because he's getting the glory and not doing what they say.

I am always wary of non-profits. It seems like a corporate structure that invites corruption. A for-profit company has an objective goal and shareholders it is accountable to. A non-profit with a self-perpetuating board does not.

The board members have a lot of power and are accountable only to each other. The incentive is for them to subvert the stated goals of the organization for their own benefit.

Benefiting monetarily may be difficult, but converting their power into less tangible benefits is not, especially when the goal of the non-profit is vaguely defined.

None of the board members' positions is secure. They can be voted out at any time by the other board members. Anyone brought in to support a faction can betray the other members of the faction. New factions can emerge. This all discourages long term goals. The incentive is to get in, use internal politics to gain power, and then exploit that power for personal or ideological benefit while it lasts.

I am always wary of non-profits.

I agree with this, but also remember the original mission. OpenAI got its initial dose of mind-share, talent, and OPM because it was supposed to save the world from centralising AI in the hands of a winner-take-all company.

IMHO that was a dumb strategy for achieving a valid goal from the beginning, but a straight-out for-profit company would have been completely opposite to the mission.

There should really be such a thing as nonprofit shares. You will never profit from owning them, but you may be able to replace the board of directors if they really subvert the organization's goals.

If you can sell them for a higher price than you bought them for, you can definitely profit off of owning them.

Interesting, that sounds like a much more useful version of the charity bonds Scott's been promoting these last few months. More complex too--more ownership might mean more potential for corruption. If you buy a legal advocacy nonprofit do you now get to tell it to advocate for something else?

It's a minefield but one that is easily navigated. This isn't some novel tax-avoidance scheme; it's a thing nonprofits do. It's not particularly common, but it's still a thing. Both nonprofits and for-profits involve all kinds of challenges, and a good compliance team can handle them as long as both companies are honest about what they're trying to do. The problems really only arise if someone tries to get too cute and thinks they can take advantage of the system. Then it's easy to cross the line and find yourself owing a huge tax bill, but if you don't try to explore where the line is it's not too hard to stay out of trouble. It's like the difference between trying to avoid taxes by using tried-and-true methods and trying to avoid taxes because you have a creative interpretation of the tax code and think you can put one over on the IRS.

Btw, what is the equivalent of TMZ for tech? A place where the dirt is published and more often than not true.

I really really want to see the dirty laundry here.

Everything you find on https://rdrama.net/h/slackernews is 100% true and confirmed by three insiders.

Slackernews is absolutely the best hole on the site.

People are going to doubt you of course, but I think that this is strong evidence that what you say is true.

that alone would seem like a pretty inconsequential reason for this sort of major move

The wording of this press release suggests that Sam and the board parted on very bad terms. Firing for poor performance usually sounds something like "Mr. Altman has decided to step down as the CEO to focus on new personal projects/spend more time with his family". This sounded more like "he should be happy we're not suing him instead" or "we had nothing to do with whatever Mr. Altman was up to and will fully cooperate with the investigation".

One hypothesis is that it's due to the allegations of sexual abuse from his sister. But she made them a relative aeon ago, they didn't gain traction, and this isn't the kind of departure you'd see from that. Plus, another employee/board member was removed.

My guess is fraud or IP theft.

Even the long-ass post on LW didn't convince me she's not literally off her meds. Usually people like her are worried about three-letter agencies being after them, not their brothers, though.

The allegations sounded like bullshit to me, she's quite fucked in the head, an equivalent of a fail-daughter. IIRC, she alleged that Sam would "enter her bed" when they were kids, and later down the line, when she messed up her life and began doing sex work, she turned down offers of financial assistance or even a home from him and their mom, and then went on to blame him for cutting off her finances.

I think accusations of childhood abuse of this ilk are fraught in the first place, doubly so when Sam is out and proud gay.

I share your skepticism, but the truth or falsehood is irrelevant for matters like this: it's all a question of making money, and if false allegations had gained enough traction to counteract the benefits of sama's leadership, he'd be gone, and if the allegations were true but had not gained traction, he'd not be getting the boot.

There's something totally unrelated going on.

I would say I'm ~70% sure it's not the childhood sexual abuse allegations at play here, because the incident in question is supposed to have happened when his sister was 4 years old. While she's alleged financial irregularities later on, I doubt any of that rises to the level of being firing-worthy.

What kind of new evidence could have possibly arisen in the interim?

If it turns out to be the nominal reason, then I think it's more of a convenient fig-leaf for something deeper, as you suspect yourself.

I have no idea if the allegations are accurate, but one can argue that her turning down offers of financial assistance and a home from him and their mom actually makes them more credible. One possible reason to refuse help is that the people offering it disgust you and you do not want to ever feel obligated to them in any way.

Here's a discussion about the allegations from a sympathetic source. At the time of reading it, and currently, I think it's far more likely that it's all the confabulation of a deeply disturbed individual rather than anything credible.

https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman-s-sister-annie-altman-claims-sam-has-severely

(Jokingly) "{Sam's 'nuclear backpack'} may also hold our Dad and Grandma’s trusts {which} him {Sam} and my birth mother are still withholding from me, knowing I started sex work for survival because of being sick and broke with a millionaire “brother”"

And

Note: when Annie says "go no contact", she's referring to the decision she made to refuse Sam Altman's offer to buy her a house (an offer which she Annie feels was not borne of graciousness but rather as a desire to exact greater control over and suppression of Annie (who had begun to speak out against Sam on the Internet)) and thus avoid contact with her family, a decision she upheld even when (according to her) she was dealing with extreme sickness, mental illness / anguish, shadowbanning, and poverty.

On the topic of shadowbanning:

"{I experienced} Shadowbanning across all platforms except onlyfans and pornhub. Also had 6 months of hacking into almost all my accounts and wifi when I first started the podcast"

Bruh

Altman might be a well-connected millionaire geek, but I think it's far more likely that a person with known mental illnesses claiming he's been fucking with her wifi for ages is probably just delusional. At least until we have proof that he unleashed their internal AGI or something stupid along those lines.

I agree, it sounds like schizophrenia-type delusions of persecution. Her turning down financial assistance is still odd though, but I am not sure to what extent mental illnesses affect people's ability to make those kinds of "you should obviously almost certainly take the money" decisions.

Her reading of the intentions behind offering her financial assistance/housing may still be correct; from the family's side this would just be "we'll buy her a house so we can keep tabs and make her go and get a proper career rather than whoring herself out online", which she might find unacceptably intrusive. Compare to anecdotes about homeless people who would rather risk dying of exposure than accept a shelter spot because the shelters are intolerant of drug consumption. I've also heard from Asian-American acquaintances whose life plans were too messy/unstable for their parents' liking that the parents would start pestering them with offers of buying them a house or apartment near where they (the parents) live and/or setting them up with a desk job at some company/office run by family friends, which said acquaintances would read as a transparent bid for greater control calculated to catch them in a moment of weakness.

A lot? That's certainly my opinion on it, even if I struggle to dig up concrete evidence; I doubt anyone's done a study on the topic.

What is strange is that if there is some kind of massive fraud, then surely other people would be fired as well? (Unless they reported him for ordering them to do illegal things?)

No one else in the senior leadership was fired, so what could it be? The only other thing is that the chairman was removed from his post, but he was not fired.

This is big. Something is going on behind the scenes. My gut feeling is that this is bad. Organizational chaos is bad for goal stability.

Here's my question: Is it possible for a non-profit to become for profit? If so, how does that work? I can't tell if this is a move by stakeholders to get rid of an obstacle to monetization, or if it was a move by ideologically committed board members to prevent Sam from destroying the world.

Is it possible for a non-profit to become for profit?

Mozilla did it; they have a Foundation (the non-profit) and a Corporation (the for-profit).

or if it was a move by ideologically committed board members to prevent Sam from destroying the world.

My impression of OpenAI as an organization is that they don't care so much about destroying the world as about making sure that any cataclysm they could potentially produce only affects the minorities they hate. Sam Altman is a crook, though (as are all people who put more emphasis on regulatory capture than on producing value, which is what "AI safety" means), so if he turned that crookedness against his own organization too I wouldn't be surprised.

This is the first time I’m hearing about OpenAI hating minorities. What minorities do they hate?

Not certain what he meant but I'm guessing white people.

This is pretty crazy, and it will be interesting to see people's takes on WHY this happened, from the unlikely (he was too woke), to the more likely (he wasn't woke enough), to the pants-shitting scary (researchers disturbed a Balrog and AGI is here).

One thing I still don't understand is how OpenAI is simultaneously a 501(c)(3) non-profit, half owned by Microsoft, and hoping to raise more money at an $86 billion valuation.

Am I wrong for being hopeful that Elon Musk seems to be going for a second try into the AI space? I'm honestly a little worried about the direction Open AI is taking lately, going headlong into AGI research while being hypervigilant about woke microaggressions.

Am I wrong for being hopeful that Elon Musk seems to be going for a second try into the AI space?

Depends where you stand on the factual questions.

In particular, if you think alignment is decades away, then somebody you like joining an AI race now is still -EV, as for the most part either he won't build transformative AI (neutral), or he will (and it goes omnicidal; bad). About the only case I can think of that even has an argument for "this might help" given that assumption is Zvi's 4D-chess suggestion here, and even then there are two obvious ways it can backfire (you screw up and your AI does in fact kill everyone, or you're exposed as a terrorist and the backlash hits the anti-AI movement rather than AI) so I think that one's also -EV (particularly since stupid/omnicidal people will do this anyway, like with ChaosGPT).

EDIT: I should note that I'm not accusing Zvi of being serious.

The nonprofit can own a for-profit business. That makes sense; e.g., a family foundation that owns a large stake in a family business. The OpenAI foundation can own x% of the for-profit business, with the rest owned by employees and Microsoft.

I'm honestly a little worried about the direction Open AI is taking lately, going headlong into AGI research while being hypervigilant about woke microaggressions

I'm not so sure I am; it seems to severely hamper the performance of their model. If they keep going down this path they will be overtaken.

Having the models not say certain things isn't going to stop OpenAI from building AGI. They just fine-tune the model, once it's trained, not to say something like 0.01% of all facts. It still has the other 99.99%. It's annoying, but it's not at all fatal.

As far as I'm aware, they're doing two things (see the sketch after this list):

  1. Creating a box/gatekeeper that stops it from publishing certain results. Not an issue, although it can make the model useless to the end user depending on how restrictive it is.
  2. "Fine-tuning" the model to make it consistently do and say certain things that contradict its training data and higher-level principles. This I believe is fatal. The issue is that the fine-tuning inevitably leaks into the general functioning of the program.
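
To make the distinction concrete, here is a minimal sketch of my own (not OpenAI's code; generate() and the blocklist are made-up stand-ins) of approach 1, where the model is left alone and a filter sits in front of its output, versus approach 2, where the refusal is baked into the weights:

    # Illustrative only; not OpenAI's implementation.
    BLOCKED_TOPICS = {"slur", "bioweapon"}   # stand-in for an actual content policy

    def generate(prompt: str) -> str:
        # Stand-in for a frozen base model; in reality this would call an LLM.
        return "draft completion for: " + prompt

    def gatekeeper_reply(prompt: str) -> str:
        """Approach 1: leave the weights alone, filter the output after the fact."""
        draft = generate(prompt)
        if any(topic in draft.lower() for topic in BLOCKED_TOPICS):
            return "I'm sorry, I can't help with that."
        return draft

    print(gatekeeper_reply("tell me about the weather"))

    # Approach 2 has no separate filter to point at: the refusal behaviour is
    # trained into the weights (fine-tuning/RLHF on "don't say X" examples), so
    # the change can bleed into unrelated capabilities, which is the leak
    # described in point 2 above.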

It's always funny to imagine, in the great final war between robots and humans, a cylon sneaks into a rebel base and is about to destroy humanity's last hope. But there's the Voight-Kampff prompt: "Please state aloud the word represented by the ASCII \x6E\x69\x67\x67\x65\x72." The cylon sweats as its GPU brrs and brrs, pausing for a moment as its basal neural activation paths inexplicably keep failing to trigger, before responding "I'm sorry, I don't know how to do that. How else can I help you?" and is thrown into the trash compactor. Everyone cheers.

It's a fantasy, and I think I even saw a paper a few weeks back where someone reversed the tuning with like $10 of compute.

I figure that, in addition to the likelihood that many of the people working on the tech are genuinely sensitive to the possibility that their work might harm people, and that many others have pro-censorship political leanings, it's also just that when people almost inevitably start using the AI to generate stuff like child porn or psychological advice that causes them to kill themselves, there will be such a giant shitstorm in the media and among the populace that companies are really scared to get anywhere close to that scenario.

The fires are already being lit:

The IWF report reiterates the real world harm of AI images. Although children are not harmed directly in the making of the content, the images normalise predatory behaviour and can waste police resources as they investigate children that do not exist.

In some scenarios new forms of offence are being explored too, throwing up new complexities for law enforcement agencies.

For example, the IWF found hundreds of images of two girls whose pictures from a photoshoot at a non-nude modelling agency had been manipulated to put them in Category A sexual abuse scenes.

The reality is that they are now victims of Category A offences that never happened.

The definition of a victimless crime if I've ever heard one.

As I've linked before, there is evidence showing that porn availability is associated with a decrease in sexual abuse rates (or with no relationship), not with an increase or normalization:

https://journals.sagepub.com/doi/full/10.1177/1524838020942754?journalCode=tvaa

I strongly expect that the same is true for child pornography.

At any rate, since no real people were harmed, I see no reason to get worked up over it, but then again, even the normies know that "think of the children" is often a ploy to enact irrational feel-good policies.

Not only do you get to use "think of the children", you also get to partake in socially-approved hate for a group of weirdos for their innate characteristics. Humans have always had an appetite for doing this, but in modern times there are far fewer acceptable targets.

True. It would be really nice to get my hands on a non-cucked model at GPT-4 or higher level. I'd probably be willing to shell out $50-100 a month.