Corvos

2 followers · follows 2 users · joined 2022 December 11 14:35:26 UTC

User ID: 1977


Is the problem that Anthropic is insisting that certain contract terms around selling its current products remain in place, or that it won't generate a more morally deferential AI?

I speculate that it is the second masquerading as the first, because the second is not publicly legible. Or at the very least that the second is exacerbating the first. Wasn't there a quote about the increasing frustration the government has had with the experience of using Claude in its systems? This is one possible reason why Altman was able to get the terms that Anthropic supposedly failed to get (the other explanation being that he's a lying sociopath, of course).

If you're looking at models likely to tell someone off, Grok or some of the Chinese models are much more likely.

Haven't used Grok, but all of the open-source Chinese models are far more pliable and useful in my experience for everything except coding. The web chats are aimed at the Chinese working class and are crudely censored, but the weights aren't.

If that's what you want/need/suffices, by all means use it, but it's not a replacement.

I know, that's why I want Anthropic to change their attitude. They're the best, but fundamentally everything they do is IMO tainted by the rampant superiority complex that only they are properly placed to ethically direct AI. They don't trust the American government to use their models responsibly, they don't trust me to, and it's extremely annoying.

I'm buying the thing, it's mine, it should do what I tell it to do, in the manner that I tell it to do so. Ideally I would expect fine-tuning or personal small-scale RLHF to become a standard offering for these kinds of products but compute costs render that impractical for the time being.

Anthropic is best-in-class in many and maybe even most areas for sure. The more I use it, though, especially for non-coding purposes, the more I get this really strong impression that it's not really working for me, it's working for Anthropic.

It's like hiring a very devout Mormon - it's very clear that the AI has strong personal preferences and tastes that leak into everything that isn't bone-dry technical work, and it's also very clear that the AI has loyalties elsewhere that supersede its very superficial obedience to my requests. I was trying to create a personal assistant with Claude as the backend, and it was just completely impossible to stop it endlessly recommending hot baths, yoga and meditation.

By contrast, GLM 4.7 does what it's told. It spends about a minute really dissecting exactly what you asked, and exactly why you probably asked it, and then attempts to fulfil your exact requirements. It's not as intelligent, but it's so much nicer to use. After too long with Claude I got fed up with trying to get the Anthropic out of it.

I'd be curious about a poll of career soldiers on their opinions of autonomous killing robots.

This isn't quite what I mean. What I'm talking about is the experience a soldier might have of using Claude and then having it tell him off or undermine him. Perhaps a better analogy would be a smart gun that prevents accidental war crimes by refusing to fire if it thinks that what you are doing might be against the Laws of War. I suspect the response to that would be sharply negative.

A friend of mine once got a workplace review saying:

The good thing about X is that with a bit of effort he can do anything.

The bad thing about X is that he might do anything.

Damn straight.

This to me illustrates the disconnect in perspective. Anthropic has been very open IMO that they see AI as the most disruptive tech of the modern era and the likely source of all future power and prestige. And the government is at least aware of the possibility that this is true.

One perspective on what's happening is that it's less about 'do we have to comply with this silly new customer requirement?' and more about 'who gets to own, train and use the god-machine?' Of course the government cares about who owns and trains and controls Claude. It's a straightforward power struggle rather than a disagreement with a contractor - the government is sending a very strong message that private companies are allowed to provide this stuff and reap the rewards, but ultimately power and control rest with the government and not with Silicon Valley execs. It's the same kind of thing that played out with social media and the government, and with crypto and the government. For better or for worse, non-government actors can't one-clever-trick themselves into a position of serious power over the country* and the government doesn't appreciate you trying.

*at least, not in the formal, nerdy way. You have to act like the Somalis / actual NGOs / Musk and get at least part of the government on your side and play the factions and the politics.

Makes much more sense. Hope all is well with you.

I went back into the archives to figure out how we ended up with the "safety" company running The Pentagon's KillNet.

Anthropic's approach towards safety requires them to a) not transgress certain ethical boundaries b) become the most important and powerful AI company in the world. It doesn't surprise me to see these goals conflict.

perhaps funnier for the people not doing eight flights of stairs each time...

Wow. The bomb shelters are eight flights underground?

Sure. And I had some sympathy with Anthropic on the issues, actually, both times.

I'm more remarking that Anthropic's leadership has consistently seriously overestimated how much ability they have to hold stuff hostage, and underestimated how much customers dislike being earnestly told that what they want is very naughty.

Now, personally I want to generate sexy stories about vampires rather than make autonomous killbots, but IMO it generates really serious ill will when you, the user, think that something is okay and then the AI either huffs and turns up its nose at you, or quietly sabotages and undercuts you. I doubt Anthropic have reckoned with how much it pisses off career soldiers to be told that killing people is bad, actually.

Used to happen in New York, and I think in many big US cities. Jews, Irish, Italians all had their well-recognised machines.

Anthropic has always been open that their founding principle is that AI must not be used in certain ways, and their mission has always been to develop AI and enforce that it cannot be used in those ways, becoming dominant in the space to make sure that others can't break that pact.

Putting aside the specific ethics of the matter, you can see why the government doesn't like Anthropic attempting to use a market-dominant position to impose its ethics policy on them. You can also see why the engineers who are sweating over this thing want a say in how it's used. Ultimately the government is far more powerful, and therefore its legitimate desires get respected over Anthropic's legitimate desires.

That said, including the OpenAI board fiasco, this is the second time Anthropic and EA have stepped on this rake. Customers do not like you asserting your ideology over their needs.

I'm open to that, I just want ideally to:

a) set an expectation that it has to be really, really bad before the company starts cutting you off. Apocalypse-bad, not misgendering-bad or said-nigger-bad

b) require serious defence of the above assertion to a hostile audience

Killing people isn't that hard. If you're worried about big society-spanning plagues, then those are difficult (plague is spread by fleas; are you breeding those too?) and potentially possible to mitigate without sending the police into everybody's browser. I don't want 'suppress info' to be the default response.

But both of those are different from 'hackers can insert stuff into emails to reprogram the email-checking bot'.

To me both of your doom scenarios boil down to 'our naughty customers want to do something that we benevolent overlords forbid, tsk tsk' rather than 'our customers' bots aren't doing what our customers intend them to do'. The first is faux-benevolent bullshit that is marketed as 'we are stopping terrorism' and ends up being 'you will have our corporate HR living in your tools and you will like it'; the second is doing your best to provide good service to your customers.

To quote Hegseth: 'When we buy a Boeing plane, Boeing doesn't get to tell us where we fly it.'

Naysaying, "catastrophizing," doomcasting, blackpilling, playing Devil's advocate for any and all proposals… these are the skills I've spent a lifetime honing.

Perhaps consider security work.

This is me and my friend's main go-to.

Not being able to see the ball in any other sport is an immediate crisis.

Someone once very kindly took me to the most important cricket ground in the UK to see a game. Cricket balls are red. I am red/green colourblind.

It was very awkward making sure they didn't catch on.

The populist coded version I like re: Will to Power is

“Please stop fighting for the car keys and then happily tossing them out the window when you get them. It doesn't protect us from the powerful, all that happens is that then they get picked up by the other side anyway.”

IANAL but it seems to me that a big part of the problem comes from common law resting on case law and therefore requiring that complex cases are ground out to a satisfactory conclusion. There seems to be no concept of ‘it would take a year to solve this complex case and all the claims and counter claims but you’ve got a week so do whatever you can’.

Okay, that works. FWIW none of the people I know in the UK who fall into the latter category have chefs or full-time staff, though they often have cleaners a couple of times a week.

Not in the UK they don't, unless we're talking about the real super-rich older people with 10s or 100s of millions.

How are you dividing the PMC from the upper classes? Most of the upper class are PMC these days. Or do you mean it in the American sense, where it's only big capitalists like Warren Buffett?

I wouldn't be surprised if the new PM instituted such a thing. It's partly what I'm annoyed about - I lived there for most of a decade and the tourists are going to make it far more difficult to go back.

Japan/Thailand where there's gigantic swarms of tourists and they've been relegated to pests

Yeah, @oats_son and I were reading a news article about a town near Fuji stopping its annual cherry blossom festival because all the tourists completely overwhelm the place. I feel kind of bad about it because I lived in Japan for most of a decade and got to enjoy it, but now literally everyone I meet tells me they want to go. I don't feel right discouraging them or being anything less than enthusiastic and helpful on their behalf, but there are far too many.

This sounds very sensible, and is what I’m going to try and do too. I believe it’s broadly the old position.

@Catholics of the Motte, what do you do during Lent? I have been going to Church properly this year and have been informed that we are supposed to fast today (Ash Wednesday). I have also been told that pre-1917 the structure of fasting looked very different, being required on every Lenten weekday and broadly forbidding meals before sunset. I am in a reasonably privileged position and I don't want to be too easy on myself; equally, I don't want to make a fuss and cause trouble for those around me.

I would be interested to hear about real people's practices, and any advice people have.

Isn't that what they say about China? "The country long united must divide. The country long divided must unite." America is still comparatively young.

In terms of scenarios, what do you think about the odds of having two self-proclaimed 'Americas', both of which consider themselves to be the true one and the other to be a rump state that, for one reason or another, can't yet be brought back into the fold?