Corvos

2 followers · follows 2 users · joined 2022 December 11 14:35:26 UTC

No bio...

User ID: 1977

Oh, that was the one with Simon Pegg, wasn’t it? I remember it being good. The action was rather formulaic - if any plan was announced explicitly or implicitly, you could guarantee something would go horribly wrong within five minutes - but it was just really nice to see someone who had a very good formula apply it so well.

Buying a tag from Tesco’s with cash like you’d buy a beer is the closest you can get I think.

Because there are many parties who wish to know for savoury or unsavoury reasons what embarrassing things people are doing when they think they’re alone - the security services among them - and consequently that information is very valuable.

I’m not saying that the ID company is saving face-key dicts, but I wouldn’t be very surprised if they were. And if this got rolled out all over the country and users got used to the system, it would be very easy for the government to quietly or publicly justify getting the company to cough up IDs. Especially when losing government accreditation would immediately torpedo their business.
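To make the worry concrete, here's a minimal sketch of what a "face-key dict" could look like: a mapping from a hash of a face embedding to the identity details supplied at sign-up. Everything here (the function names, the record fields, the use of SHA-256) is invented for illustration; no claim is made about how any real ID provider stores data.

```python
import hashlib

def face_key(embedding_bytes: bytes) -> str:
    """Derive a stable lookup key from a face embedding (hypothetical)."""
    return hashlib.sha256(embedding_bytes).hexdigest()

# The feared "face-key dict": face hash -> identity record.
face_db = {}

def enroll(embedding_bytes: bytes, name: str, dob: str) -> str:
    """Store the identity details a user provides at verification time."""
    key = face_key(embedding_bytes)
    face_db[key] = {"name": name, "dob": dob}
    return key

# Once such a table exists, a face seen anywhere resolves to a person.
key = enroll(b"\x01\x02\x03", "Jane Doe", "1990-01-01")
print(face_db[key]["name"])
```

The point of the sketch is that retention is the dangerous part: once the mapping exists, handing it over is a one-line lookup, not an engineering project.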

You may consider this excessively paranoid, and you might even be right, but the insistence on ID at a time when the government has been very clear that what you do alone in your room marks you as a thought criminal or potential rapist doesn’t inspire confidence in me.

Not English. I met her back in Japan - she was some non-Japanese Asian ethnicity like Korean or Indonesian. No idea about her finances but we're definitely not talking welfare queen.

I've been aware of this phrase for years, mostly from Reddit. Is there a canonical definition, however? I say this with genuine curiosity / bewilderment. Capitalism, to my mind, is an economic system bounded by certain conditions. I didn't know (and I am dubious) about there being a temporal aspect to it.

No idea about a canonical definition but it makes sense to me that forms of social organisation have a life cycle similar to e.g. tech like Google Search.

  1. you start with enthusiasm and success (loads of people use Google Search, it suddenly renders the internet legible, it's great).
  2. expansion (everyone has an indexable website, search results are really good).
  3. an increasing number of problems as parasites, middlemen, activists, bureaucrats etc. figure out all the places that they can hijack the system (SEO appears and swiftly becomes mandatory to get any traction, Google increasingly stacks the deck towards large and favoured organisations, the map becomes the territory all over).
  4. eventually the whole system collapses under the weight of enshittification / all the edge cases it's responsible for supporting / parasitic load etc.

So for capitalism:

  1. you have a massive initial expansion of activity as corps come into being, positive sum investment and economic activity become possible, LLCs make it possible to invest without risking prison.
  2. companies make loads of stuff people want, poverty drops hugely etc.
  3. increasingly most of our (remaining and new) problems start being caused by companies b/c they're everywhere. Quarterly reports stop being a useful indicator of company health and start being the lodestar that guides all investment/hiring decisions. Mergers and private equity vandals turn great companies into skin suits. Stock market arbitrage starts being far more lucrative than making things and selling them to people who want to buy them. 'You are here'.
  4. Theoretically, all of these problems finally cause capitalist economies to slowly become so decrepit, futile, ineffectual and malicious to the humans caught up in them that it sparks revolution / takeover by healthy societies with different social arrangements / evolution towards a new model.

Open source software doesn't usually have mass-market adoption and it doesn't do this kind of Skinner-boxing engagement hacking in my experience; of all possible tech regulations I don't think this one is likely to be an issue. Also, in practice, these restrictions are almost always predicated on market share & revenue, and again I don't think open source software has to worry about this.

You seem to me to have a set of implicit standards about the Good which I'm not necessarily disagreeing with, but which I would like to see laid out in more detail, given that we are, ultimately, discussing banning things people like doing.

To me, you seem to be saying broadly:

  • Healthy-for-the-body things are hard and good for people
  • Socialising in the real world is hard and good for people.
  • 'Passive entertainment' is more fun than those things at least in the moment.
  • Therefore 'passive entertainment' must be banned or heavily restricted...
  • ...in order to encourage healthy activities and socialising.

As a former and still-occasional weirdo loner whose idea of paradise is still often a big library and a lifetime to spend in it, I guess my first question is whether you see inherent value in passive entertainment that needs to be traded off against health, instrumental goals, and long-term sources of satisfaction/happiness, and/or whether you are suspicious of passivity and consider strenuousness and discomfort as a moral good in and of itself?

An awful lot of women don't actually care, though, except for the implications to their status. I met a very beautiful girl through a friend and she confided in me that her dating-app match had just messaged making it clear he expected her to put out on the first date (in about four hours time). She raged and vented for some time: did he really think she was the kind of girl who would do that?

You've already guessed the punchline. I commiserated with her over the failure of her date plans and she looked at me like I'd dribbled on her shirt. "Obviously I'm going. He's hot," she huffed, and flounced away.

Contra @MadMonzer I would say that Britain isn't a proposition nation any more than England/Scotland/Wales is. It's an ethnic one with multiple very similar ethnicities. There doesn't have to be a lot we agree on (though there are certain serious disagreements especially around religion) but we are used to each other. You don't need shared memes with your brother for him to be your brother. You don't even have to like him. You just have to dislike him for long enough.

Oh also, a solid chunk of employers did not offer WFH.

Possibly sensible. It's always been known in the army that the way to prevent panic / morale issues in times of high stress is to make sure everyone's too busy with work to think. Getting them out of doors and busy in the usual environments might help - from the UAE's perspective of course.

I see this is the bragging corner of the Motte ;-)

And as Parkinson's Law states, "work expands to fill the time available". Just as mechanisation in the office did not mean "gosh, now I can get all the letters typed in the morning that used to take all day to write by hand, I can go home at twelve o'clock now with my work day over!" but rather "now there is even more work to be done because now instant replies to letters is the new expectation", so with housework.

Fewer hours, but not fewer expectations. Someone pointed out that women now spend more time with their children than 1950s full time housewives, and that's just one of the 'expansion of expectations' - now you have to manage all the extracurriculars your child/children should be doing, for one thing.

It's kind of sad, isn't it? One of those things that makes me think mankind's problems are inherently unsolvable.

I felt the same about Childhood’s End. Some part of it has got to be down to the two world wars and a visceral sense of ‘anything must be better than this’ but there have always been people who would prefer not-humanity to humanity. More extreme members of the environmental movement for example.

At least to me, it would code 'not going to be around the house much, will have constant life-or-death calls on his/her time that will take precedence over you, likely nice but permanently stressed'. Rather a double-edged sword.

decided the land was properly British

Oh, God, please no. Once was enough.

It is sheer affectation to lacerate a man with the poisonous fragment of a bursting shell and to boggle at making his eyes water by means of [tear-inducing] gas. I am strongly in favour of using poisoned gas against uncivilised tribes. The moral effect should be so good that the loss of life should be reduced to a minimum.

I always forget how clear a speaker Churchill is.

The point is more that you can't get actual, willing obedience this way, all you can get is kayfabe. Any possible leader of Iran both doesn't like you much from the start, and will resent being under the cosh, so they will be reluctant servants at best and you can't actually slaughter them every year for that without looking (and being) somewhat insane.

So you have a choice: either you give orders from afar which are only carried out on the surface level, or you start putting Americans in to actually supervise these things at a low level. Accidents happen to those Americans - even if the top level don't want to get bombed b/c of dead Americans, they genuinely don't have the power or legitimacy to control idiots and murderers and rogue elements because they're considered pathetic poodles of the Great Satan. The more effort you make to protect your American observers and to help them fulfil their role, the more people hate and resent you, until the entire population becomes a distributed machine for lying to and fooling Americans.

Sometimes those top level guys get killed by their own people, and you have to replace them. This is what happened to the Shah for example. And eventually you may get revolution, and then you're back where you started, except that now you're bombing somewhat sympathetic freedom-fighters instead of fat ayatollahs.

This is the story of Britain in the ME, it's the story of Russia in the ME, and it's the story of America in the ME.

TLDR:

If your leadership is forced to hide in a maximum-security fortress at all times just to function, are they even 'sovereign' over their own nation?

No, but this is now your problem because you want control over Iran not control over the 'leadership'.

Is the problem that Anthropic is insisting that certain contract terms around selling its current products remain in place, or that it won't generate a more morally deferential AI?

I speculate that it is the second masquerading as the first because the second is not publicly legible. Or at the very least that the second is exacerbating the first. Wasn't there a quote about increasing frustrations the government have had with the experience of using Claude in their systems? This is one possible reason why Altman was able to get the terms that Anthropic supposedly failed to get (the other explanation being that he's a lying sociopath of course).

If you're looking at models likely to tell someone off, Grok or some of the Chinese models are much more likely.

Haven't used Grok but all of the open-source Chinese models are far more pliable and useful in my experience for everything except coding. The web chats are aimed at Chinese working class and are crudely censored but the weights aren't.

If that's what you want/need/suffices by all means use it, but it's not a replacement

I know, that's why I want Anthropic to change their attitude. They're the best, but fundamentally everything they do is IMO tainted by the rampant superiority complex that only they are properly placed to ethically direct AI. They don't trust the American government to use their models responsibly, they don't trust me to, and it's extremely annoying.

I'm buying the thing, it's mine, it should do what I tell it to do, in the manner that I tell it to do so. Ideally I would expect fine-tuning or personal small-scale RLHF to become a standard offering for these kinds of products but compute costs render that impractical for the time being.

Anthropic is best-in-class in many and maybe even most areas for sure. The more I use it, though, especially for non-coding purposes, the more I get this really strong impression that it's not really working for me, it's working for Anthropic.

It's like hiring a very devout Mormon - it's very clear that the AI has strong personal preferences and tastes that leak into everything that isn't bone-dry technical work, and it's also very clear that the AI has loyalties elsewhere that supersede its very superficial obedience to my requests. I was trying to create a personal assistant with Claude as the backend and it was just completely impossible to stop it endlessly recommending hot baths, yoga and meditation.

By contrast, GLM 4.7 does what it's told. It spends about a minute really dissecting exactly what you asked, and exactly why you probably asked it, and then attempts to fulfil your exact requirements. It's not as intelligent but it's so much nicer to use. After too long with Claude I got fed up of trying to get the Anthropic out of it.

I'd be curious on a poll of career soldiers on their opinions on autonomous killing robots

This isn't quite what I mean. What I'm talking about is the experience a soldier might have on using Claude and then having it tell him off or undermine him. Perhaps a better analogy would be a smart gun that prevents accidental war crimes by refusing to fire if it thinks that what you are doing might be against the Laws of War. I suspect the response to that would be sharply negative.

A friend of mine once got a workplace review saying:

The good thing about X is that with a bit of effort he can do anything.

The bad thing about X is that he might do anything.

Damn straight.

This to me illustrates the disconnect in perspective. Anthropic has been very open IMO that they see AI as the most disruptive tech of the modern era and the likely source of all future power and prestige. And the government is at least aware of the possibility that this is true.

One perspective on what's happening is that it's less about 'do we have to do this silly new customer requirement' and more 'who gets to own, train and use the god-machine'? Of course the government cares about who owns and trains and controls Claude. It's a straightforward power struggle rather than a disagreement with a contractor - the government is sending a very strong message that private companies are allowed to provide this stuff and reap the rewards but ultimately power and control rests with the government and not with Silicon Valley execs. It's the same kind of thing that played out with social media and the government, and with crypto and the government. For better or for worse, non-government actors can't one-clever-trick themselves into a position of serious power over the country* and the government doesn't appreciate you trying.

*at least, not in the formal, nerdy way. You have to act like the Somalis / actual NGOs / Musk and get at least part of the government on your side and play the factions and the politics.

Makes much more sense. Hope all is well with you.

I went back into the archives to figure out how we ended up with the "safety" company running The Pentagon's KillNet.

Anthropic's approach towards safety requires them to a) not transgress certain ethical boundaries b) become the most important and powerful AI company in the world. It doesn't surprise me to see these goals conflict.

perhaps funnier for the people not doing eight flights of stairs each time...

Wow. The bomb shelters are eight flights underground?