anagast

joined 2022 September 04 22:46:08 UTC

User ID: 230

It's hard to wrangle people

Were you expecting it would be easy? Scaling from "do it yourself" to "get your team to do it" is a rare skill.

Sysadmins refuse to help with SAML setups if they don't provide all the information

From their perspective, you are scope creeping. You're asking them to take on more work, and it's not the kind of visible high-impact work that leads to increased compensation or political power. You might solve this by figuring out how to reward the work more, or by increasing the amount of coercion applied to the sysadmins, but it's unlikely the problem is resolved without one or the other.

workstation team are idiots

MBA and project manager assholes who don't know anything

Yeah, intelligence is in short supply. You can try to find some magic pixie dust that lets you hire better, or you can figure out how to factor the work such that it can be productively resolved a few rungs down the IQ ladder. PM types tend to be responsive to systematization -- "I did the X process for Y client and now we're Z% of the way through the flowchart" sounds better than "I did some bespoke work for Y client", even if it's the same work. And IME once you set up the bare bones of a structure, they'll be quick to pick it up and fully develop the process.

this Gen X asshole who always tells me I'm wrong

If this harms productivity, then you have a politically cheap justification to ask him to change. If it doesn't harm productivity, then who cares? Let him be an asshole.

The DEI stuff where completely useless people get promoted into positions they don't understand

Yeah, can't help you there. The US government effectively mandates that you hire unqualified people. If it's any consolation, your competitors are required to do the same.

I've been reading up on the same, spurred by Palladium's recent piece on a related topic.

The 1991 CRA lists the goal "to codify the concepts of business necessity", but it doesn't actually do anything to define that term. The most common legal theory I can find is "No Alternatives", which states that you can use an aptitude test as long as there's no alternative that would have less disparate impact. The actual implementation seems to be a hedge magic of best-practices, derived through the flailing of HR departments reacting to lawsuits. Critically, the burden of proof is on the business -- if you're causing a "disparate impact", you're guilty by default unless you can prove the necessity.

So, there could be room for the courts to clearly spell out a way of proving business necessity. If I were a lawyer I'd go digging for court cases where such a proof has been successful.

If you're smart enough [...] you'd be able to do magic

From an empirical perspective, this has mostly turned out to be true. Telephones, horseless carriages, Haber-Bosch fertilizer, insert here the same feel-good rant you've heard a thousand times. Maybe rationalists would be very different if technological progress were slowed 10x or 100x.

It's hard to predict exactly what form the magic will take, but very predictable that something about the future will feel like magic to us moderns. Probably most spaces don't have a hairline crack shortcut through the manifold -- but it only takes one.

How do you secure your position as the world makes an important technological transition? If you're politically savvy, you'll be as fine as anyone else. For the rest of us, the best bet is to be one of the builders, and that's best accomplished by neurotic high-IQ systematization. Unless you have a better suggestion?

I do feel uncertain, seeing that QC has been through all this and decided to do something else instead. He's smart, maybe he knows something I don't? My current best bet is that he's a tragic case of someone who wandered off the true path, lured by the siren song of postrat woo. But I do sometimes wonder if he's made some Kegan step that I've failed to comprehend.

Supercritical nuclear chain reactions are divided into delayed critical, where one cycle of the over-unity feedback loop takes on the order of seconds, and prompt critical, where it takes on the order of nanoseconds.

I think we've been delayed critical since Attention is All You Need -- even if OpenAI had fizzled at that point, someone else would have carried the torch. And I say we'll be prompt critical when OpenAI et al could carry on without human input.
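
To put rough numbers on that gap, here's a toy one-group growth sketch of my own (not from the original comment; it ignores the full delayed-precursor kinetics, and the generation times are round illustrative values, not reactor data):

```python
import math

# Toy model: population grows as n(t) = n0 * exp((k - 1) * t / L), where k is
# the multiplication factor and L the effective time per neutron generation.
def doubling_time(k, L):
    return L * math.log(2) / (k - 1)

k = 1.001  # slightly supercritical
print(doubling_time(k, 0.1))   # delayed neutrons set the pace: L ~ 0.1 s -> ~69 s per doubling
print(doubling_time(k, 1e-8))  # prompt neutrons alone: L ~ 10 ns -> ~7 microseconds per doubling
```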

Would the system of rocks be conscious?

Yes. You're simulating a human -- you can have a conversation with them, and ask them what they see, and they could describe to you the various hues that they perceive, or else be surprised that they are blind. They could ask where they are, and be upset to learn that they're being simulated through a pile of rocks and that you don't believe they are conscious. Anything less would be an incomplete simulation.

That's the beauty of the Turing machine: it's universal. Given enough time and space, even something as dumb as Rule 110 can compute any other computable function. And the materialist perspective is that the human mind is such a function.
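
For the curious, a minimal sketch of just how dumb Rule 110 is: each cell's next state is a pure table lookup on its three-cell neighborhood (my own illustration; the grid wraps at the edges to keep it short):

```python
# Rule 110: the next state of each cell is looked up from the 3-cell
# neighborhood (left, center, right), read as a 3-bit number 0..7.
RULE = 110  # 0b01101110: the whole lookup table, packed into one byte

def step(cells):
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2)
                  | (cells[i] << 1)
                  | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and watch structure emerge.
row = [0] * 63 + [1]
for _ in range(30):
    print("".join(".#"[c] for c in row))
    row = step(row)
```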

I don't think this is generally valid. What makes x86 and an ethernet cable different from grey matter and a spinal cord?

Is it separable? Ideally, obviously, but in practical terms?

If the Mayor of Random City, Guatemala needs to crack down on violent crime, I'd bet his first options are "hire some scary-looking cops" and "tell the existing cops to be more aggressive". From a bureaucratic perspective I'm not sure if there's a silver bullet that lets you beef up your police force, without also accidentally empowering some thugs.

Sure, "hire better and more competent people", if it were that easy everyone would be doing it already. I don't mean to say that violent extralegal police gangs are acceptable, just that it will take some inventiveness to remove them.

I've long accepted that any kids I have will seem at least slightly alien to me. I'm somewhat baffled that any reasonably intelligent person has had children within the last 50 years without expecting the same. Have you ever wondered what your twenty-times-great grandparents would think of your modern ways? Values change over time, even the most conservative RETVRN poasters are ideologically very different from medieval farmers -- and happily so. I don't want to dictate to future generations, any more than I'd want the ghosts of my ancestors dictating to me.

Maybe this goes wrong? Say economic doubling times continue to accelerate and the disconnect between generations grows larger than we can bear. Or maybe Aubrey de Grey wins and it goes the other way, with 200-year-old fogeys clogging up politics? It's all very uncertain to me; I'm worried about one or the other on alternate days of the week.

I see this as an instance of a broader pattern: "All changes must be Pareto improvements". Demands for higher standards are fine, but the implicit "... or else there should be no AI at all" feels destructive.

What's more curious to me is that these principles would not feel out of place coming from a generic California university or tech company. But the Vatican? Is this Pope Francis' modernizing reformist influence? I would have expected something like "we should ensure the benefits of AI make their way to the global poor" and "we as a community need to take care of anyone displaced by technology". In particular, the "Transparency", "Reliability", and "Security and privacy" points feel odd here.

Also, I am relieved to have made it out of university before "Algorethics" becomes a freshman compsci class requirement.

If you can spin up a new Terence Tao clone for $0.05 per hour, then no human who is not more capable than Terence Tao in some dimension can earn more than $0.05 per hour. I would create enough Terence Taos to do all the mathematics I want, and then create some more to do my accounting. The opportunity cost is the cost of the hardware it would take to run such a model, and hardware costs are already falling exponentially even without AI electrical engineers.

Automation increases society's capacity to pay people for work, but not the economic need to do so. The robots are being built and maintained by robots. The "critical point" I describe comes when anything that a sub-120-IQ person can do, can also be done by a robot for $0.05 per hour.

"A fall in the expected marginal productivity of labour to 0" is exactly what I'm talking about. Experientially, this looks like slowly raising the minimum IQ it takes to earn enough to survive until almost all people are excluded, and population massively shrinking as a direct result.

Increased labor productivity: see the obituary of advanced chess. There comes a point where Human + AI is worse than AI alone. Rather, it becomes an anomaly when a human is able to meaningfully assist an AI in accomplishing some task.

Both you and @Gillitrut have cited comparative advantage. I don't think that saves us here. Comparative advantage means you need to be willing to do some job for less than the cost of operating a robot to do that job. In a high-automation world, that cost will be very low -- the robots are building the robots. What I mean by "expected economic value of a typical human goes negative" is that the price someone would be willing to pay for a human to do that job is less than the price of the resources it takes to maintain a human life.

Imagine I'm Cyberpunk Genghis Khan. I have robots that produce everything of economic value to me, including art, food, and military might. I'm keeping Mongolia as a nature preserve, and some subsistence farmers are trying to live out in some forest. Why should I let them? They produce some valuable widget, but they need to be allowed to keep at least enough farmland to keep them alive. I could have my robots build that same widget while occupying half the space.

If they press their comparative advantage, they could produce widgets with one third the space they need to live, and then die because humans require upkeep and industrial automation can drive the value of labor below that upkeep cost.
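
A toy version of the arithmetic I have in mind (all numbers invented purely for illustration):

```python
# Invented numbers: the point is the inequality, not the magnitudes.
human_upkeep = 10.0  # daily cost to keep a human alive: food, water, land
robot_cost = 1.0     # daily cost to run a robot doing the same job

# Comparative advantage keeps the human employed only by underbidding the
# robot, so the market wage is capped at the robot's operating cost.
max_human_wage = robot_cost

if max_human_wage < human_upkeep:
    print("The trade still clears on paper, but the wage can't cover upkeep.")
```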

I agree that present-day EV is positive.

Humans take maintenance: food, water, medicine, education, entertainment. Even if you'll accept being a subsistence farmer in the wilderness, that costs land. I'm predicting that, post-automation, most humans will be unable to do enough useful work to pay for this upkeep. That is: anyone able to provide you with food or water or farmland, could get what they want more cheaply by paying for a robot. At that point it's economically efficient to do away with the human. That's what I'm worried about.

Yep, that's what I mean. From the perspective of a country, humans are currently the most economically efficient way to get most things done. E.g. China can do interesting things with 1.4 billion people; "GDP per capita" is still a meaningful metric.

Now, say we go through an automation foom. You can grow lots of crops and feed lots of people very cheaply -- but what do you get in return? Starving the masses is a bad idea because you want to preserve your labor force -- but your labor force is all robots now, so what does it matter? All else equal, a country with a small population will outcompete a country with a large population.

I don't like this, but I'm not sure what to do about it.

The critical point is when the expected economic value of a typical human goes negative. That's when things start to go screwy, and I worry we're crossing that line soon.

Presently, deriving wealth from large numbers of the lower classes is the most common route, but what if you could derive your wealth from large numbers of robots instead? Unless the aggregate poor can sell something to the aggregate not-poor, cash will flow away from the system until the population dies out.

This obviously concludes with Mongolian supremacy, since they have land to build with and fewer mouths to feed. Steppe Nomads at it again.

Ultimately: low med school acceptance rates, caused by lack of residency positions, caused by lack of hospitals, caused by monopolistic pressure, caused by healthcare law. End soapbox.

Locally, I'm not very certain. Some possibilities:

  • Keep med school acceptance rates up (90% of our pre-med graduates are accepted!)

  • Keep class sizes for higher level pre-med courses down, to focus effort on the ones likely to make it

  • Honest good intentions that students not waste their time studying for a profession they'll never be able to take up

  • Administrators annoyed at all the Juniors switching majors

"it also has the reputation of being a weed-out class"

Some universities, apparently including NYU, use Organic Chemistry as a way to limit pre-med class sizes. My university's version passed around 50% of students, not because only 50% attained the Organic Chemistry ability required of an MD, but because a horde of freshmen sign up for pre-med and you need to whittle that down before they take senior classes and apply for med school. So even if this is an artifact of SAT-optional student quality, in the spirit of busting bottlenecks I'd prefer an Organic Chemistry course to have standards commensurate with every other class.

Sadly, I think this is not part of anyone's motivation to fire Jones. It's likely the adaptation pressure you describe. In my ideal world we'd have good (objective!) standardized tests as our primary gatekeeping mechanism. But for some reason, measuring ability is anathema.

Yeah, I'm in favor, but we're barely getting started and people are getting upset already. Most of it is finger-pointing and claims that someone somewhere will be upset, but there are some people actually upset.

The tricky part is that some mundane work and some intellectual work are easy to automate, but in many cases it's hard to tell ahead of time just how hard it will be. You can predict trucking and data entry will die off, but what will it take to crack cooking or construction or hairdressing?

And the white-collar work will be equally scattershot at approximately the same time. "It's all on the computer so it should be easy to capture inputs and outputs right?" is the kind of assumption that makes a million AI researchers' foreheads hit the desk. We'll certainly get there eventually but in this case the specifics matter.

If all the grunt work went first, we could hope for a smooth transition to post-scarcity. Sorry, we automated away your job scrubbing toilets, but on the upside production is so cheap that you can survive off your 19 Twitch subs as a League of Legends VTuber. But if the robot revolution happens in patches more or less randomly, then there are people with real social power that they stand to lose, and that implies chaos.

--

Separately, new tech is always a gimmick, until it's not (though I agree most gimmicks are not new tech). I wouldn't have trusted Atlas to run a parkour course, until it did. Who's to say Boston Dynamics will stop short of cleaning bathrooms? It seems unlikely there will be a fundamental limit in the tech that prevents that.

I suspect Substack is rather on the high end of writing. Readers go to Substack because they like an author's analysis -- e.g. Scott Alexander is in no danger of being replaced with a neural net. The automation target is more mundane: news and reporting (and propaganda), marketing copy, pulp fiction / erotica, ghost writing, and other assorted filler text. Screenplays are still pretty bad, but they're on the gradient descent roller coaster now and won't stay bad for long.

In the meantime, I give you GPT-3:

Prompt: "The car has drifted a bit to the left, so"

Completion: "you should turn the steering wheel to the right to correct it."

Prompt: "The news has drifted a bit to the left, so"

Completion: "they are looking for a conservative news source that is more trustworthy and less biased."

For a while I'd thought we'd automate mundane labor first. Nobody wants to work in accommodation and food service, so why should they have to?

Instead, artists and writers are getting the first taste. General intelligence is moving fast enough that "how mentally challenging is it?" seems unlikely to be the critical factor, and we should instead look at "how hard is it to describe your job as a collection of inputs and outputs?". Which is, at least to me, mostly opaque for most industries.

So it won't be just the grunt work. The AI reaper will come for a scattershot of occupations across many social classes, with little respect for how much pride people take in their work, with little insight as to whether you'll be next.

I think there will be pushback. At least until the road to luxury space communism is made clear.

I generally like Noah Smith, but not this piece.

It seems he thinks leftist diversity is actually diverse -- obvious in retrospect, but initially quite jarring. My greatest fear of liberalism is that it will in practice turn everything into a samey globalist liberal soup. I'd rather have an archipelago of self-assorted communities than everything integrated everywhere.

Is a leftist political philosophy more likely to cause this result? Possible evidence against: a westerner who has gone somewhere "exotic" has likely gone somewhere more conservative. Perhaps I'm just pattern matching? Possible evidence for: left-aligned western culture does seem to have a more quickly evolving memeplex.

Or, maybe the left-right bitching is all smoke and mirrors and really it's the authoritarian/libertarian axis of the political compass we should care about. If that's the case I can do no more than to continue screaming into the void.

The kind of attack I'm thinking of is that SovCit arguments are meaningfully distinct from both nonsensical babble and incompetent/adversarial lawyering.

Nonsensical babble: you show up in court, get found incompetent, assigned a lawyer and removed from the courtroom or otherwise made to shut up. Easy to handle.

The differences between SovCits and very bad lawyers are less clear. Maybe you could treat them as an extreme case. I think Brooks is more likely than even the worst lawyer to invoke the Fifth Amendment when asked procedural questions, to refuse to follow the judge's instructions, and to fail to "understand" anything. If Brooks has literally thousands of bad arguments to make, what will the judge do? If he starts repeating them, is someone keeping track? Brooks is representing himself pro se; he can't be disbarred.

The sane, boring, probably correct answer is to force him to accept the public defender after a few days of frustration. But they need a process for that, which is only in place because these people have made it necessary.

The overt justification would likely be that straight white etc etc are implicitly always supported by forces inherent in "the system". Politicians should be assumed to be working in the interests of such people unless explicitly stated otherwise. If pressed, I predict the most you'd be able to get out of this woman would be some suggestion that the unfavored groups "stand aside". In contrast, BIPOC LGBTQIA2S+ etc etc are always oppressed unless explicitly stated otherwise. Or so the story goes.

Of course it's all signaling. Those white Austinites support her in rational self interest, correctly expecting that should she prevail they will receive more "liberal political beliefs, vague agnosticism, supporting gay rights, thinking guns are barbaric, eating arugula, drinking fancy bottled water, driving Priuses, ...", possibly at the cost of "conservative political beliefs, strong evangelical religious beliefs, creationism, opposing gay marriage, owning guns, eating steak...".

The enumeration in the second paragraph could be accurately replaced with "Blue Tribe", except that for some weird reason people prefer being called out specifically over efficiency in communication.

I like it. It's like chaos engineering / fuzz testing our legal system. Laws and the system that enforces them should be robust to adversarial attack, and by doing their best to gunk up legal proceedings SovCit shenanigans force these systems to stay honest.

The cost of this is that many people were injured, and one likely imprisoned, possibly because their misunderstanding of the law led them to underestimate the consequences of violating it. To the extent that this was a contributing factor to the original harm, it would outweigh the systemic health benefits. But I don't think "because I think I'll get away with it" is typically part of the decision-making process while planning to drive into a parade.

In abstract-ideal-istan, the privileges are protection and institutions necessary to establish some independent private industry. The associated responsibilities are the taxes levied on profits made while doing so. It's not a prisoner's dilemma because the individual doesn't have the option to defect in any meaningful way.

Why would a country be willing to offer this deal? Because if they don't, someone else will, and everyone will go there.

Of course, in reality you can't reduce a country to merely a legal framework and a tax rate. There are durable illegible consequences to setting up a small business, such that emigration costs more than just lost tax dollars. And cultural dilution means there is a cost associated with immigration. If utopian progressives are ignoring something, I think it is these costs. "Dissolve all borders" passes game-theoretic and economic muster, but only if you can't see past the spreadsheet.