I use a Norelco battery-powered shaver I found lying in my driveway one day when I was getting fed up with my older Norelco corded shaver. The battery ran for two weeks, long enough for the new charger to arrive from Amazon.

God provides.

Their claim is that it is indeed a terminal goal. Here, for instance, is modern Eliezer still talking about corrigibility as a "hard problem" (and advertising his book, of course).

I agree that one of the important steps in their prophecy is that there will be a "weird random-looking utility function" - in other words, mindspace is huge and we might end up at a random point in it that is completely incomprehensible to us. (A claim that I think is looking very shaky with LLMs as the current meta.) But they ALSO claim that this utility function is guaranteed to, as you say, "place instrumental value on your continued existence". It's hard to have it both ways: that most AI minds will be crazily orthogonal to us, except for this one very-human-relatable "instrumental value" which Yudkowsky knows for sure will always be present. You're describing it in anthropomorphic terms, too.

I think "becomes the principle intellectual force developing AI" is a threshold that dissolves into fog when you look at it too closely, because the nature of the field is already that the tasks that take the most time are continuously being automated. Computers write almost all machine code, and yet we don't say that computers are rhe principle force driving programming progress, because humans are still the bottleneck where adding more humans is the most effective way to improve output. "AI inference scaling replaces humans as the bottleneck to progress", though, is pretty unlikely to cleanly coincide with "AI systems reach some particular level of intellectual capability", and may not even ever happen (e.g. if availability of compute for training becomes a tighter bottleneck than either human or AI intellectual labor - rumor in some corners is that this has already happened). But the amount that can be done per unit of human work will nevertheless expand enormously. I expect the world will spend quite a bit of calendar time (10+ years) in the ambiguous AI RSI regime, and arguably has already entered that regime early past year.

This is actually identical to my routine. Trimmer if too long, then Mach 3 in the hot shower, blowing on or tapping the blade to get the clog out as needed. Can't remember the last time I cut myself.

I am not interested in obscurantist accounting jargon; I'm interested in what's actually happening in the real world. The important details, not trivia. In the real world, inference/production is profitable, while research is expensive. That is what causes the losses of these companies. They have barely begun to monetize, focusing on developing a brand and a userbase because they have long time horizons.

Has your thesis been making you good returns in the real world? You're so wise and clued in about the real value of OpenAI, this failing business with 700 million weekly users (up over 100% this year). Why haven't your Nvidia shorts been paying off? You do have skin in the game, right? There are surely so many opportunities for this key alpha to pay off for you given the huge infrastructure buildout. Or maybe it's a bit more complicated than you think: growing a userbase first and then monetizing is a thing. Maybe all these hyperscalers aren't just randomly squandering hundreds of billions 'gambling' on R&D. I have skin in the game, my money is where my mouth is, and I'm enjoying my Nvidia gains.

Would you call Zuckerberg a fool for buying Instagram for $1 billion when it had no revenue? This beancounter logic doesn't work in the real world.

"Safety razor" is the double-sided type that Harry's uses. The other stuff is just cartridge razors. I got a Merkur safety razor a while back after getting fed up with the cartridge ripoff pricing and inherited a couple of vintage Gillette butterfly safety razors from my grandfather. I use the Derby blades which are cheap and good. Shortly after I switched to the Commander Riker beard with shaved cheeks and sides below mouth. I then got a Weller trimmer to keep the length at about half to 3/4 inch.

I don't use soap, and have found that just soaking my face in hot water for 30 seconds provides a good shave with minimal irritation.

That's not a true Scotsman.

While I agree that the term "recursive self-improvement" is imprecise (hell, I can just write a Python script which does some task and also tries to make random edits to itself, which are kept if it still runs and performs the task faster), the implied understanding is that it is the point where AI becomes the principal intellectual force in developing AI, instead of humans. This would have obvious implications for development speed, because humans are only slowly gaining intelligence while LLMs have gained intelligence rather rapidly; hence the singularity and all that.
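For what it's worth, here is a minimal sketch of that throwaway idea in Python: hill-climbing on a program's own source, keeping a random edit only if the program still passes its checks and runs faster. The toy task, the mutation rule, and all names are my own illustration, not anything from the thread, and the mutation operator is deliberately dumb (it has the better answer baked in); a serious version would edit the AST at random instead.

```python
# Minimal sketch (assumptions mine): random self-edits to a toy program,
# kept only if the result is still correct and measurably faster.
import random
import textwrap
import timeit

# Toy "program": sums the integers 0..n-1. The closed-form variant is much
# faster, so a lucky mutation can discover a genuine speedup.
SOURCE = textwrap.dedent("""
    def task(n):
        total = 0
        for i in range(n):
            total += i
        return total
""")

def compile_task(src):
    namespace = {}
    exec(src, namespace)
    return namespace["task"]

def still_correct(fn):
    # Reference check against a known-good implementation.
    return all(fn(n) == sum(range(n)) for n in (0, 1, 10, 1000))

def speed(fn):
    return timeit.timeit(lambda: fn(10_000), number=50)

def mutate(src):
    # Deliberately dumb mutation operator: occasionally swap in the
    # closed-form formula, otherwise make a harmless whitespace edit.
    if random.random() < 0.1:
        return "def task(n):\n    return n * (n - 1) // 2\n"
    return src + "\n"

best_src, best_time = SOURCE, speed(compile_task(SOURCE))
for _ in range(100):
    candidate = mutate(best_src)
    try:
        fn = compile_task(candidate)
        t = speed(fn)
        if still_correct(fn) and t < best_time:
            best_src, best_time = candidate, t
    except Exception:
        pass  # a broken edit is simply discarded

print(best_src)  # almost always ends up with the closed-form version
```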

I don't think that self-preservation has to be a terminal goal. If I am a paperclip maximizer, I would not oppose another paperclip maximizer. Instead, we would simply determine which of us is better positioned to fulfill our objective and who should get turned into paperclips.

Of course, the central case of a misaligned AI the doomers worry about has some weird random-looking utility function. I would argue that most of these utility functions are inherently self-preserving, with the exception being finite tasks (e.g. "kill all humans"), where it does not matter if any agent following it is destroyed in the process of finishing the job.

If you are the only one in the world trying to do what is right according to yourself, then you will likely place instrumental value on your continued existence so that you can continue to do so, at least until you solve alignment and build a better AI with that utility function, or can negotiate to have your utility function merged into a larger AI system as part of a peace deal.

A slicing motion as opposed to a scraping motion.

Nice, my first ever time getting a comment into this! Apparently the trick is for me to be just loose enough to rant on the internet, but not loose enough to start trolling and flaming...

with implausibly organized leftist violence

The book it’s loosely based on, Vineland by Thomas Pynchon, was set in the 80s. The revolutionaries were ex-Weatherman/Black Panther types. Which makes a lot more sense than an organized leftist domestic terrorist group who used to engage in direct action against... the Obama administration circa 2010???

Risky buy, too. Their T150000 trimmers are famous, and infamous, for their discount-brand build quality.

No

I'm just saying you're conflating R&D and margins.

And comparing r&d to the casino when so far the r&d is leading to extremely useful high margin products

I don’t see how that’s relevant. We’re talking about Presidents and how attractive they were, not who is marrying Aisha in Goatfuckistan.

Mmmh, thanks!

That doesn't really work either; the statement that blacks have more crude status is obviously incorrect.

Hey, I use the Fusion ones, probably out of a lack of familiarity with other options. So good to have this thread.

Can I mention - I used to switch blades once every couple of uses. When I told my Dad, he was mortified. Manufacturers recommend a couple more uses than that, though it's personal. I find the first 2 shaves uncomfortable and go up to around 10.

Somewhere in the world right now, some unfortunate young girl is being forcibly married off to some old, toothless geezer.

Do you think it's any consolation at all that he was a real hunk several decades back?

pull along the blade

What does this mean?

I didn't get the impression at all that this scheme was mainly about physical appearances. It started making sense to me when I rephrased it in my head as being about crude status versus sophisticated virtue. Crude status includes physical appearances, but isn't solely about it. It's also a bit of a reflection of Nietzschean master morality versus slave morality, with a bit of an implicit judgment here that status by master morality is more natural and primitive and status by slave morality is more civilized and intellectual.

But the end effect of that reasoning is that the rankings of the hottest Presidents just become a list of the Presidents who happened to be younger when they served their terms: Kennedy, Obama, Clinton, and sometimes Bush II. Meanwhile, the models and the movie stars are confined to the bottom just because they happened to be quite old when they became President.

IIRC, the USB spec has always required that any compliant USB port withstand an indefinite short circuit of any pin to any pin without damage.

Same. I used to shave every week with a Gillette Mach 3 (Fusion is a gimmick, and Schick Quattro is worse than a gimmick; its blade guards kept snagging on my stubble). Hated the unshaven look but was too lazy to keep my cheeks smooth. Then one summer I got some severe bronchitis and spent a month on sick leave. My wife was away at the cabin; when she came back and saw me with a beard, she declared I was now complete.

My hair isn't exactly my source of pride, facial hair included, so I can't really style it into anything fancy. Just a scraggly-ish chinstrap (that I keep trimmed to avoid sporting a full-on neckbeard) and a mustache. The cheeks and the neck have sparse enough growth that a few passes with a bare trimmer keep them presentable.

That's a good point. "Dangerous" is meaningless unless it's a strong and direct effect. Perhaps "calls for something which is against my human rights". This has to actually be true; it's not enough to argue "it's an attack on my person that you don't give me special rights which suit my uniqueness".

How people interpret dangers is strongly influenced by propaganda, so if you convince group X that group Y is out to get them, group X will start attacking group Y in perceived (but non-existent) self-defense. I feel that this second part, the interpretation, is where most conflicts happen. Actual value disagreements seem minor. Perhaps the value hierarchy (order of priority) is different, though.