dr_analog
top 1% of underdog fetishists

Perhaps the problem is we haven't rejected trad lifestyles enough. Need to accelerate the gay space communism more. If we graduated from polycule cohabs to polycule cohabs with kids, we could keep our depraved lifestyles while having a network of lovers around to help with the child rearing.
Before you say there's no precedent for this, allow me to bring up the Eskimo.
The Eskimo didn't have much privacy huddled together in their igloos during winter, and it came with somewhat corresponding social mores: intimate moments that couldn't be hidden, more partner sharing, seeming indifference to cuckolding, comfort with raising kids communally. Also, apparently, dedicated sex parties.
We could be missing a pretty big prize here by being insufficiently freaky with our modern ways.
Isn’t it weird that a prominent justification for making money in our society is ‘sending my kids to college’?
Which makes the Marc Andreessen post about how flat-screen TVs that cover your whole wall will soon cost $100 while a college education will cost $1,000,000 even scarier. Will we normalize taking out med-school levels of debt for all degrees, and which people/parents will be stuck paying it off for their entire lives?
Metal work is a lot harder than it looks. It's physically very tiring and you can also easily destroy yourself in a number of ways if you're not careful. I'm sure I'd immediately take up smoking and probably drinking if it was the only job left for me after programming.
Nobody seems terribly concerned about hundreds of thousands of Yemeni children dying of starvation due to the fighting there.
Isn't this a proxy war between Saudi Arabia and Iran? Whose side should we take here?
Aella is a sex worker, and she is clearly being treated like shit by Eliezer. For a man who believes doom is coming, having a kid seems, at this point, frankly illogical.
Are these two sentences connected? Do they have a kid together? Not sure I understand what you're getting at otherwise
Let me see if I understand the threat model.
1. Unaligned AGI decides humans are a problem and engineers a virus more infectious than measles, with a very long asymptomatic incubation period and 100% lethality.
2. The virus is submitted to idtdna.com with stolen payment info that the AGI hacked from elsewhere.
3. idtdna.com processes the request, builds the supervirus, and ships it somewhere.
4. ????
5. Everyone dies.
I assume you'll have a clever solution for 4.
Why do you assume the lab would synthesize any arbitrary protein? Surely they would want some confidence they're not building a dangerous supervirus?
Or are we assuming the evil AGI can submit a convincing doc with it that says it's totally benign?
Well, not just really good computational biochemistry skills? Wouldn't it also need a revolution in synbio to have access to an API where it inputs molecules and they get produced? Where would that get sent? How do you convince people to inhale it?
Aside: I expect this synbio revolution would usher in an era of corresponding print-at-home immunity, reducing the threat vector from bespoke bioweapons. I don't expect all weapons-defense x-risk to be this symmetrical; shooting down an ICBM is much, much harder than launching one, for example. I would like to be as concrete as possible about the risks, though.
Well, what about 95% of the energy of the universe being unknown to us? We call it 'dark' as though that's some kind of explanation. Something is out there and it's far more important than everything we can see. Back in the late 19th century they thought they'd discovered all the laws of nature too: Newton had gotten the job done, and there were only a few weird puzzles, like blackbody radiation and the orbit of Mercury being a bit odd. Out of those came relativity, quantum physics, radio, and so on. Our 'weird puzzle' is 95% of the universe being invisible! Either there's an immense number of aliens or there's an extremely big secret we're missing.
Is your intuition that we're just totally missing a basic fundamental truth of the universe that fits on a cocktail napkin and if only we weren't such pathetic meat sacks we'd figure it out?
Because to me this screams "computationally irreducible" and we're not going to get traction without big simulations.
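To make "computationally irreducible" concrete, here's a toy sketch in Python (my own illustration using Wolfram's stock example, Rule 30, not anything from this thread): as far as anyone knows there's no closed-form shortcut to row N of this automaton, so the only way to learn it is to actually run all N steps.

```python
# Toy example of computational irreducibility: Wolfram's Rule 30
# elementary cellular automaton. No known formula jumps straight to
# row N; you have to simulate every intermediate row.

def rule30_step(cells: list[int]) -> list[int]:
    """One Rule 30 update (zero-padded boundaries)."""
    padded = [0] + cells + [0]
    # Rule 30: new cell = left XOR (center OR right)
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

width = 63
cells = [0] * width
cells[width // 2] = 1  # single live cell in the middle

for _ in range(16):  # no way to skip ahead: each row needs the previous one
    print("".join("#" if c else "." for c in cells))
    cells = rule30_step(cells)
```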
These are mostly engineering challenges that need optimization, and AI can do that. It's already doing that: it's optimizing our chip layouts, it's used to control the plasma in fusion, and it's necessary for understanding protein folding. AI is giving us the optimizing power to keep advancing in all these fields. These fields are immensely powerful! Mastering nanoscale robotics and fusion means you can start scaling your industrial base very quickly.
I believe I agree with you here? Is there a delta from your POV with my last paragraph? (Repeated below)
I recognize an AGI that was fast and coordinated and numerous could be a dangerous adversary too, but I'd like to only focus on why we think a massive increase in IQ (or rather, g) is the big x-risk.
Okay, but - take a 110 IQ person and give them a computer that runs at a petahertz. They're not gonna become Einstein. Does this imply it's hard or impossible to get to Einstein-level intelligence? Yet Einstein existed, with 99.9% of the same genetic material, and the same genetic mechanisms, as the 110 IQ person.
Not sure I follow?
But while we're here, I present to you Terence Tao. He has a 230 IQ, which is an unbelievably off-the-charts score and pushes the whole notion of IQ testing to absurdity, and he's clearly not even slightly Godlike?
Yeah! But given that we will, in the next few hundred years, give AGI a core role in our datacenters and technological development?
Surely we'll have made a lot more progress on the interpretability and alignment problems by then? (Context: the x-riskers, like The Yud in that Lex podcast, are arguing we need to pause AI capabilities research to spend more time on interpretability, since capabilities are outpacing interpretability too drastically for their comfort.)
Yeah, perhaps it's too charitable. I remember him absolutely flubbing the Earth-in-a-jar thought experiment and wanting to shake him. I would've said "right, step one, scour their internet and learn as much as we possibly can about them without any chance of arousing suspicion. step two, figure out if there's any risk they'd destroy us and under what conditions. step three, if we're at an unacceptable risk, figure out how to take over their civilization and either hold them hostage or destroy them first. boom, done. okay, are we on the same page, Yud? great, now here's why I think this thought experiment sucks..."
Also, that discussion on steelmanning. How did it go so far off the rails?
I can't believe I still have another hour to go.
I would enjoy engaging more with the AGI x-risk doomer viewpoint. I fully agree AI narrow risks are real, and AI sentience/morality issues are important. Where my skepticism lies is when presented with this argument:
1. Human intelligence is apparently bounded by our biology.
2. Machine intelligence runs on machines, which are not bounded by biology!
3. Therefore, it may rapidly surpass our intelligence.
4. Machine intelligence may even be able to improve its own intelligence, at an exponential rate, and develop Godlike power relative to us.
5. This would be really bad if the MI was not aligned with humanity.
6. We can't (yet) prove it's aligned with humanity!
7. Panicdoom.
Where I have trouble is #2-4.
One variant of this Godlike argument I've seen (sorry if this comes across as a strawman; gaining traction on this debate is part of why I'm even asking) is that humans becoming just a little bit smarter than monkeys let us split atoms and land on the moon. Something much smarter than us might continue to discover fundamental laws about reality, and they would similarly be Gods compared to us.
The reason I don't buy it is that we've been able to augment our intelligence with computers for some time now: by moving our thinking into computers we can hold more stuff in our heads, evaluate enormous computations, have immediate recall, and go really fast. Sadly, the number of new game-changing fundamental laws of nature that have popped out of this has been approximately zero.
I believe we've picked all of the low-hanging fruit among the fundamental laws of nature, and the higher-hanging fruit just isn't so computationally reducible: to learn more about reality we'll have to simulate it, and this is going to require marshaling an enormous amount of computational resources. I'm thinking less on the scale of entire datacenters in The Dalles full of GPUs and more like something the size of the moon made of FPGAs.
Stated another way: what I think holds humanity back from doing more amazing stuff isn't that we've failed to think hard and deep and uncover more fundamental truths, and that we could do so if we were smarter. What holds us back are coordination problems and the sheer size of the hill we have to climb to harness more and bigger sources of energy and to mine progressively stronger and rarer materials.
An AGI that wanted to do game-changing stuff to us would need to climb similar hills, which is a risk but that's not really a Godlike adversary -- we'd probably notice moon-sized FPGAs being built.
I recognize an AGI that was fast and coordinated and numerous could be a dangerous adversary too, but I'd like to only focus on why we think a massive increase in IQ (or rather, g) is the big x-risk.
Lex is also a fucking moron throughout the whole conversation; he can barely even engage with Yud's thought experiments of imagining yourself as someone trapped in a box, trying to exert control over the world outside, and he brings up essentially worthless viewpoints throughout. You can see Eliezer trying to diplomatically offer suggested discussion routes, but Lex just doesn't know enough about the topic to provide any intelligent pushback or guide the audience through the actual AI safety arguments.
Did you know Lex is affiliated with MIT and is himself an AI researcher and programmer? Shocking, isn't it? There's such a huge disconnect between the questions I want asked (as a techbro myself) and what he ends up asking.
At any given time I have like 5 questions I want him to ask a guest and very often he asks none of those and instead says "what if the thing that's missing... is LOVE?!?"
To give him the benefit of the doubt, maybe he could ask those questions but avoids them to try to keep it humanities focused. No less painful to listen to.
Hmmm. I guess I don't have a solid explanation for why committees are safer. My vibe is that committees operate by consensus, which means individual weirdnesses get sanded down in the process, thus ensuring the outcome stays more firmly within bounds.
Design by committee reduces the risk of outright bad media at the cost of some of the good.
It is frankly astonishing to me how expensive some movies are and how few people are responsible for the artistic vision, even if it is a committee. It's even more astonishing when it's just left up to one producer. How is this kind of trust formed at all?
hmmm, not sure how much I want to update on advice from /u/FiveHourMarathon :P
It varies, and is built into the app I use
I'm 42 years old. Male. My weekly fitness regimen is:
- Madcow 5x5 (Mon/Wed/Fri). It's a barbell lifting program. Status: 280# squat (5 reps), 315# deadlift (5 reps), 165# bench (5 reps).
- Cardio 5.5 hours/week: 80% Zone 2 training, 20% VO2max training. Status: VO2max of 45. Can run about 6 mph in Zone 2 for about 90 minutes, maybe more.
I trust that if I keep this up I'll continue to make strength and VO2max gains, though it's been kind of slow. It feels kind of impossible to make progress on getting my VO2max up.
If I had time to add something else in, what could I do to get the biggest bang for the buck? Maybe something proven to really improve bone density? (My last DEXA showed a z-score of -1 on BMD.)
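For reference, the back-of-envelope arithmetic on that cardio split (just my own numbers from the regimen above, nothing fancy):

```python
# Weekly cardio split from the regimen above: 5.5 h/week, 80/20.
weekly_cardio_hours = 5.5
zone2_hours = weekly_cardio_hours * 0.80   # easy aerobic base
vo2max_hours = weekly_cardio_hours * 0.20  # hard intervals

print(f"Zone 2: {zone2_hours:.1f} h/week ({zone2_hours * 60:.0f} min)")
print(f"VO2max: {vo2max_hours:.1f} h/week ({vo2max_hours * 60:.0f} min)")
# Zone 2: 4.4 h/week (264 min)
# VO2max: 1.1 h/week (66 min)
```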
All else aside, I think the trend of people that didn't have any apparent drinking problem proudly announcing that they've quit drinking and feel so much better is really weird.
One of the major points in the Huberman episode he references is that even occasional drinking has long-term negative effects on mood. This was surprising to me, and it prompted me to stop drinking altogether around the same timeframe, and I can say similar things: overall a positive. It wasn't hard to do, but I'd never tried it before, as I didn't think of myself as suffering any ill effects.
I mean, I don't necessarily feel sympathy for him since he says lots of inflammatory stuff and has what looks like deranged thinking processes. But I do think if he goes down it should be over things he actually said.
I'd also say, if you don't already know that a very substantial number of black people (not all, but definitely more than a few percent) really truly do hate white people, where've you been, and have you ever really talked to black people?
The black people I consider friends don't say how much they hate white people. Biased sample perhaps!
That said, it's plausible that half of black people think it's bad to be white; I'm just not sure this survey is the one to go to production on, given the small sample and the general confusion around using a dog whistle to measure sentiment.
Scott Adams seems to have updated quite strongly on a 1,000-person poll, which included black people, over the nuance of who agreed with an apparently well-known(?) racist dog whistle.
He decides from here that black people hate white people and that white people must get away from them. Then he hurls some fairly ready-to-go insults about black people in general that I guess he was just saving for this?
He seems as... crazed... as usual here, but I do agree he's being taken out of context. He obviously feels betrayed because he thought of himself as a fierce advocate for black people (??) but learning that all black people might still have problems with all white people completely flipped him.
I'm trying to think of a fairer headline. Maybe: Dilbert creator decides black people are a hate group after reading one small poll about a racist dog whistle, cautions white people to "get the fuck away" from black people.
Wow, what country do you live in? Do you have any specific risk factors?
I'm in my 40s. My LDL-c is rising despite an absurd amount of effort on lifestyle stuff (diet/exercise). It's now at about 130. I'm trying to convince my doctor to put me on statins to reduce it, but he just reiterates lifestyle. He appears to be following guidelines. But what's the risk of lowering LDL-c anyway, in spite of the guidelines? I'm trying to understand whether he's resisting this for any reason beyond "if I deviate from guidelines and something bad happens I get sued, and I'm not going to do the research to figure out if it's worth it in your case".
Jealousy is a form of mental illness/evolutionary baggage I have, and a thought leader said it was okay not to resolve that stuff before I consider your offer to join your polycule.
I'll check out the PhD thesis but still color me skeptical. My mother would tell anyone who asked that she was happily married.
Additionally, the way I know of arranged marriages is in a cultural context that includes a high degree of honor violence. So I have kneejerk disgust feelings around the whole package.
There are probably confounders out the ass here as well. Is it that the arranged marriages are higher quality, or the fact that people who practice them are a close-knit tribe / large extended family with high support / super gung-ho religious together / not poor and closer to dynastically wealthy?
I'll come back with an EDIT if the thesis updates me.
Ironically, I assume that if you tried to get clever and farm more of your work out to ChatGPT4 behind the scenes, the regulatory regime would detect your newfound spare cycles and expand the bullshit to consume it all.