JhanicManifold

6 followers   follows 0 users
joined 2022 September 04 20:29:00 UTC

User ID: 135


Unfortunately, yes, small amounts of alcohol have a detectable effect on brain white and gray matter volume:

Specifically, alcohol intake is negatively associated with global brain volume measures, regional gray matter volumes, and white matter microstructure. Here, we show that the negative associations between alcohol intake and brain macrostructure and microstructure are already apparent in individuals consuming an average of only one to two daily alcohol units, and become stronger as alcohol intake increases.

Here "one unit" means 10 ml of ethanol. Slivovitz is 50% ABV, so you're drinking 350 ml of ethanol per 14 days = 2.5 alcohol units per day. The paper I linked has a bunch of interesting figures (fig 3 in particular is nice), and they provide this useful comparison:

For illustration, the effect associated with a change from one to two daily alcohol units is equivalent to the effect of aging 2 years (or 1.7 years in the model that excludes individuals who consume a high level of alcohol), where the increase from two to three daily units is equivalent to aging 3.5 years (or 2.9 years in the model that excludes individuals who consume a high level of alcohol).

Going from 0 to 1 daily units doesn't have any measurable effect in that study, but going from 1 to 2 and from 2 to 3 does. So your 2.5 units/day is equivalent to an aging-related decrease in brain volume of around 3.75 years. Not world-ending in any sense, but still not nothing.
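The unit arithmetic above can be double-checked with a quick sketch (the 700 ml bottle over two weeks is an assumption inferred back from the 350 ml ethanol figure):

```python
# Rough alcohol-unit arithmetic from the comment above.
# Assumption: a standard 700 ml bottle of slivovitz drunk over 14 days.
bottle_ml = 700   # assumed bottle size
abv = 0.50        # slivovitz is ~50% alcohol by volume
days = 14

ethanol_ml = bottle_ml * abv             # 350 ml of pure ethanol
units_per_day = ethanol_ml / 10 / days   # 1 UK unit = 10 ml ethanol

print(units_per_day)  # 2.5
```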

(I'm slightly confused by the study, since presumably the effects should depend on how long you've been drinking alcohol, and not just provide a flat decrease in brain matter, but I'm not seeing any such effect reported in the paper)

This part never usually pans out lol.

Oh it did for me, I still remember her reaction when she saw me for the first time in like 6 months: * looks at me, does a double take, eyes widen, face becomes fully red, furtive looks the whole evening *. I actually feel a little bit guilty about just how good revenge feels.

You got injured doing goddamn Kegels?! How is that even possible?

Not injuring yourself in the gym is pretty easy:

  1. warm up with 5 to 10 minutes of cardio

  2. Don't do weights close to your one-rep max

  3. Slowly increase your total weekly volume to give your tendons time to adjust

  4. Don't improvise in the gym; instead, have a pre-planned routine with exact weights on an Excel sheet that you stick to

  5. If you're doing complicated movement patterns like squats and deadlifts, make sure your form is correct

Get a meat thermometer and grill to 170F internal temperature. I mix 4 eggs, 2 cups panko bread crumbs, 1.3kg of extra lean beef, 3 tbsp smoked paprika, 2 tbsp garlic salt and 2 tbsp crushed oregano together into the burger paste, and they turn out great every time if grilled at around 450F. Spices don't burn if you mix them into the meat itself.

What GPT does is predict the next token. That's a simple statement with a great deal of complexity underlying it.

At least, that's the Outer Objective; it's the equivalent of saying that humans are maximising inclusive genetic fitness, which is false if you look at the inner planning process of most humans. And just as evolution has endowed us with motivations and goals that come close enough to maximising its objective in the ancestral environment, so GPT-4 is endowed with unknown goals and cognition that are pretty good at maximising the log probability it assigns to the next word, but not perfect.

GPT-4 is almost certainly not doing reasoning like "What is the most likely next word among the documents on the internet pre-2021 that the filtering process of the OpenAI team would have included in my dataset?"; it probably has a bunch of heuristic "goals" that get close enough to maximising the objective, just like humans have heuristic goals like sex, power, and social status that get close enough for the ancestral environment, but no explicit planning for lots of kids, and certainly no explicit planning for paying protein-synthesis labs to produce their DNA by the buckets.
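The outer objective being described is just next-token cross-entropy: maximise the log probability the model assigns to the token that actually comes next. A minimal toy sketch (the distribution here is invented, not a real model's output):

```python
import math

# Minimal sketch of the next-token outer objective: the training loss
# is the negative log probability assigned to the actual next token.
# Toy hand-written distribution, not a real language model.
def next_token_loss(predicted_probs, actual_next_token):
    # predicted_probs: dict mapping candidate token -> probability
    return -math.log(predicted_probs[actual_next_token])

probs = {"cat": 0.7, "dog": 0.2, "fish": 0.1}  # toy model output
loss = next_token_loss(probs, "cat")
print(round(loss, 4))  # 0.3567
```

The point in the text is that being trained to minimise this loss does not mean the resulting system internally represents or pursues this objective, any more than humans internally pursue inclusive genetic fitness.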

There are a few ways that GPT-6 or 7 could end humanity, the easiest of which is by massively accelerating progress in more agentic forms of AI like Reinforcement Learning, which has the "King Midas" problem of value alignment. See this comment of mine for a semi-technical argument for why a very powerful AI based on "agentic" methods would be incredibly dangerous.

Of course the actual mechanism for killing all humanity is probably like a super-virus with an incredibly long incubation period, high infectivity and high death rate. You can produce such a virus with literally only an internet connection by sending the proper DNA sequence to a Protein Synthesis lab, then having it shipped to some guy you pay/manipulate on the darknet and have him mix the powders he receives in the mail in some water, kickstarting the whole epidemic, or pretend to be an attractive woman (with deepfakes and voice synthesis) and just have that done for free.

GPT-6 itself might be very dangerous on its own, given that we don't actually know what goals are instantiated inside the agent. It's trained to predict the next word in the same way that humans are "trained" by evolution to replicate their genes, the end result of which is that we care about sex and our kids, but we don't actually literally care about maximally replicating our genes, otherwise sperm banks would be a lot more popular. The worry is that GPT-6 will not actually have the exact goal of predicting the next word, but like a funhouse-mirror version of that, which might be very dangerous if it gets to very high capability.

Why the hell would a dictatorial régime do a false-flag revolution? They risk making people believe that protesting the government actually won't get you killed, so lots of normal people will join the false protests... turning them into real protests. The number 1 rule for a dictator is to prevent the creation of common knowledge about how many people don't like you. Any appearance of large-scale protests is incredibly dangerous to this end.

yes, you will have to lie for those conversations, or say stuff like "only pain will come out of this discussion, I don't want to know your past, and you don't want to know mine". Also increase your SMV so that no one would actually expect you to be a virgin. Go to the gym, get good clothes, haircut, etc. etc.

Or:

Her: "Do you have a girlfriend?"

You: "well, it kind of depends on your definition... but I don't kiss-and-tell *smirk*"

You haven't exactly said you did or didn't have a girlfriend, and now you're letting her imagine wild scenarios on her own. Mystery is always more useful than just saying the truth.

I've had some success with this on university discord servers, there are people on there who appear less woke than I appear, but I seem to get more people interacting with me, and therefore I get more woke people hanging themselves with the rope I give them. I think that any forum with real names requires this sort of caution if we're gonna talk politics, anything less than this might carry an unacceptable chance of a bad outcome.

Why do you think that? This combination of features would be selected against in evolutionary terms, so it's not like we have evidence from either evolution or from humans attempting to make such a virus and failing at it. As far as I can see, no optimization process has ever attempted to make such a virus.

Yeah the pulsing patterns seem very specific, and are probably the entire technical moat of the company. The way it works is that there are a variety of "programs" on the app, so you have a "stress program" that lasts 15 min, which starts with short, intense pulses that get quicker and quicker, then you have a "calm program", a "sleep program", etc. The device modulates both the intensity and the frequency of the haptics over time depending on the program you chose.
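As an illustration only, a "program" of this kind can be modeled as intensity and pulse-frequency curves over the program's duration. The specific curves below are invented for the sketch; the company's actual patterns are, as noted, their proprietary moat:

```python
# Toy sketch of a haptic "program": intensity and pulse frequency as
# functions of elapsed time. The curve shapes are invented assumptions,
# not the real app's patterns.
def stress_program(t, duration=15 * 60):
    # pulses get quicker over the 15-minute program while intensity tapers
    progress = min(t / duration, 1.0)
    intensity = 1.0 - 0.5 * progress      # 1.0 -> 0.5
    frequency_hz = 1.0 + 3.0 * progress   # 1 Hz -> 4 Hz
    return intensity, frequency_hz

print(stress_program(0))        # (1.0, 1.0)
print(stress_program(15 * 60))  # (0.5, 4.0)
```

A "calm" or "sleep" program would just be a different pair of curves over a different duration.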

Short answer: it depends on how much cardio you're doing already. Cardio contributes to systemic fatigue, and too much of it will reduce your NEAT (Non-Exercise-Activity-Thermogenesis), basically you'll fidget less and be less inclined to take the stairs instead of the elevator, which will have a net-negative effect on caloric balance.

Very relevant video: Does More Cardio Equal More Weight Lost?

(and I'm very surprised by your 100cal per 2 miles walking number, I use this calculator for estimating walking calories, which gives me much higher numbers)
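For what it's worth, a standard back-of-the-envelope MET-based formula also lands well above 100 kcal for 2 miles (the MET value and the 70 kg body weight below are illustrative assumptions, not taken from the linked calculator):

```python
# Rough MET-based walking calorie estimate:
# kcal ~= MET * body mass (kg) * duration (hours).
# MET ~3.3 is a typical value for walking at ~3 mph; weight is assumed.
def walking_calories(weight_kg, miles, speed_mph=3.0, met=3.3):
    hours = miles / speed_mph
    return met * weight_kg * hours

print(round(walking_calories(70, 2)))  # 154 kcal for 2 miles at 3 mph
```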

Pinging @JhanicManifold.

Lol, I guess I have been writing a lot about how to get various psychedelics, peptides, and semaglutide.

I should really make a giant "Nerd's Guide to Getting Ripped & Shredded" as a post that we could always link for these questions; rehashing everything every Wednesday seems like a bit of a waste of time.

No, literally just free physical letters to everyone who requests it, like all banks do. Twitter should have more than enough money to do this for all US users.

Doesn't Hamas put bases under hospitals specifically because of this? The two options are either to never bomb hospitals and hence to accept Hamas as the leader in-perpetuity of the region, or to give every available warning to the population to evacuate and then bomb the terrorist base...

My god, can you imagine the drama inside that tiny ship over the past days? I think I'd bet at 90% that the CEO is already long dead, killed by the 4 others in order to save oxygen. Two of the people are a father-son duo, and in a power struggle they might have killed the others too, knowing that they can only trust family. I really hope they find that thing so we get to know what actually happened.

If you took a 200 IQ big-brain genius, cut off his arms and legs, blinded him, and then tossed him in a piranha tank I don't think he would MacGyver his way out.

I fully agree for a 200 IQ AI, I think AI safety people in general underestimate the difficulty that being boxed imposes on you, especially if the supervisors of the box have complete read access and reset-access to your brain. However, if instead of the 200 IQ genius, you get something like a full civilization made of Von Neumann geniuses, thinking at 1000x human-speed (like GPT does) trapped in the box, would you be so sure in that case? While the 200 IQ genius is not smart enough to directly kill humanity or escape a strong box, it is certainly smart enough to deceive its operators about its true intentions and potentially make plans to improve itself.

But discussions of box-evasion have become kind of redundant, since none of the big players seem to have hesitated even a little bit to directly connect GPT to the internet...

that firing white people to bring in a minority is still racism, but still believes in critical race theory and think that Jonathan Haidt and John McWhorter are idiots that are worth mocking immediately.

I think that this is a mostly empty category at this point.

What are some diplomatic skills I can develop to argue in such a way that a) I do not put myself in a position where I get easily accused (with onlookers being persuaded of those accusations), b) attention of onlookers remains on the bailey (not motte, the accusations), c) I politely reveal the idiocy of the positions of my woke interlocutors?

You have to hide your arguments with the language style and concerns of the woke, absolutely never show any type of anger, show complete apparent sincerity and good-will, and never actually admit to holding some positions like being pro-gun or anti-abortion, but merely say that you have some close family members (or friends) who hold those positions, and that after talking with them you didn't think their arguments were as stupid as you expected (you were not convinced by them, of course, but merely impressed by the arguments). Couch everything in a desire to simply make the woke's arguments better by seeing how they stand up to scrutiny. At the end of every exchange where you "try to make the woke arguments better", you have to end up saying that the woke are right, and just hope that the spectators see that the arguments against are really much better than those for the woke, even if you appeared to concede to wokeness in the end.

Another argumentative weapon is to have the "how do we convince conservatives?" frame, where you show people the anti-woke arguments in an attempt to see how the woke POV would address them, for the purpose of convincing conservatives. Couch everything in compassion, conservatives are people too, despite their "horrible viewpoints", and we should aim to rebuke their arguments. The goal is not to be explicitly anti-woke, but just to expose people to anti-woke arguments which they've never seen.

All that said... arguing about politics in public under your real name is almost never worth it.

I think I share a common preference among men that I'd rarely pass up on a hookup with an attractive woman but would probably not date a woman long-term who has slept around too much

I'd pass on even a hookup with an attractive woman who has had too many partners. Some character traits or behaviours lower a woman's attractiveness so much that she just drops below a critical level for me. For instance, if I see a woman being cruel to a child, she could look like Emily Ratajkowski, and I still wouldn't want to fuck her (or maybe at that point it wraps back around to hate-fucking, I'm not sure)

But yes, I think that casual sex is unethical, because "casual sex" is for men what "friendzoned orbiters" is for women. In both cases only one party gets most of what they want: sex for men, emotional intimacy for women. In most real cases of friendzoned guys and of girls having casual sex, no one is making it clear that the relationship has no chance of going further; both of these situations are fundamentally consequences of power imbalances.

"You will not be punished for your sins, you will be punished by them"

Conversely, a good deed is its own reward, and a good conscience can really bring a lot of pleasure intrinsic to it.

Do you think the COVID vaccine will literally take 5 years off most people's lives? There have been semaglutide weight-loss studies going up to like 24 months without adverse effects, and weaker stuff in the same GLP-1 class like liraglutide has been used for years. We might find negative effects later on, but in general, stuff that doesn't have massive negative effects in the medium term won't suddenly get massive negative effects in the long term.

And regardless of this, if any negative effects happen in 30 years, I fully expect future AI medicine to make them completely trivial.

Semaglutide works really, really fucking well.

Here's gpt-4's answer, which isn't bad all things considered; not especially out-of-the-box, but it seems fairly competent to me. Though of course the implementation details are where the real problem lies.

/images/16846790675818799.webp

Okay, but the Terminators themselves look silly. Why would a superintelligent AI build robot skeletons when it could just build drones to kill everyone?

Nah, a superintelligence would more probably build a virus (or multiple different ones, to make sure really no one survives) with a built-in clock, so that everyone gets infected without showing any symptoms, then suddenly everyone dies in the same day and no valuable infrastructure is destroyed. The fact that humans became aware of Skynet in the first place is the most unrealistic thing to me, surprise is the biggest advantage against an intelligent adversary, and a superintelligence who is carrying out a human-extinction plan would never reveal itself at all, and especially not in such a visible way as literal walking robots. In the real world we would all die without having any idea what happened.

If a good friend asked this of me in an apologetic way, emphasizing that they wouldn't ask if it wasn't important, sure, I'd call them whatever they want.