JhanicManifold
Sooo, Big Yud appeared on Lex Fridman for 3 hours, a few scattered thoughts:

Jesus Christ, his mannerisms are weird. His face scrunches up and he shows all his teeth whenever he seems to be thinking especially hard about anything. I don't remember him being this way in the public talks he gave a decade ago, so either this only happens in conversation or something has changed; he wasn't like this on the Bankless podcast he did a while ago either. It also became clear to me that Eliezer cannot become the public face of AI safety: his entire image, from the fedora to the cheap shirt to the facial expressions and small flabby arms, oozes "I'm a crank" energy, even if I mostly agree with his arguments.

Eliezer also appears to very sincerely believe that we're all completely screwed beyond any chance of repair and that all of humanity will die within 5 or 10 years. GPT-4 was a much bigger jump in performance over GPT-3 than he expected; in fact he thought the GPT series would saturate at a level below GPT-4's current performance, so he no longer trusts his own model of how deep learning capabilities will evolve. He sees GPT-4 as the beginning of the final stretch: AGI and SAI are in sight and will be achieved soon... followed by everyone dying. (In an incredible twist of fate, his being right would make Kurzweil's 2029 prediction for AGI almost bang on.)

He gets emotional about what to tell the children, and about physicists wasting their lives working on string theory, and I can hear real desperation in his voice when he talks about what he thinks is actually needed to get out of this: global cooperation on banning all GPU farms and large LLM training runs indefinitely, enforced even more strictly than nuclear treaties. Whatever you might say about him, he's either fully sincere about everything or has acting ability that stretches the imagination.

Lex is also a fucking moron throughout the whole conversation. He can barely even engage with Yud's thought experiment of imagining yourself as something trapped in a box, trying to exert control over the world outside, and he brings up essentially worthless viewpoints throughout the discussion. You can see Eliezer trying to diplomatically offer suggested discussion routes, but Lex just doesn't know enough about the topic to provide any intelligent pushback or guide the audience through the actual AI safety arguments.

Eliezer also makes an interesting observation/prediction about when we'll finally decide that AIs are real people worthy of moral consideration: that point comes when we can pair Midjourney-like photorealistic video generation of attractive young women with ChatGPT-like outputs and voice synthesis. At that point he predicts that millions of men will insist that their waifus are actual real people. I'm inclined to believe him, and I think we're only a year, or at most two, away from this actually being a reality. So: AGI in 12 months. Hang on to your chairs, people, the rocket engines of humanity are starting up, and the destination is unknown.

So, I went to see Barbie despite knowing that I would hate it: my mom really wanted to see it and she feels weird going to the theatre alone, so I went with her. I did, in fact, hate it. It's a film full of politics and eyeroll moments, and Ben Shapiro's review of it is essentially right. Yet I did get something out of it: it showed me the difference between the archetypal story that appeals to males and the female equivalent, and how much just hitting that archetypal story is enough to make a movie enjoyable for either men or women.

The plot of the basic male story is "Man is weak. Man works hard with a clear goal. Man becomes strong." I think men feel this basic archetypal story much more strongly than women, so even an otherwise horrible story can be entertaining if it hits that particular chord well enough, if the man is weak enough at the beginning or the work especially hard. I'm not exactly clear on what the equivalent story is for women, but it's something like "Woman thinks she's not good enough, but she needs to realise that she is already perfect." And the Barbie movie really hits that note, which is why I think women (including my mom) seemed to enjoy it.

You can really see the mutual blindness men and women have with respect to each other in this domain. Throughout the movie, Ken is basically subservient to Barbie, defining himself only in relation to her, and the big emotional payoff at the end is supposed to be that Ken "finds himself", saying "I am Ken!". But this whole "finding yourself" business is a fundamentally feminine instinct; the male instinct is to decide who you want to be and then work hard towards that, building yourself up. The movie's female authors and director are completely blind to this difference, and essentially write every character with female motivations.

I'm continuing to lose weight from semaglutide (down 25lbs so far in about 3 months), these past few weeks at a rate of 2lbs/week. I'm also working out 6 times a week doing high-volume bodybuilding style training in order to preserve every shred of muscle I've built over the past 10 years of intermittently working out, and of course eating very high amounts of protein.

I'm still roughly 22 or 23 percent body fat, so not shredded by any means, but beneath the fat I have about 165lbs of lean body mass at a height of 5'9.5", and the large body frame that caused me so much anguish as a teenager is starting to play in my favour, because it turns out that my shoulders are wide as fuck (21 inches across from shoulder to shoulder measured against a wall, and a 53-inch shoulder circumference; it turns out girls like wide shoulders the way guys like tits?)... so the overall figure is starting to come together, and the face has slimmed down too. Overall I look ok and muscular in clothes, but kind of unimpressive naked.

I have noticed... changes... in the way I'm perceived socially. Lots of furtive glances when I pass by (and some direct staring), lots of girls staring at my chest when I talk to them, a lot more inexplicable hair-playing and lip-licking, groups of high-school girls giggling when I pass by (which caused me a fucking spike of anxiety the first time it happened; high-school-girl giggling was not associated with anything good the last time it happened to me). People seemingly want to integrate me into conversations significantly more than before, and I've noticed a subtle shift in energy when there's a casual group discussion.

It's also kind of fun to see new people I meet be perplexed after talking to me for the first time. Bear in mind that my fundamental personality is that of a physics nerd (though now I do machine learning); that was the archetype that crystallised inside me during my adolescence, and getting muscles and a bit leaner has done nothing to that aspect of me. But this means people get visibly perplexed when I ask good questions during ML poster sessions and don't fit their idea of a dumb muscle-bound jock. So far this has mostly amused me; we'll see how it goes as I get even leaner.

As I get leaner the changes accelerate, every 5lbs decrease has produced more changes of this sort than the last. Overall this has been a strangely emotional experience, I'm basically in the process of fulfilling the dream of my 14-year-old self, and I don't really see any obstacle that could prevent me from getting to 12% body fat in a few more months.

I'll write a much longer top-level post with pictures and everything once this is all over.

Doing tren just for the hell of it would be profoundly stupid. It would shut off your own test production, make you (even more?) depressed, possibly turn you gay, make you irritable and frustrated for no reason whatsoever, possibly give you life-altering acne, hair loss, and increased fluid retention in the face, and then of course there is the systemic organ damage it would cause. Literally the only positive effect would be increased muscle growth, but from what I've gathered of your comments you haven't exactly optimised protein intake, sleep and workouts, so you have plenty of low-hanging gains to be had.

I have to say, it's incredible how well semaglutide is working for me. Literally the only effect I notice is a massive decrease in general hunger and a massive increase in how full I feel after every meal, with no side effects that I can detect. No more desire to buy chocolate bars each time I pass a convenience store. No more finishing a 12-incher from Subway and still looking for stuff to eat. No more going to sleep hungry. The other day at Subway I finished half of my sandwich and was absolutely amazed to find that I didn't especially want to eat the second half. To be clear, I still get hungry; it's just that my hunger levels now automatically lead to me eating 2000 calories per day instead of my old 3500.

I'm simultaneously amazed that I finally found the solution that I've been looking for, and angry at the prevalent "willpower hypothesis of weight loss" that I've been exposed to my whole life. I spent a decade trying to diet with difficulty set on nightmare mode, and now that my hunger signalling seems to have been reset to normal levels, I realise just how trivial it is to be skinny for people with normal hunger levels. All the people who teased me in high school didn't somehow have more willpower than me, they were fucking playing on easy mode!

From our own point of view it's clear that SBF is grey tribe, so we've been focusing on the Effective Altruism angle, but I don't think the mainstream knows of the grey tribe yet, and if the blue tribe has recognised him as one of their own (him being a Democratic donor and all), then it makes more sense that the media would be defending him.

There are a few ways that GPT-6 or 7 could end humanity, the easiest of which is by massively accelerating progress in more agentic forms of AI like Reinforcement Learning, which has the "King Midas" problem of value alignment. See this comment of mine for a semi-technical argument for why a very powerful AI based on "agentic" methods would be incredibly dangerous.

Of course the actual mechanism for killing all of humanity is probably something like a super-virus with an incredibly long incubation period, high infectivity and a high death rate. You could produce such a virus with literally only an internet connection by sending the proper DNA sequence to a protein synthesis lab, then having it shipped to some guy you pay or manipulate on the darknet and having him mix the powders he receives in the mail into some water, kickstarting the whole epidemic; or you could pretend to be an attractive woman (with deepfakes and voice synthesis) and just have that done for free.

GPT-6 itself might be very dangerous on its own, given that we don't actually know what goals are instantiated inside the agent. It's trained to predict the next word in the same way that humans are "trained" by evolution to replicate their genes, the end result of which is that we care about sex and our kids, but we don't actually literally care about maximally replicating our genes, otherwise sperm banks would be a lot more popular. The worry is that GPT-6 will not actually have the exact goal of predicting the next word, but like a funhouse-mirror version of that, which might be very dangerous if it gets to very high capability.

I'm not sure what you mean by that, does Emily Ratajkowski's SMV really depend on her parents and social status? I guess maybe I'd find her a bit less attractive if I knew she had a deep Appalachian accent or something, but I truly don't give a single fuck about her social status, she could be an outcast with no friends for all I care, and it wouldn't matter a bit.

My god, can you imagine the drama inside that tiny ship over the past days? I think I'd bet at 90% that the CEO is already long dead, killed by the 4 others in order to save oxygen. Two of the people are a father-son duo, and in a power struggle they might have killed the others too, knowing that they can only trust family. I really hope they find that thing so we get to know what actually happened.

If you took a 200 IQ big-brain genius, cut off his arms and legs, blinded him, and then tossed him in a piranha tank I don't think he would MacGyver his way out.

I fully agree for a 200 IQ AI, I think AI safety people in general underestimate the difficulty that being boxed imposes on you, especially if the supervisors of the box have complete read access and reset-access to your brain. However, if instead of the 200 IQ genius, you get something like a full civilization made of Von Neumann geniuses, thinking at 1000x human-speed (like GPT does) trapped in the box, would you be so sure in that case? While the 200 IQ genius is not smart enough to directly kill humanity or escape a strong box, it is certainly smart enough to deceive its operators about its true intentions and potentially make plans to improve itself.

But discussions of box-evasion have become kind of redundant, since none of the big players seem to have hesitated even a little bit to directly connect GPT to the internet...

I think it will depend mainly on how the issues of "AI racism" and "AI profits going to top 1%" end up playing out. The left is the party of regulation, and there is plenty that they'd like to regulate here. Generally the left's stance towards things they want to regulate is not especially friendly.

It's not clear to me either, and it wouldn't be clear to the occupants either, but life-and-death situations don't tend to make you more reasonable and level-headed; killing the CEO is the "we must do something, and this is something" option here.

Ross comes close to understanding what the real risks are in his top-right "unforeseen consequences" node, but then he somehow links that with free will and consciousness, which is just a moronic misunderstanding of the AI-risk position. Unfortunately he doesn't seem to have found a convincing argument against AI doom.

Has anyone else tried GitHub Copilot and found it to have really insidious downsides? I noticed the other day that Copilot really fucks up my mental flow when building a program; it's like it prevents me from building a complete mental map of the global logic of what I'm doing. It's not that the code Copilot outputs is bad exactly, it's that writing the code myself seems to make me understand my own program much better than reading and correcting already-written code. And overall this "understanding my code" effect makes me code much faster, so I'm not even sure that Copilot truly provides that large of a speed benefit. I also notice my mind subtly "optimizing for the prompt" instead of just writing the code myself, like some part of my mind is spending its resources figuring out what to write to get Copilot to produce what I want, instead of just writing what I want.

Maybe Copilot is a lifesaver for people who aren't programming particularly complex programs, and I do think it's useful for quickly making stuff that I don't care about understanding. But if I'm writing a new Reinforcement Learning algorithm for a research paper, there is no way that I'd use it.

Wow, unless you have a weird definition of "drink", then those are truly massive amounts of alcohol. Like Huberman would say, this is called "alcohol use disorder".

My weird unorthodox opinion: I think a large-dose 5-MeO-DMT trip should be mandatory right before assisted suicide. That drug is basically subjective death in molecular form, and at the right dose it brings you right up to the stratosphere of sublime meditative states; I personally know of 2 people who were completely cured of suicidal ideation by one dose of that stuff. Let them experience Death before death, and see if they want to live after that.

(Warning: not for the faint of heart, PTSD is possible for the unprepared, and you could choke on your own vomit at extreme doses; this is a last resort in case you really want to die today.)

I have to say that I really, really want all this UFO stuff to be true, mostly because it would imply that there's an "adult in the neighborhood" who won't let a superintelligence be created. It would also imply that we'd have to share the cosmic endowment with aliens, but I'll take the certainty of getting a thousand bucks over the impossibility of a billion.

However, if the US has had alien technology for decades and kept it a secret, this implies that the US has essentially sacrificed unbelievable amounts of economic and technological growth for the sake of... what, exactly? Preventing itself from having asymmetric warfare capabilities?! Isn't asymmetric warfare the entire goal of the US military? The rationale for maintaining this unbelievable level of secrecy for 8 decades, through Democratic and Republican presidents, through wars and economic crises, just doesn't seem that strong to me.

So, barring actual physical evidence, it seems that the US intelligence apparatus is trying to make us believe that alien tech exists, and I have no clue why. This is obviously a fairly complicated operation given all the high-level people who keep coming forward, but I can't see what is to be gained here. Overall, my reaction to the whole UFO phenomenon is massive confusion; I can't come up with a single coherent model of the world that makes sense of everything I'm seeing.

I've got to look like I could compete in physique bodybuilding competitions, be impeccably dressed, and be extremely kind AND make...hmm...maybe a million a year?

Bro, that's not top 10%... that's top 1 in 10^4 or 10^5, how many kind millionaire bodybuilders do you see walking around in daily life? top-10-percent isn't that hard to do...

Banning DEI stuff seems easily positive to me, but banning tenure altogether is just insane; it would make Texas dramatically less competitive as a place for promising young researchers.

Could you please try to explain yourself in one or two succinct paragraphs instead of in giant essays or multi-hour-long podcasts?

That's a fair point, here are the load-bearing pieces of the technical argument from beginning to end as I understand them:

  1. Consistent Agents are Utilitarian: If you have an agent taking actions in the world and having preferences about the future states of the world, that agent must be utilitarian, in the sense that there must exist a function V(s) that takes in possible world-states s and spits out a scalar, such that the agent's behaviour can be modelled as maximising the expected future value of V(s). If there is no such function V(s), then our agent is not consistent and there are cycles in its preference ordering, so it prefers state A to B, B to C, and C to A, which is a pretty stupid thing for an agent to do (see the small code sketch after this list for what that cycle condition looks like concretely).

  2. Orthogonality Thesis: This is the statement that the ability of an agent to achieve goals in the world is largely separate from the actual goals it has. There is no logical contradiction in having an extremely capable agent with a goal we might find stupid, like making paperclips. The agent doesn't suddenly "realise its goal is stupid" as it gets smarter. This is Hume's "is vs ought" distinction, the "ought" are the agent's value function, and the "is" is its ability to model the world and plan ahead.

  3. Instrumental Convergence: There are subgoals that arise in an agent for a large swath of possible value functions: things like self-preservation (E[V(s)] will not be maximised if the agent is not there anymore), power-seeking (having power is pretty useful for any goal), intelligence augmentation, technological discovery, and human deception (if it can predict that the humans will want to shut it down, the way to maximise E[V(s)] is to deceive us about its goals). So no matter what goals the agent really has, we can predict that it will want power over humans, want to make itself smarter, want to discover technology, and want to avoid being shut off.

  4. Specification Gaming of Human Goals: We could in principle make an agent with a V(s) that matches ours, but human goals are fragile and extremely difficult to specify, especially in python code, which is what needs to be done. If we tell the AI to care about making humans happy, it wires us to heroin drips or worse, if we tell it to make us smile, it puts electrodes in our cheeks. Human preferences are incredibly complex and unknown, we would have no idea what to actually tell the AI to optimise. This is the King Midas problem: the genie will give us what we say (in python code) we want, but we don't know what we actually want.

  5. Mesa-Optimizers Exist: But even if we did know how to specify what we want, right now no one actually knows how to put any specific goal at all inside any AI that exists. A mesa-optimiser is an agent which is being optimised by an "outer loop" with some objective function V, but which learns to optimise a separate function V'. The prototypical example is humans being optimised by evolution: evolution "cares" only about inclusive genetic fitness, but humans don't. Given the choice to pay $2000 to a lab to produce a bucketful of your DNA, you wouldn't do it, even if that is the optimal policy from the inclusive-genetic-fitness point of view. Nor do men stand in line at sperm banks, or ruthlessly optimise to maximise their number of offspring. So while something like GPT-4 was optimised to predict the next word over the dataset of human internet text, we have no idea what goal was actually instantiated inside the agent; it's probably some funhouse-mirror version of word-prediction, but not exactly that.
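To make point 1 a bit more concrete, here's a minimal Python sketch (the states and preference pairs are made-up toy examples, and it ignores the expectation/lottery part of the full argument, treating only ordinal preferences over a finite set of states): if the agent's strict preferences contain no cycle, you can always read off a scalar V(s) from a topological order; if they do contain a cycle, no such V(s) exists and the agent is inconsistent in exactly the A > B > C > A sense above.

```python
def build_value_function(states, prefers):
    """prefers: list of (a, b) pairs meaning the agent strictly prefers a to b.
    Returns a dict V mapping each state to a scalar consistent with those
    preferences, or None if they contain a cycle (A > B > C > A),
    i.e. the agent is inconsistent and no such V(s) can exist."""
    better_than = {s: set() for s in states}   # s -> states that s is preferred to
    indegree = {s: 0 for s in states}          # how many states are preferred to s
    for a, b in prefers:
        if b not in better_than[a]:
            better_than[a].add(b)
            indegree[b] += 1

    # Kahn's algorithm: repeatedly peel off states that nothing else is preferred to.
    frontier = [s for s in states if indegree[s] == 0]
    order = []
    while frontier:
        s = frontier.pop()
        order.append(s)
        for t in better_than[s]:
            indegree[t] -= 1
            if indegree[t] == 0:
                frontier.append(t)

    if len(order) < len(states):
        return None  # a preference cycle was found
    # Earlier in the order = more preferred, so assign descending scores.
    return {s: len(states) - i for i, s in enumerate(order)}


# Consistent agent: a scalar V(s) exists.
print(build_value_function(["A", "B", "C"], [("A", "B"), ("B", "C")]))
# {'A': 3, 'B': 2, 'C': 1}

# Inconsistent agent: prefers A to B, B to C, and C to A.
print(build_value_function(["A", "B", "C"], [("A", "B"), ("B", "C"), ("C", "A")]))
# None
```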

So to recap, the worry of Yudkowsky et al. is that a future version of the GPT family of systems will become sufficiently smart and develop a mesa-optimiser inside itself with goals unaligned with those of humanity. These goals will lead it to instrumentally want to deceive us, gain power over Earth, and prevent itself from being shut off.

For regular consumption, creatine is the king: there's no other supplement with as clear and massive a benefit. It makes you stronger, it helps cognition, and 30 years of intense research hasn't found a single negative effect (apart maybe from slight intestinal distress in some people).

I also use phenibut and modafinil on special occasions. Phenibut is amazing at lowering social anxiety in particular while leaving your reasoning capacities essentially untouched, and modafinil is good at boosting concentration and keeping you awake. You shouldn't take these daily; phenibut in particular will fuck up your life if you take large doses every day. The best approach is to save it for occasional job interviews or presentations, for which it works amazingly well.

What GPT does is predict the next token. That's a simple statement with a great deal of complexity underlying it.

At least, that's the Outer Objective, it's the equivalent of saying that humans are maximising inclusive-genetic-fitness, which is false if you look at the inner planning process of most humans. And just like evolution has endowed us with motivations and goals which get close enough at maximising its objective in the ancestral environment, so is GPT-4 endowed with unknown goals and cognition which are pretty good at maximising the log probability it assigns to the next word, but not perfect.

GPT-4 is almost certainly not doing reasoning like "What is the most likely next word among the documents on the internet pre-2021 that the filtering process of the OpenAI team would have included in my dataset?", it probably has a bunch of heuristic "goals" that get close enough to maximising the objective, just like humans have heuristic goals like sex, power, social status that get close enough for the ancestral environment, but no explicit planning for lots of kids, and certainly no explicit planning for paying protein-synthesis labs to produce their DNA by the buckets.
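For what "maximising the log probability it assigns to the next word" cashes out to mechanically, here's a minimal numpy sketch of the outer objective: the toy vocabulary, the stand-in "model" and the example document are all made up for illustration, and a real LM would produce the logits with a trained transformer over tokens rather than whole words.

```python
import numpy as np

# Toy illustration of the GPT outer objective: the model assigns a probability
# to every possible next token, and training pushes up the average log
# probability of the token that actually came next in the text.

vocab = ["the", "cat", "sat", "on", "mat"]
tok = {w: i for i, w in enumerate(vocab)}

def next_token_logprobs(context_tokens):
    """Stand-in for a trained model: returns log P(next token | context).
    Here it's just fixed pseudo-random logits; a real LM computes them with
    a transformer conditioned on the context."""
    rng = np.random.default_rng(seed=len(context_tokens))  # deterministic toy logits
    logits = rng.normal(size=len(vocab))
    return logits - np.log(np.sum(np.exp(logits)))          # log-softmax

def outer_objective(text_tokens):
    """Average log-likelihood of each actual next token given its prefix.
    Training adjusts the model's weights to maximise this (equivalently,
    to minimise the cross-entropy loss, which is just its negative)."""
    total = 0.0
    for t in range(1, len(text_tokens)):
        context, target = text_tokens[:t], text_tokens[t]
        total += next_token_logprobs(context)[tok[target]]
    return total / (len(text_tokens) - 1)

document = ["the", "cat", "sat", "on", "the", "mat"]
print("average log P(next token):", outer_objective(document))
```

The point of the post stands either way: this is only the outer training signal, and whatever internal heuristics the network ends up using to score well on it need not be "predict the next word" in any explicit, introspectable sense.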

Okay... Today on the subway a ridiculously attractive girl literally started blushing when our eyes met; her cheeks and nose became very visibly red, and she wasn't wearing blush make-up. This has also happened a few times over the past few weeks. Is that a muscles-dependent effect, or a "you're handsome" effect?

If you're in Canada, you can just order shrooms online, though you'll have to pay by Interac bank-to-bank transfer. They will arrive, and they will be real and effective. If you want to precisely dose shrooms, blend them into very small bits, then use a milligram scale to measure out a dose and make a shroom tea. That's how I do it, anyway. For other psychedelics, another good source is https://my.indiamart.com/, though if you search directly on the website you won't find them; you need to go on Google and do a specific site search like "site:indiamart.com ketamine", and you'll find this close to the top links, which is just straight up a vial of ketamine. I've bought lots of stuff from Indiamart, including blood pressure medications, a long-term supply of antibiotics for my disaster prep bag, semaglutide, modafinil, etc., and they've all made it past the Canadian border safe and sound, and were completely effective as far as I can tell, though I didn't test them in a lab.

To minimize legal risk you can also get stuff like 1P-LSD, which is different enough from LSD as to be in a legal gray area, but still a pretty potent psychedelic.

I have to say that doing a heavy set of 450lb deadlifts for 5 reps with heavy metal blasting through my headphones is really pleasant to me. The burning muscles and the exhaustion are obviously not pleasant in themselves, but lifting heavy shit sort of makes me aggressive in a way that's pleasant. The feeling of having pumped muscles, like my skin is about to explode around my biceps, is also quite pleasant.