JhanicManifold

6 followers   follows 0 users   joined 2022 September 04 20:29:00 UTC

User ID: 135


In the drug realm:

Biggest by far: semaglutide for weight loss. Works so well that Walmart is noticing sales drop... if that isn't an amazing endorsement I don't know what is.

Second biggest: occasional moderate dose (1.5g) phenibut taken 8 hours before a stressful social situation.

I've had the same "I can do it myself" mentality for years, and I did have intermittent successes before starting semaglutide. I can stick to a diet perfectly for roughly a month at a time and lose 10 lbs; the problem always comes when life gets stressful and the mental energy I can assign to the diet starts to decline. If it's crunch time and I have an important presentation tomorrow, I can't also be really fucking hungry from a 1000 cal/day deficit, so I just throw the diet out the window for the stressful period.

Semaglutide takes care of all that: I no longer need zero stress in my life to stick to the diet, that now happens more or less effortlessly. I still need enough mental space to prep my diet foods at regular intervals so I don't eat out instead of eating my home-cooked stuff, but that's a much lower bar than tolerating hunger.

I think I share a common preference among men that I'd rarely pass up on a hookup with an attractive woman but would probably not date a woman long-term who has slept around too much

I'd pass on even a hookup with an attractive woman who has had too many partners. Some character traits or behaviours lower a woman's attractiveness so much that she just drops below a critical level for me. For instance, if I see a woman being cruel to a child, she could look like Emily Ratajkowski and I still wouldn't want to fuck her (or maybe at that point it wraps back around to hate-fucking, I'm not sure).

But yes, I think casual sex is unethical, because "casual sex" is for men what "friendzoned orbiters" is for women: in both cases only one party gets most of what they want, sex for men and emotional intimacy for women. In most real cases of friendzoned guys or of girls having casual sex, no one is making it clear that the relationship has no chance of going further; both situations are fundamentally consequences of power imbalances.

On this site, in the top right you see a "contact supplier" section, and in its top right corner there's a little button with "business card" hover-over text. If you click on that and enter your email, it'll send you a WhatsApp contact number, and then you can ask for semaglutide. Payment is a bit of a hassle and is done through Wise, MoneyGram, or Western Union (or some other international money-transfer service). I can confirm that you do indeed receive some powder which has the effects I'd expect of semaglutide when injected.

/images/168936318711942.webp

/images/16893631876250272.webp

This probably has to do with sleep quality. The 4 main things that make a noticeable difference for me are:

  1. stopping caffeine

  2. magnesium supplements before sleep

  3. Some form of bed cooling system (I use the bedjet 3). If you're hot or sweating or cold while you sleep, this will make a massive difference

  4. A weird vibrating ankle bracelet called the Apollo Neuro that works kind of by magic (see this)

It's a close cousin to benzodiazepines (though much easier to acquire), so the withdrawal symptoms are massive as fuck; there's a reddit community dedicated to people who've fucked up their lives taking phenibut every day, though I can't seem to find it right now. I also notice increased anxiety the day after I take a dose. It works very well for my use case, but I periodically remind myself not to treat it lightly.

For regular consumption, creatine is the king: there's no other supplement with as clear and massive a benefit. It makes you stronger, helps cognition, and 30 years of intense research hasn't found a single negative effect (apart maybe from slight intestinal distress in some people).

I also use phenibut and modafinil on special occasions. Phenibut is amazing at lowering social anxiety in particular while leaving your reasoning capacities essentially untouched, and modafinil is good at boosting concentration and keeping you awake. You shouldn't take these daily; phenibut in particular will fuck up your life if you take large doses every day. The best approach is to save it for occasional job interviews or presentations, for which it works amazingly well.

For machine learning in particular and scientific computing more generally, you have the following extremely useful libraries, all in python, because that's the most common language here:

  1. Numpy, short for Numerical Python. This is a very deep library that covers everything from numerical derivatives and integrals to matrix multiplication and the rest of linear algebra, sorting arrays of numbers, and even simple linear regression. The main workhorse is the "ndarray" datatype that numpy defines, which stores a multi-dimensional array of numbers very efficiently (see the short sketch after this list).

  2. Scipy, short for Scientific Python. This is an extension of numpy which adds optimisation routines, differential-equation solvers, algebraic-equation solvers, etc. Less overwhelmingly used than numpy, but still very common.

  3. Scikit-learn. This is the library to use if you want off-the-shelf classical machine learning algorithms, so anything outside of deep-learning stuff. Decision trees, linear/logistic regression, clustering, nearest neighbors, or whatever, this does basically all of it.

  4. matplotlib. This is the most common visualisation library to make graphs or charts. Endlessly customizable, and hence kind of a pain to use, but it's the most common and very useful.

  5. Pytorch. Now we're getting into deep learning and GPU computing. Pytorch essentially does much of the same job as Numpy, but it also automatically interfaces with your GPU, so that all your matrix multiplies are run much, much faster. This is the library you use to define your deep learning models, and the one you use to write your training code.
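
A minimal sketch of how a few of these fit together, with made-up random data standing in for anything real: numpy holds the arrays, scikit-learn gives you an off-the-shelf classifier, and matplotlib lets you look at the result.

```python
# Minimal sketch on synthetic data: numpy arrays feed a scikit-learn
# classifier, and matplotlib visualises the points. Nothing here is
# specific to any real dataset.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # 200 points, 2 features (an ndarray)
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # a simple linearly-separable label

clf = LogisticRegression().fit(X, y)       # classical ML in one line
print("train accuracy:", clf.score(X, y))

plt.scatter(X[:, 0], X[:, 1], c=y)         # colour points by class
plt.xlabel("feature 1")
plt.ylabel("feature 2")
plt.show()
```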

And so on and so on. There are other libraries like Pandas for data analysis, and all the huggingface libraries for deep learning, which give you even more abstraction, so that you can use transformers without even knowing how they work inside. I don't think there is any more pleasant way of getting to know these libraries than reading a few textbooks and then inevitably slogging through their documentation when the need arises.
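
And to make the Pytorch point concrete, here is roughly the smallest possible model-plus-training-loop, again on synthetic data; this is a sketch of the general pattern, not anyone's actual training code, and a real project just swaps in a bigger model and a real dataloader.

```python
# Bare-bones PyTorch model and training loop on fake data.
import torch
import torch.nn as nn

X = torch.randn(256, 10)                  # fake inputs
y = torch.randn(256, 1)                   # fake regression targets

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
# model.to("cuda") would move it to the GPU, which is the whole point of the library
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    optimizer.zero_grad()                 # clear old gradients
    loss = loss_fn(model(X), y)           # forward pass
    loss.backward()                       # backprop
    optimizer.step()                      # update the weights
```

The huggingface libraries sit one level above this: something like pipeline("sentiment-analysis") from the transformers package downloads a pretrained model and hides all of the above.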

"You will not be punished for your sins, you will be punished by them"

Conversely, a good deed is its own reward, and a good conscience really does carry a lot of pleasure intrinsic to it.

Do you think the COVID vaccine will literally take 5 years off most people's lives? There have been semaglutide weight-loss studies going up to something like 24 months without serious adverse effects, and weaker drugs in the same GLP-1 class like liraglutide have been used for years. We might find negative effects later on, but in general, stuff that doesn't have massive negative effects in the medium term doesn't suddenly develop massive negative effects in the long term.

And regardless of this, if any negative effects happen in 30 years, I fully expect future AI medicine to make them completely trivial.

Semaglutide works really, really fucking well.

This part never usually pans out lol.

Oh it did for me. I still remember her reaction when she saw me for the first time in like 6 months: *looks at me, does a double take, eyes widen, face goes fully red, furtive looks the whole evening*. I actually feel a little bit guilty about just how good revenge feels.

Hmm, I would say that if the secret is like "AI will kill everyone and there's nothing you can do to stop it", don't tell her. If the secret is like "your father was a murderer" or "you have terminal cancer", then do tell her, because it's "her business" in some sense. Another factor is how much knowing the secret will eat at you over time: if the person is a close friend of yours, keeping the secret forever will be a great burden and you should tell them; if it's just an acquaintance, then not so much.

If you think you're good at acting and deception, you could even indirectly ask for their opinion on the matter: all you have to do is invent a new secret with all the relevant characteristics, attribute it to some distant friend, then ask them whether you should tell that friend.

Evolution is not an algorithm at all. It's the term we use to refer to the cumulative track record of survivor bias in populations of semi-deterministic replicators.

This is just semantics, but I disagree. If you have a dynamical system that you're observing with a one-dimensional state x_t and a state-transition rule x_{t+1} = x_t - 0.1 * (2x_t), you can either look at the given dynamics and see no explicit optimisation being done at all, or you can notice that this system is equivalent to gradient descent with lr=0.1 on the function f(x) = x^2. You might say that "GD is just a reification of the dynamics observed in the system", but the two ways of looking at the system are completely equivalent.
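
To make the equivalence concrete, here's a trivial numerical check: since f(x) = x^2 has derivative f'(x) = 2x, the raw dynamics and the explicit gradient-descent update trace out exactly the same trajectory.

```python
# The transition rule x_{t+1} = x_t - 0.1 * (2 * x_t) vs. gradient descent
# with lr = 0.1 on f(x) = x^2, i.e. x - lr * f'(x) with f'(x) = 2x.
x_dyn = x_gd = 3.0
lr = 0.1
for t in range(5):
    x_dyn = x_dyn - 0.1 * (2 * x_dyn)   # "just the dynamics"
    x_gd = x_gd - lr * (2 * x_gd)       # explicit gradient step
    print(t, x_dyn, x_gd)               # identical at every step
```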

a transformer is wholly shaped by the pressure of the objective function, in a way that a flexible intelligent agent generated by an evolutionary algorithm is not shaped by IGF (to say nothing of real biological entities). The correct analogies are something like SGD:lifetime animal learning; and evolution:R&D in ML

Okay, point 2 did change my mind a lot; I'm not sure how I missed that the first time. I still think there might be a possibly-tiny difference between the outer objective and the inner objective for LLMs, but the magnitude of that difference won't be anywhere close to the difference between human goals and IGF. If anything, it's remarkable that evolution managed to imbue some humans with desires this close to explicitly maximising IGF, and if IGF were being optimised with GD over the individual synapses of a human, of course we'd have explicit goals for IGF.

Hmm, basically all the libraries I listed except maybe pytorch haven't changed all that much since 2021, so GPT-4 should still be very useful with all of them. What it will have trouble with is a library like "Transformers" by huggingface, which lets you automatically download and use pretrained deep learning models. But even to use a super-high-abstraction library like that one you still need a bunch of "glue skills", like knowing how to load a .png image from your computer into a format the high-level functions can understand, and how to interpret and visualise their output. GPT-4 would be amazing for all of that.
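
For example, the "load a .png into something the model understands" glue step is usually just a couple of lines; a sketch assuming Pillow is installed and using a made-up filename:

```python
# Load an image file into a numpy array that high-level libraries can consume.
# "photo.png" is a hypothetical file name for illustration.
from PIL import Image
import numpy as np

img = np.array(Image.open("photo.png").convert("RGB"))  # shape (H, W, 3), dtype uint8
print(img.shape, img.dtype)
```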

Ben Shapiro also makes the point that they drop "knock bombs" before the real bombs; the only purpose of those bombs is to shake the building and tell civilians to evacuate.

Bob Lazar is a lying hack, but that particular point of his is true, it's just that in that case, there's no downside to revealing the secret. Other countries won't do much better at deciphering the hidden tech, so we might as well use the US's dominance in science to make as much progress as possible with this.

What GPT does is predict the next token. That's a simple statement with a great deal of complexity underlying it.

At least, that's the Outer Objective. It's the equivalent of saying that humans are maximising inclusive genetic fitness, which is false if you look at the inner planning process of most humans. And just like evolution has endowed us with motivations and goals which get close enough to maximising its objective in the ancestral environment, so GPT-4 is endowed with unknown goals and cognition which are pretty good at maximising the log probability it assigns to the next word, but not perfect.
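
For concreteness, the outer objective being described is just the average negative log probability the model assigns to each actual next token; a toy version, with random tensors standing in for a real model's outputs and a real text corpus:

```python
# Toy version of the next-token objective: logits over a vocabulary at each
# position, loss = mean of -log p(actual next token). Random tensors stand in
# for a real model and real data.
import torch
import torch.nn.functional as F

vocab_size, seq_len = 50, 8
logits = torch.randn(seq_len, vocab_size)               # stand-in model outputs
next_tokens = torch.randint(0, vocab_size, (seq_len,))  # stand-in "true" next tokens
loss = F.cross_entropy(logits, next_tokens)             # what training pushes down
print(loss.item())
```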

GPT-4 is almost certainly not doing reasoning like "What is the most likely next word among the documents on the internet pre-2021 that the filtering process of the OpenAI team would have included in my dataset?". It probably has a bunch of heuristic "goals" that get close enough to maximising the objective, just like humans have heuristic goals like sex, power, and social status that got close enough for the ancestral environment, but no explicit planning for lots of kids, and certainly no explicit planning for paying protein-synthesis labs to produce their DNA by the bucket.

Buying it on indiamart.com from Indian pharmacies; I haven't really had problems importing it into Canada.

Yeah, I just use insulin needles from amazon, with alcohol prep pads. The only thing you can't get from amazon is bacteriostatic water, which is still over-the-counter, just not on amazon (I got it from here). Peptides in general are best stored dry in a freezer or fridge, but I keep them at room temperature away from the light; the tests I've seen don't really show any meaningful degradation over a few months, though they start to degrade much faster once you add water.

Another thing: you probably won't be able to buy just a month's supply. The minimum order quantity is 10 vials of 2mg, which come in a prepackaged little box disguised as a chinese beauty mask; I don't think the supplier is set up to ship orders which aren't multiples of 10 vials.

Surprisingly, it kind of does! It felt like it helped me not think of work or other things while going to sleep; the vibrations on your skin have a way of capturing attention very effectively. I tried it out for a few weeks after seeing it recommended here, but I'm now returning it: the difference just isn't that big for me, nowhere near the magnitude of what the bedjet does.

Yeah, I wouldn't really trust the tests on that page too much. I took their IQ test and got 158 on it, which is a ridiculous overestimation based on the previous tests I've taken, where I got something like 135-140.

I'd be super happy to be convinced of the contrary! (Given that the existence of mesa-optimisers is a big reason for my fears of existential risk.) But do you mean to imply that GPT-4 is explicitly optimising for next-word prediction internally? And what about a GPT-4 variant that was only trained for 20% of the time that the real GPT-4 was? To the degree that LLMs have anything like "internal goals", those should change over the course of training, and no LLM is trained anywhere close to completion, so I find it hard to believe that the outer objective is being faithfully transferred.

I mean, if you want large costs for the same benefits, there are plenty of effective weight-loss drugs with a shit ton of unhealthy side effects; DNP and trenbolone will make you lose weight, they just might also kill you lol, and their side effects are not subtle at all. Free lunches are rare in the world, but some lunches are certainly more expensive than others.

Get a meat thermometer and grill to 170F internal temperature. I mix 4 eggs, 2 cups panko bread crumbs, 1.3kg of extra lean beef, 3 tbsp smoked paprika, 2 tbsp garlic salt and 2 tbsp crushed oregano together into the burger paste, and they turn out great every time if grilled at around 450F. Spices don't burn if you mix them into the meat itself.

I might have fucked up one of the easier ones, but gotten avulse correctly. That would explain things if difficult questions are worth more.