lagrangian

0 followers   follows 1 user   joined 2023 March 17 01:43:40 UTC
User ID: 2268
Verified Email


any other fair ideas

Fine companies a multiple of the wages paid to illegals. This pays for itself. Do it aggressively. Illegals will not want to be here if there is no work for them to do, so they will self-deport.

That is, as always: align incentives. Don't try to make people do what you want. Make people want what you want.

Absolute raging bullshit. The chances that the most major breakthrough in physics, debatably ever, happened without any indication on arXiv etc. are tiny. The chance that, furthermore, the engineering work to scale up the discovery happened without being leaked massively is zero.

I am embarrassed that the responses here so far are giving this any credence at all. It's always nice to be reminded of how wrong I must be about areas I know little about given that when people comment on my areas, they're often confidently hilariously wrong.

Overall I have a very high opinion of Aella's integrity and have no reason to believe she's intentionally duplicitous,

I feel oppositely.

Her entire brand is attention seeking behavior through discussion (and sale) of sex. She got attention and advertising by redefining a thing in a way that you found interesting enough to podcast and effort post about.

This is what happens when you have a constitutional right that a sufficient number of states simply choose not to recognize as such; look at how many southern states kept passing more and more onerous abortion restrictions to get around Roe

This comparison irritates and mystifies me.

The right to bear arms is quite directly in 2A:

A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed

But the right to abortion is...nowhere. It's inferred from the right to privacy, which is inferred from due process (5/14A):

No person shall ... be deprived of life, liberty, or property, without due process of law

I'll grant that there's some legal history and subtlety around what counts as an "Arm," but that's a much smaller inferential distance than the above.

Why would "abortion, but only up to a certain point in the growth" be part of...I guess "liberty"? But, "drug legalization" somehow isn't?

In classic Mottian fashion, I'm a high decoupler in general, and on this - I'm personally anti-gun and pro-abortion. But that doesn't change that the legal footing of the two is exactly opposite in strength: my desires are not constitutionally protected.

Needs more paragraph breaks.

96% to win, 92% to win the popular vote. How is so strong a conclusion possible after all the last-minute surprises last year? There are huge amounts of uncounted votes in swing states.

Context: I strongly do not think there was a material amount of fraud last time around, and I don't expect there will be this time either

Meta: please consider making multiple top level comments instead of one (excellent!) multi-topic comment. The responses get disorganized as-is.

We have lots of programmers and aspiring programmers on here. Is there interest in my hosting a (possibly regular) "office hours" via Discord etc.? Career advice, code review, anything I can help with. --FAANGer

Edit: link is http://meet.google.com/ieq-zixv-xwf

On the plausibility of Mars Bases vs that of AI

Responding to @FeepingCreature from last week:

Out of interest, do you think that a mars base is sci-fi? It's been discussed in science fiction for a long time.

I think any predictions about the future that assume new technology are "science fiction" pretty much by definition of the genre, and will resemble it for the same reason: it's the same occupation. Sci-fi that isn't just space opera, i.e. "fantasy in space," is inherently just prognostication with plot. Note stuff like Star Trek predicting mobile phones, or Snow Crash predicting Google Earth: "if you could do it, you would, we just can't yet."

That was a continuation of this discussion in which I say of AI 2027:

Is it possible that AGI happens soon, from LLMs? Sure, grudgingly, I guess. Is it likely? No. Science-fiction raving nonsense. (My favorite genre! Of fiction!)

As to Mars:

Most of what I know here comes from reading Zach Weinersmith (of SMBC)'s A City on Mars. It was wildly pessimistic. For a taste, see Gemini chapter summaries and an answer to:

"Given an enormous budget (10% of global GDP) and current tech, how realistic is a 1 year duration mars base? an indefinite one? what about with highly plausible 2035 tech?"

I agree with the basic take there, both as a summary of the book and as a reflection of my broader (but poorly researched) understanding/intuition of the area: Mars is not practical. We could probably do the 1 year base if we don't mind serious risk of killing the astronauts (which, politically, probably rules it out. Maybe Musk will offer it as a Voluntary Exit Program for soon-to-be-ex X SWEs?)

My main interesting/controversial (?) take: there is an important sense in which Mars bases are much less baseless sci-fi nonsense than AI 2027.

Mars is a question of logistics: on the one hand, building a self-contained, O2-recycling, radiation-hardened, etc., base requires tech we may (?) not quite have yet. On the other hand, it strikes me as closer to refinements of existing tech than to entirely new concepts. Note that "enormous budget" is doing a lot of work here. I am not saying it is practical to expect we will pay to ship all of this to Mars, or risk the lives, just that there is good reason to believe we could.

AI is a question of fundamental possibility: by contrast, with AI, there is no good reason to think we can create AI sufficient to replace OpenAI-grade researchers within foreseeable timelines/tech. Junior SWEs, maybe, but it's not even clear they're on average positive-value beyond the investment in their future (see my previous rant about firing one of ours).

I don't understand how anyone can in good faith believe that even with an arbitrary amount of effort and funding, AGI, let alone ASI, is coming in the next few years. Any projection out decades is almost definitionally in the realm of speculative science-fiction here. Even mundane tech can't be predicted decades out, and AI has higher ceilings/variance than most things.

And yet, I am sensitive to my use of the phrase "I don't understand." People often unwittingly use it intending to mean "I am sure I understand." For example: "I don't understand how $OTHER_PARTY can think $THING." This is intended to convey "$OTHER_PARTY thinks $THING because they are evil/nazis/stupid/brainwashed." But, the truth of their cognitive state is closer to the literal usage: they do not understand.

So, in largely the literal sense of the phrase: I do not understand the belief in and fear of AI progress I see around me, in people I largely respect on both politics and engineering.

to the absurd ("speeding is actually safer because a vehicle that isn't keeping up with traffic causes more accidents when people try to pass").

I have always assumed this is true. The famous graph of it is called the Solomon curve, showing that the lowest rate of accidents occurs slightly above the mean speed of traffic. It's from 1964, so take it with an even larger grain of salt than most studies, but I don't see why it's an "absurd" claim that this is true.

Doing some further research, what I'm seeing is that the rate of accidents is, as per Solomon, lowest at the speed of traffic. But, that the fatality risk and injury severity if you are in an accident increase with speed. This makes it a non-obvious EV-maximization problem to answer what speed to drive at.
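To make that EV problem concrete, here's a toy model. Every functional form and constant below is illustrative, made up for the sketch rather than fitted to any data: accident probability is U-shaped around the prevailing traffic speed (the stylized Solomon-curve shape), severity given an accident scales roughly with kinetic energy, and expected harm is their product.

```python
# Toy model; all shapes and constants are assumptions for illustration.
V_TRAFFIC = 75.0  # mph, assumed prevailing traffic speed

def p_accident(v: float) -> float:
    # U-shaped in speed relative to traffic (stylized Solomon curve)
    return 0.001 * (1 + ((v - V_TRAFFIC) / 10) ** 2)

def severity(v: float) -> float:
    # harm given an accident grows roughly with kinetic energy, ~v^2
    return (v / 70.0) ** 2

def expected_harm(v: float) -> float:
    return p_accident(v) * severity(v)

# Under these assumptions the harm-minimizing speed sits just below
# traffic speed, not at the posted limit.
best = min(range(55, 96), key=expected_harm)  # 74 mph here
```

The point isn't the particular number, just that "safest speed" depends on both curves, and the optimum lands between the limit and traffic speed rather than at either.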

It is absolutely plausible that accident rate varies with # of cars passing you (or that you pass). My mental model is that the safe thing is to go the same speed as the cars in your lane. In principle if that were faster than road conditions allow (rainy, curvy, but somehow left lane is still doing 85), it's an unsafe lane - but probably still safer to travel at the speed of those around you.

I'm open to the idea that going at the +10 found in slower lanes is safer than going at the +20 found in the faster lanes. But, I think "going the speed limit is safer, in any lane" is an extraordinary claim requiring extraordinary evidence.

I'm curious, Mottizens: what speed would you drive at in perfect conditions (straight, flat, sunny, minimal traffic) on a 70 mph interstate?

Landing legs are heavy and any mass you lift up lowers payload

Notably, the relationship is exponential, so it lowers payload more than you would expect.

The laziest possible search gives me 550,000 kg for the rocket, 8,000 kg for payload, and 2,000 kg for the landing legs, so 0.36% of total but an astonishing 25% of payload. Given the exponential relationship, even the 0.36% could have real impact, so 25% is somewhere between "holy shit" and "did lagrangian mess up the math and/or use poor data"?
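The exponential relationship here is the Tsiolkovsky rocket equation, m0 = mf · exp(Δv/ve): every kilogram of dry mass you want delivered (landing legs included) multiplies out to many kilograms at liftoff. A sketch with assumed round numbers (these are illustrative, not the rocket in the parent comment):

```python
import math

def liftoff_mass(final_kg: float, dv_ms: float, ve_ms: float) -> float:
    """Tsiolkovsky: m0 = m_final * exp(dv / ve)."""
    return final_kg * math.exp(dv_ms / ve_ms)

VE = 3000.0   # m/s, assumed effective exhaust velocity
DV = 9400.0   # m/s, assumed delta-v to low Earth orbit

# Each kg of landing legs costs exp(DV/VE) kg of liftoff mass.
cost_per_kg = liftoff_mass(1.0, DV, VE)   # ~23 kg at liftoff per kg delivered
legs_cost = liftoff_mass(2000.0, DV, VE)  # ~46,000 kg at liftoff
```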

Is it even remotely possible that she did not already know her reputation? She is pretty literally an attention whore, in addition to the normal kind, and this is more of the same. She talks extensively about the efficacy of advertising herself, on her substack.

We finally fired the guy. (I'm a SWE at FAANG.) I'm so relieved. You would not believe how much time and documentation it took. I'll estimate 20 pages of explaining his mistakes, not counting the code itself, over the course of six months. I have no idea how much time and energy our manager had to put into it - probably more than me. After 3.5 years, he was at what I consider 1-1.5 years of skill. How the hell he got promoted, I do not know.

I got asked to be his lead (kill me), which is good for my shot at promo to senior (results tomorrow!), so obviously I said yes. I immediately start complaining. Our manager doesn't see the problem. After a couple months of casual complaining (read: spending ~half my highly-valued weekly 1:1 sharing examples), I put together a meticulous spreadsheet. He sees the problem and says Junior needs to rapidly improve or will be fired. Junior makes no progress. Junior insists he is making great progress. Four months later, Junior is offered a severance or PIP and, in his first display of real intelligence, takes it.

Two of my favorite mistakes:

  1. He asserted that my code review idea to use an interface was conceptually impossible because when he tried it, the compiler said "attempting to assign weaker access privileges; was public," which apparently he found inscrutable. Solution: change the impl to also be public.
  2. On a project whose entire point was "set a timestamp, then filter stuff out if it's before that time," he set the timestamp wrong. It was set to the epoch. He had no tests that caught this. After this was pointed out, he pushed a new commit that was also wrong (differently). Twice. After four months on this project, he almost finished - the smaller half of it, which should have taken 2-3 weeks.

I have been complaining about this to everyone I know for months. It was getting to be a problem. Work is so much chiller now. I can literally see the day he got fired in my sleep-tracking metrics, as everything discontinuously improved.

What drove me the craziest was that my manager, reasonably, was too discreet to be straight with me about his agreement. I'm not sure at what point I really won him over. This left me chronically unsure whether, in the eyes of He Who Writes My Reviews, I was nitpicky and disagreeable, shitting on a coworker he thought was just fine. Thankfully, the ending of the story strongly suggests he didn't think that, but it's still unclear if it hurt my reputation.

Or helped it - I did just save the company over 1 SWE-yr/yr, in perpetuity.

The FairTax is

Could you say more about what the FairTax actually is? I'm at best dimly aware of it. Something like "a flat sales tax on all goods, with no other taxes (income, property)"?

So in comparing, say, this site to Reddit, there's probably some complex code for managing the orders of magnitude greater traffic that themotte just doesn't worry about?

Right. Zorba pays for the site out of pocket, but that is not scalable. The site occasionally goes down - we even lost most of a day of posts not too long ago. That's no big deal at our scale - just ssh in, figure out the bug, deploy something manually, etc.

But at e.g. Google scale, it's $500k/minute of gross revenue on the line in an outage, to say nothing of customers leaving you permanently over the headache. Fractions of a percent of optimization are worth big bucks. Compliance headaches are real. Hardware failures are a continual certainty.

Read about the brilliance behind Spanner, the database a huge amount of Google is built on: their own internet cables, because why choose between C[onsistency] and A[vailability] when you can just not have network P[artitions]?

You need an incredible degree of fault tolerance in large systems. If n pieces in a pipeline each have a probability p of working, the whole system works with probability p^n, which decays exponentially in n.
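The p^n point in numbers, as a minimal sketch:

```python
def pipeline_reliability(p: float, n: int) -> float:
    # n independent stages, each working with probability p
    return p ** n

# "Three nines" per stage leaves a thousand-stage pipeline
# working only ~37% of the time.
r = pipeline_reliability(0.999, 1000)  # ~0.37
```

That's why per-stage reliability at large scale has to be extreme before the whole system is merely acceptable.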

Plenty of it is feature bloat, that said. You really can serve an astonishing amount of traffic from a single cheap server.

As others have pointed out: it is definitely causing some /priors updating/ that anybody takes this data even remotely seriously. This would be like polling a bunch of redditors about religion and then reporting on it as if this was a meaningful sample.

Right, that killed me. From Scott, emphasis mine:

I was surprised how certain people were that poly relationships were disasters that couldn’t work, compared to how little of a sign there was of that in the data. I like Aella’s explanation that most mono people’s experience of poly people is mono people “experimenting” with “opening up their relationship”, which is a natural danger zone. An alternative is that Aella got a bad sample (but her sample ought to be much more representative than mine), or that poly people lie / misremember / have a hard time answering surveys.

More representative, maybe, but still not even vaguely representative. "We only polled the people at the back of the church! How religious could they be??" I think Scott's ~asexuality just means he is typical minding really hard on these subjects.

Call airbnb support, get told to fuck yourself, call credit card company, do a charge back, and go back to booking hotels ever after.

non-e bike

Men really aren't built for monogamy, huh? [...] 3. The few who actually just disagree.

I'm certainly in 3. I think most men are, too. I barely have the time or interest to put up with/keep track of one woman at a time. I'll also take monogamy (and an IUD) over condoms and a harem. If I ever blow anything up for the cause, spare me the 72 virgins - I'll take one moderately slutty broad who knows what the fuck she's doing and hates texting.

Cases like Greene's seem to vindicate me.

Do they? He's so far from a typical guy. I have an enormous amount of trouble understanding how anyone's response to any of this is "ah yes, let's post about my legal troubles on Youtube." I may be old.

I'm automating Hinge. Android emulator, pyautogui, PIL, GPT-4o. It's almost too easy.

The flow is:

  1. pyautogui: take a screenshot. With a 1:8 aspect ratio on the emulator this gets the whole profile on screen. Earlier versions scrolled and stitched screenshots together, which mostly worked, but boy do I feel stupid for not thinking of this sooner.
  2. AI: prompt to extract information from the info section (height, job, age, education, etc), an assessment of personality (nerdy, travel loving, high fashion, etc), and a physical description (weight, race, hair color), and an overall assessment of if she's my personality/physical type. (Note: the goal is nerdy but hates travel and isn't high fashion!)
  3. python: ignore literally all of #2 except the job and education. If either matches a whitelist of terms that signal smarts, proceed.
  4. PIL: split the screenshot into sections, using the like buttons.
  5. AI: transcribe (for prompts) or describe (for images) each section (separately) and provide a response. (My favorite part: I have it refer to her as The Candidate, which is how we have to write interview feedback at work.)
  6. AI: given all the transcriptions/responses, pick the best one.
  7. pyautogui: click heart button, type response, send.
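Step 3 can be sketched as a plain whitelist check. The term list, field names, and profile shape below are hypothetical, not the actual script:

```python
# Hypothetical whitelist of terms that signal smarts (step 3).
SMART_SIGNALS = {"engineer", "phd", "physics", "professor", "researcher"}

def passes_filter(profile: dict) -> bool:
    """Proceed only if the job or education field matches the whitelist."""
    text = f"{profile.get('job', '')} {profile.get('education', '')}".lower()
    return any(term in text for term in SMART_SIGNALS)

passes_filter({"job": "Software Engineer"})  # True
passes_filter({"job": "Influencer"})         # False
```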

Costs me about $0.04 to reject, $0.10 to message. I think I can get that down some. I only ran it for one batch, and it got a match faster than I normally do. Small sample size, but I am optimistic.

As to why #3 is so simple - I initially had a hand written weighted average of all the things, but looking at the actual behavior realized that:

  1. Really all I care about is that she's smart and not terribly fat
  2. GPT-4o is not good at telling me if she's fat, so far.

Favorite kerfuffle: it messaged a woman, shown in a photo next to a giant 10 ft novelty plant pot: "is that enormous, or are you tiny?" It...was certainly not the latter.

This raises some questions for me:

  1. Do I let it do more than the first message? Probably not - it's just the endless swiping/messaging into the void I dislike. Conversion rate match -> date is tolerable.
  2. Could I let it literally do everything up to and including putting a calendar event on my calendar? Probably so - I'd say it'd cut my conversion rate in half.
  3. Do I admit I'm doing this? n=1, but I did, immediately, and it went well for me.

This week, in review:

High: had a first date that I think I'm actually more excited about than Ms. Definitely, and there will certainly be a second. Not quite as much in common, but still a lot, still brilliant and attractive, and just...better vibes. More stable and peaceful.

Low: buried the dog.

I mean, you can reasonably infer, but just in case - I bought them on Amazon and put them on my penis and that into the woman.

I've been seeing a woman for two months now. Both early 30s. She's very busy with work (which often takes her out of town), but even so, we've had an all-day date each weekend. She's finally in town during the week, so we're getting dinner tonight. I think I'm going to ask her to be my girlfriend. We talked about things a couple dates ago and established that we're not seeing other people. I feel like exclusive and girlfriend aren't even very different, but a little. More expression of intent to grow things. It's not that we discussed and avoided bf/gf labels last time, we just didn't talk about it.

It's been a minute since I've gotten this far into things with someone. (Four years ago, a three-month relationship; seven years ago, 7 months; some fucking the ex mixed in there...) And the last couple times I have, I felt like I had more sense that there was a reason things wouldn't work out. This one actually seems plausible, which is scary. It's not entirely clear to me how I feel about her, which is reasonable for two months. But it just makes me feel guilty, like am I just saying things and following steps because it seems like the thing to do (and also, sex)? I don't think so, I really think there's something here, but feelings are confusing. Who knew.

We've both basically lived alone and had intense jobs forever. It's weird imagining fitting someone into my life, maybe even living with someone eventually. Weirder, I think I like the idea. This just in: even programmers are social mammals.

Also weird is that I'm in good shape and have money now. I've never been terrible on either front, but man, it does seem to help. How to split expenses is an awkward thing I haven't figured out yet, and I just don't care that much. She seems quite well off, actually, for not being a programmer/doctor/lawyer. Also, hot. And not crazy. She didn't seem to particularly care for my briefly discussing Motte-style politics, but also just doesn't seem to care about politics or be terminally online generally, which might be better than agreeing with me in the first place...

Yeah it didn't help. I just try to remind myself that it's OK for someone to be great at some things and hilariously, hopelessly naive about others.

Is it possible that AGI happens soon, from LLMs? Sure, grudgingly, I guess. Is it likely? No. Science-fiction raving nonsense. (My favorite genre! Of fiction!)

Scott's claim that "AGI not happening soon" is implausible because too many things would have to go wrong is so epistemically offensive to me. The null hypothesis really can't be "exponential growth continues for another n doubling periods." C'mon.

Got the promo! 6.5% raise, although it'll be more over the next 5 years as the 4 year stock grants (each March) come in at senior size, and the base will continue to go up, etc. Maybe 50% total, long term.