Is it even remotely possible that she did not already know her reputation? She is pretty literally an attention whore, in addition to the normal kind, and this is more of the same. She talks extensively about the efficacy of advertising herself on her Substack.
any other fair ideas
Fine companies a multiple of the wages paid to illegals. This pays for itself. Do it aggressively. Illegals will not want to be here if there is no work for them to do, so they will self-deport.
That is, as always: align incentives. Don't try to make people do what you want. Make people want what you want.
Call airbnb support, get told to fuck yourself, call credit card company, do a charge back, and go back to booking hotels ever after.
to the absurd ("speeding is actually safer because a vehicle that isn't keeping up with traffic causes more accidents when people try to pass").
I have always assumed this is true. The famous graph of it is called the Solomon curve, showing that the lowest rate of accidents occurs slightly over the mean speed of traffic. It's from 1960, so take it with an even larger grain of salt than most studies, but I don't see why it's an "absurd" claim that this is true.
Doing some further research, what I'm seeing is that the rate of accidents is, as per Solomon, lowest at the speed of traffic, but that fatality risk and injury severity if you are in an accident increase with speed. This makes what speed to drive at a non-obvious EV-maximization problem.
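To make that concrete, here's a toy sketch of the trade-off in Python. Everything in it is an illustrative assumption, not fitted data: the U-shaped rate term is just the Solomon-curve shape, and severity scaling with speed squared is a kinetic-energy hand-wave.

```python
# Toy model: accident *rate* is U-shaped in deviation from mean traffic
# speed (the Solomon-curve shape), while *severity* given an accident
# scales roughly with kinetic energy, i.e. speed squared. All constants
# are made up for illustration.

def accident_rate(speed, traffic_speed=70):
    deviation = speed - traffic_speed
    return 1.0 + 0.02 * deviation ** 2  # minimized at the traffic speed

def severity(speed):
    return (speed / 70.0) ** 2  # kinetic energy grows with speed^2

def expected_harm(speed, traffic_speed=70):
    return accident_rate(speed, traffic_speed) * severity(speed)

best = min(range(50, 91), key=expected_harm)
print(best)  # 69 -- slightly *below* the mean traffic speed
```

Under these made-up curves, the severity term drags the optimum a bit below the Solomon minimum, which is exactly why the EV answer isn't obvious.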
It is absolutely plausible that accident rate varies with # of cars passing you (or that you pass). My mental model is that the safe thing is to go the same speed as the cars in your lane. In principle if that were faster than road conditions allow (rainy, curvy, but somehow left lane is still doing 85), it's an unsafe lane - but probably still safer to travel at the speed of those around you.
I'm open to the idea that going at the +10 found in slower lanes is safer than going at the +20 found in the faster lanes. But, I think "going the speed limit is safer, in any lane" is an extraordinary claim requiring extraordinary evidence.
I'm curious, Mottizens: what speed would you drive at in perfect conditions (straight, flat, sunny, minimal traffic), on a 70 mph interstate?
I mostly agree with the broad point, but on a pedantic note - I think you probably mean "LLM or diffusion model"
non-e bike
This is what happens when you have a constitutional right that a sufficient number of states simply choose not to recognize as such; look at how many southern states kept passing more and more onerous abortion restrictions to get around Roe
This comparison irritates and mystifies me.
The right to bear arms is quite directly in 2A:
A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.
But the right to abortion is...nowhere. It's inferred from the right to privacy, which is inferred from due process (5/14A):
No person shall ... be deprived of life, liberty, or property, without due process of law
I'll grant that there's some legal history and subtlety around what counts as an "Arm," but that's a much smaller inferential distance than the above.
Why would "abortion, but only up to a certain point in the growth" be part of...I guess "liberty"? But, "drug legalization" somehow isn't?
In classic Mottian fashion, I'm a high decoupler in general, and on this - I'm personally anti-gun and pro abortion. But, that doesn't change that the legal footing of them is exactly opposite in strength: my desires are not constitutionally protected.
There's a data quality issue in here, and I'm not sure on which side. Scott's "annual deaths" graph shows a sharp uptick for Covid. Yours does not.
There's also the "harvesting" effect - many people who died from Covid did not have long left. I am most interested in what effect Covid has on the 10 year moving average of total deaths.
maybe "continuity" is the word you're looking for?
Do we know how good it is at building coherent multi-scene videos? Can I have the same two people in the same room, from a different angle? Ideally in a long continuous shot, but even after a cutover would be amazing. Otherwise, this is pretty limited in utility for entertainment media - maybe commercials.
But either way, it's enough to be a problem for trusting video. It's like the world envisioned in The Truth Machine, in which everyone tells the truth. Everyone becomes highly trusting, and life is good. Only with AI video-gen, it's inverted: everything could be lies, so no one believes anything, so life is terrible. Fun.
Oops, corrected. 10% is a harder sell, but the general point stands. Knock it back to every-other-week for 5% then.
(Also how the heck can people be making that little?)
Yeah. Slightly less crazy if you look at HCoL, e.g. $95k for California or $141k for San Francisco, but of course then your nanny cost would go up. I suspect $35/hr will do nicely in most of California, but haven't looked into it.
There's a lot of middle ground between "unaffordable except for the hyper rich" and "just skip your starbucks sometimes and you too can have it."
E.g. once a week for four hours is ~50*4*$35 = $7,000/yr - considerably less than many people spend on vacation or dining out. I think Scott's point is more that he was failing to acknowledge that even that level was possible for him. Even if you drop that to once a month, it's still a real quality-of-life change to be able to recharge somehow without the kid as needed.
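For reference, the arithmetic at a few frequencies, as a quick Python sketch (the $35/hr rate and ~50 working weeks/yr are the same assumptions as above; the frequencies are illustrative):

```python
# Yearly cost of 4-hour babysitting sessions at an assumed $35/hr.
HOURLY_RATE = 35
HOURS_PER_SESSION = 4

for label, sessions_per_year in [("weekly", 50),
                                 ("every other week", 25),
                                 ("monthly", 12)]:
    cost = sessions_per_year * HOURS_PER_SESSION * HOURLY_RATE
    print(f"{label}: ${cost:,}/yr")

# weekly: $7,000/yr           (~10% of a $70k income)
# every other week: $3,500/yr (~5%)
# monthly: $1,680/yr
```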
Of course there's something to be said for living near family and not needing to pay for this, but that's a harder option to make possible for many people than budgeting for occasional help.
No. We have an HOA. I have zero complaints about them being too strict. I wish they were slightly stricter. E.g. a neighbor has a giant LED American flag just inside a garage window. This technically isn't an HOA violation because it isn't outdoor lighting.
How about MIT or Caltech?
What'd be really fun is if we could also access performance reviews over time, to better assess job performance rather than interview performance.
I completely agree. This is exactly what I tried to say a couple weeks ago, but better written and less inflammatory. Thanks for taking the time.
You write like you're an AI bull, but your actual case seems bearish (at least compared to the AI 2027 or the Situational Awareness crowd).
I was responding to a particularly bearish comment and didn't need to prove anything so speculative. If someone thinks current-level AI is cool but useless, I don't need to prove that it's going to hit AGI in 2027 to show that they don't have an accurate view of things.
I think this gets at a central way in which I've been unclear/made multiple points.
First, some things that I think but that are not my key point:
1. Reasonably plausible (>25%): AI will be used commonly in sober business workflows within a few years.
2. Not very likely, but still a reasonable thing to discuss (5%): this will take jobs away en masse within a decade, or similarly restructure the economy.
Why not likely: spreadsheets sure didn't. It might take away a smallish number, but technology adoption has always been so slow.
Why reasonable to discuss: this is fundamentally about existing AI tech and sclerotic incentive structures in the corporate world, both of which we know enough about today to meaningfully discuss.
And finally, my key point in this discussion:
3. Baseless science-fiction optimism: extrapolating well past "current tech, well-integrated into workflows" is baseless, "line super-exponential goes up," science-fiction optimism. Possible? I guess, but not even well-founded enough to have meaningful discussion about. Any argument has to boil down to vibes, to how much you believe the increasing benchmarks are meaningful and will continue. E.g., if we throw 50% of GDP at testing the scaling hypothesis, whether it works or not, all we will be able to say (at least for a while, potentially forever) is: huh, interesting, I wonder why.
I'm under the impression that terraforming is much more scifi than most approaches. Is that not the case?
I'd still bet it's easier to achieve than AGI, let alone ASI, but I think it's more in the "speculative sci-fi" bucket with them, not in the "expensive and economically disincentivized" bucket with the "radiation-hardened dome with a year's worth of Soylent powder on Mars" one.
Do you agree that capabilities have progressed a lot in the last few years at a relatively stable and high pace?
Yes and no. Clearly, things are better than even three years ago with the original release of ChatGPT. But, the economic and practical impact is unimpressive. If you subtract out the speculative investment parts, it's almost certainly negative economically.
And look - I love all things tech. I have been a raving enthusiastic nutjob about self-driving cars and VR and - yes - AI for a long time. But, for that very reason, I try to see soberly what actual impact it has. How am I living differently? Am I outsourcing much code or personal email or technical design work to AI? No. Are some friends writing nontrivial code with AI? They say so, and I bet it's somewhat true, but they're not earning more, or having more free time off, or learning more, or getting promoted.
Do you agree that it's blown past most of the predictions by skeptics, often repeatedly and shortly after the predictions have been made?
Again, yes and no. Yes: Scott's bet about image generation. The ability to generate images is incredible! I would have never thought we'd get this far in my lifetime. No: anything sufficient to really transform the world. I have not seen evidence that illustrators etc. are losing their jobs. I would not expect them to, any more than I would have from Photoshop. See also Jevons Paradox.
I think that is the crux of our disagreement: I hear you saying "AI does amazing things people thought it would not be able to do," which I agree with. This is not orthogonal from, but also not super related to my point: claims that AI progress will continue to drastically greater heights (AGI, ASI) are largely (but not entirely) baseless optimism.
Are there even in principle reasons to believe it will plateau before surpassing human level abilities in most non-physical tasks?
Nothing has ever surpassed human level abilities. That gives me a strong prior against anything surpassing human level abilities. Granted, AI is better at SAT problems than many people, but that's not super shocking (Moravec's Paradox).
Are there convincing signs that it's plateauing at all?
The number of people, in my technophilic and affluent social circle, willing to pay even $1 to use AI remains very low. It has been at a level I describe as "cool and impressive, but useless" forever. I will be surprised if it leaves that plateau. Granted, I am cheating by having a metric that looks like x -> x < myNonDisprovableCutoff ? 0 : x, where x is whatever metric the AI community likes at any given point in time, and then pointing out that you're on a flat part of it.
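Spelled out as a minimal Python sketch (the cutoff is the unfalsifiable part, since I get to pick it):

```python
def perceived_usefulness(x, my_non_disprovable_cutoff):
    """x is whatever benchmark the AI community likes at the moment."""
    return 0 if x < my_non_disprovable_cutoff else x

# Any amount of progress below the cutoff still reads as "useless":
print(perceived_usefulness(85, 100))   # 0
print(perceived_usefulness(120, 100))  # 120
```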
If it does plateau is there reason to believe at what ability level it will plateau?
No, and that's exactly my point! AI 2027 says well surely it will plateau many doublings past where it is today. I say that's baseless speculation. Not impossible, just not a sober, well-founded prediction. I'll freely admit p > 0.1% that within a decade I'm saying "wow I sure was super wrong about the big picture. All hail our AI overlords." But at even odds, I'd love to take some bets.
The universality theorems don't say that it's possible with any remotely practical number of weights, even aside from training time. But yes, I do grant that they say it is possible in theory.
To even achieve GPT-2 performance with a basic, non-recurrent neural net, I would not be surprised if you need more weights than there are atoms in the universe, which clearly isn't physically possible. (Ok, you can maybe theoretically have > 1 weight per atom, but s/atom/gluon, or just don't take me super literally on "atom".)
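One way to gesture at that scale, as a back-of-envelope (this counts entries in a brute-force context-to-output lookup table, which illustrates the combinatorics but is not a real lower bound on network width):

```python
import math

# GPT-2's interface: ~50,257 BPE tokens, 1,024-token context window.
VOCAB = 50257
CONTEXT = 1024
ATOMS_LOG10 = 80  # rough order of magnitude for the observable universe

# log10 of the number of possible contexts a lookup table would index.
contexts_log10 = CONTEXT * math.log10(VOCAB)
print(f"~10^{contexts_log10:.0f} possible contexts vs ~10^{ATOMS_LOG10} atoms")
# ~10^4814 possible contexts vs ~10^80 atoms
```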
Could they, in the arid sense that there is some unknown collection of weights that would be capable of outputting tokens that simulate an OpenAI researcher working on novel tasks? Absolutely.
Why so confident? A 10 dimensional best fit line obviously won't work, nor will a vast fully connected neural net - so why should an LLM be capable?
On the plausibility of Mars Bases vs that of AI
Responding to @FeepingCreature from last week:
Out of interest, do you think that a mars base is sci-fi? It's been discussed in science fiction for a long time.
I think any predictions about the future that assume new technology are "science fiction" p much by definition of the genre, and will resemble it for the same reason: it's the same occupation. Sci-fi that isn't just space opera ie. "fantasy in space", is inherently just prognostication with plot. Note stuff like Star Trek predicting mobile phones, or Snowcrash predicting Google Earth: "if you could do it, you would, we just can't yet."
That was a continuation of this discussion in which I say of AI 2027:
Is it possible that AGI happens soon, from LLMs? Sure, grudgingly, I guess. Is it likely? No. Science-fiction raving nonsense. (My favorite genre! Of fiction!)
As to Mars:
Most of what I know here comes from reading Zach Weinersmith (of SMBC)'s A City on Mars. It was wildly pessimistic. For a taste, see Gemini chapter summaries and an answer to:
"Given an enormous budget (10% of global GDP) and current tech, how realistic is a 1 year duration mars base? an indefinite one? what about with highly plausible 2035 tech?"
I agree with the basic take there, both as a summary of the book and as a reflection of my broader (but poorly researched) understanding/intuition of the area: Mars is not practical. We could probably do the 1 year base if we don't mind serious risk of killing the astronauts (which, politically, probably rules it out. Maybe Musk will offer it as a Voluntary Exit Program for soon-to-be-ex X SWEs?)
My main interesting/controversial (?) take: there is an important sense in which Mars bases are much less baseless sci-fi nonsense than AI 2027.
Mars is a question of logistics: on the one hand, building a self-contained, O2-recycling, radiation-hardened, etc., base requires tech we may (?) not quite have yet. On the other hand, it strikes me as closer to refinements of existing tech than to entirely new concepts. Note that "enormous budget" is doing a lot of work here. I am not saying it is practical to expect we will pay to ship all of this to Mars, or risk the lives, just that there is good reason to believe we could.
AI is a question of fundamental possibility: by contrast, with AI, there is no good reason to think we can create AI sufficient to replace OpenAI-grade researchers on foreseeable timelines/tech. Junior SWEs, maybe, but it's not even clear they're on average positive-value beyond the investment in their future (see my previous rant about firing one of ours).
I don't understand how anyone can in good faith believe that even with an arbitrary amount of effort and funding, AGI, let alone ASI, is coming in the next few years. Any projection out decades is almost definitionally in the realm of speculative science-fiction here. Even mundane tech can't be predicted decades out, and AI has higher ceilings/variance than most things.
And yet, I am sensitive to my use of the phrase "I don't understand." People often unwittingly use it intending to mean "I am sure I understand." For example: "I don't understand how $OTHER_PARTY can think $THING." This is intended to convey "$OTHER_PARTY thinks $THING because they are evil/nazis/stupid/brainwashed." But, the truth of their cognitive state is closer to the literal usage: they do not understand.
So, in largely the literal sense of the phrase: I do not understand the belief in and fear of AI progress I see around me, in people I largely respect on both politics and engineering.
I love TheMotte, but it theoretically could be replaced. There are lots of smart people in the world who you can have smart people discussions with. But there's only one 4chan.
Wow, that is a fascinating take. Say more? How could I find worthwhile engagement with 4chan? I haven't tried almost at all in over a decade, but my experience with it was "garbage shit posts," whereas I find TheMotte to be a singular place on the internet. (Even the UI is very nearly ideal!)
Plausible to happen at all: intelligence can be created - humans exist. It doesn't follow that it can be created from transistors, or LLMs, or soon - but these are all plausible, i.e. p > epsilon. They are all consistent with basic limits on physics and information theory afaik.
Science-fiction raving nonsense: but, there is absolutely insufficient reason to be confident they are going to happen in the next few years, or even the next few decades. Such beliefs are better grounded than religion, but unclear to me if closer to that or to hard science. They most resemble speculative science fiction, which has discussed AI for decades.
Probability is in the mind: I disagree. Probability is a concrete mathematical concept, used in many mundane contexts every day. Even the rat sense of the word ("90% confident that X") is reasonably concrete: a person (or process or LLM) with a well-calibrated (high correlation) relationship between stated probabilities and occurrence frequency should be trusted more on further probabilities.
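As a sketch of what "well-calibrated" means operationally (the data here is made up; only the shape of the check matters):

```python
from collections import defaultdict

def calibration_by_bucket(predictions):
    """predictions: (stated_probability, actually_happened) pairs."""
    buckets = defaultdict(list)
    for p, outcome in predictions:
        buckets[round(p, 1)].append(outcome)
    # Observed frequency of truth per stated-probability bucket.
    return {b: sum(o) / len(o) for b, o in sorted(buckets.items())}

# A well-calibrated forecaster's "90%" claims come true ~90% of the time.
print(calibration_by_bucket([(0.9, True), (0.9, True), (0.9, True),
                             (0.9, False), (0.1, False), (0.1, True)]))
# {0.1: 0.5, 0.9: 0.75} -- noisy with few samples; more data sharpens it
```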
Yeah it didn't help. I just try to remind myself that it's OK for someone to be great at some things and hilariously, hopelessly naive about others.
Is it possible that AGI happens soon, from LLMs? Sure, grudgingly, I guess. Is it likely? No. Science-fiction raving nonsense. (My favorite genre! Of fiction!)
Scott's claim, that AGI not happening soon is implausible because too many things would have to go wrong, is so epistemically offensive to me. The null hypothesis really can't be "exponential growth continues for another n doubling periods." C'mon.
No. I'm a high decoupler - I do in fact value some of her writing, including on dating. And presumably she is too. But, I still know her reputation, and I don't even have twitter. Surely she does too.