That has happened a few times, but has not yet deterred him. He does generally accompany his "I asked $model and it says" statements with an acknowledgement that one needs to check because it might be hallucinating, but so far that hasn't changed his habit of asking AI first on every single topic.
Yes, I won't tell you not to have Jose from the Home Depot parking lot/Oaxaca put a fart fan in your bathroom instead of having an HVAC company subcontract an electrician, a roofer, and a drywall contractor. But for a major job there is a reason you want a licensed contractor. If you have drainage problems, need an entire HVAC system replaced, or need a new circuit on your panel, and you aren't comfortable with DIY, you need somebody with experience in that particular trade.
Jose can replace a p-trap. I'm not saying every job that requires a license needs to require a license. But licenses exist for a reason.
It's certainly true that human output can be incorrect. But it's incorrect at a much lower rate than an LLM is, assuming you ask a human who knows the topic. But that aside, it seems to me like "have you asked AI" is the 2025 equivalent of "let me Google that for you", and is just as annoying as that was. If I trusted an AI to give me a good answer I would just ask it, I don't need someone else to remind me that it exists.
If you're driving to avoid a 10 minute walk, it better be December in Minnesota.
Or August in Texas.
So I've probably been "your boss" to someone a couple of times. There are essentially three stages:
- LLMs don't really work
- LLMs work amazingly; you should use them for everything
- I've outsourced too much of my creative thought and problem-solving to LLMs, and need to come up with my own answer first before asking it anything.
In October 2025, most people should be on step 2 or 3. If you have a ton of coworkers on step 1, your boss has a responsibility to model being on step 2.
You can perhaps get him to lay off of you, individually, by explaining that you're on step 3. The people who remain on step 1 are being stupid and inefficient; I lost patience a long time ago with people who come to me with questions whose answers I could obtain in seconds. The ones stuck on step 2 are being one-shotted and need to get a grip.
Another tactic: point out that when you send people AI-generated content, or only ask whether they've asked AI instead of answering their question, you're implicitly not respecting their time. If someone is communicating with you human-to-human and you dismiss their question or put an LLM between you, it's a sign of disdain.
Ironically, I'm dealing with LLMs being integrated into our career management platform and having the same problem in reverse. My subordinates are writing their reviews for themselves and each other with AI. I'm spending hours per month having to comb through this verbose slop, synthesize it with reality, and create thoughtful, specific feedback for everyone. It's pretty fucking lame.
I have also seen a lot of the managers at my corporate job become AI-obsessed. If you figure out how to make it stop, let me know. It's incredibly frustrating, especially when they double and triple your output goals by claiming AI makes everyone 2 or 3x as efficient...
Depends on exactly where you live. Bear in mind that for many Americans it's forty degrees Celsius outside for months at a time, so 'walking' is not quite the same thing as in Europe.
I can walk to two grocery stores near me. I grew up being able to bike to a grocery store and a convenience store- and I see the neighborhood preteens biking to QT for slurpees all the time. But most Americans drive to the store. So it's probably partly cultural.
I don't know if this counts as "tactful", but I got my boss to stop doing that by repeatedly pointing out errors in the LLM's output. After a few months, he got tired of being told that whatever source file it was talking about didn't exist, and now he only posts LLM output after verifying it, which is much less annoying.
While Houston's lack of zoning laws has arguably been successful at keeping rent reasonable, it does get lots of criticism for its urban design and walkability. Amusingly, people do cite its (non-housing-price) approach to homelessness as working better than most.
Boston, NYC, DC, Miami, Austin, Phoenix, LA, SD, SF, Portland, Seattle problem.
Austin has built so much housing that it's the only metro to have seen rents decline in recent years.
I am unconvinced of that. First, the hard problem of consciousness is much more a thing among philosophers than among the relevant domain experts (neuroscientists).
So, do you deny that the hard problem exists and is indeed a problem? Because from a straightforward logical point of view, it's one of the most impossible gaps for materialism to cover. How do we perceive or think at all, if we're fully material?
Secondly, even if I grant you that people have souls which give them qualia, unlikely as that seems, there is no reason to suppose that they are forever beyond the reach of physics.
There is even less reason to think that "souls" or a non-material substrate is in reach of our physics. Also, even if we could find a definitive physical cause for consciousness, that still would not mean materialism is true! As David Bentley Hart says...
One of the deep prejudices that the age of mechanism instilled in our culture, and that infects our religious and materialist fundamentalisms alike, is a version of the so-called genetic fallacy: to wit, the mistake of thinking that to have described a thing’s material history or physical origins is to have explained that thing exhaustively. We tend to presume that if one can discover the temporally prior physical causes of some object—the world, an organism, a behavior, a religion, a mental event, an experience, or anything else—one has thereby eliminated all other possible causal explanations of that object. But this is a principle that is true only if materialism is true, and materialism is true only if this principle is true, and logical circles should not set the rules for our thinking.
Paging @FCfromSSC if he wants to go more deeply into the arguments against materialism. Here is an example of him arguing about free will, for instance.
...Unless he knows something we don't.
Of course going off priors we'll discover some drug habits instead.
Most Germanic European countries are very conformist societies where state force is used against those who buck the trend. They're just not enforcing the values that people who complain about 'conformity' tend to dislike; they're enforcing a different set.
If there's a country where the average person has more freedom than the US, it's probably some Latin American country where the government has to pick and choose what it uses its state capacity on.
I own a goddamn OLED 4k HDR high refresh rate TV
Which you use as a monitor, right? How are you getting on with that? I nearly copied you but balked at the price.
Sure, but his speculations on the antichrist don't correspond well to actual Christian apocalyptic prophecy. I can see the guy being Methodist or Episcopalian or something where you believe Jesus Christ was God, died for our sins, and was resurrected, but not necessarily a whole lot else. On the other hand, he's pretty clearly not Catholic or Orthodox, and the kind of Protestants who take this stuff literally won't have him.
Yeah, it's very true... not sure what Thiel's endgame is. He's quite obviously very Straussian, so he could just have layers of obfuscation around his "real" plan; who knows.
I will grant you that once you have accepted that the AI safety people are just a silly doomsday cult, you can compare and contrast them with other silly doomsday cults such as early Christianity.
Ahh, so from this statement, if I'm being honest, you come off as having these views and sort of faking incredulity, when in reality you simply have disdain for Christianity and aren't really interested in seriously understanding Thiel's points.
Still, I think that if the antichrist is just a metaphor, he goes into incredible detail about the specifics.
Thiel is positing potential ways in which the antichrist could manifest in our world, not giving actual specifics; he's exploring the problem. Again, I'm not a Thiel-stan and I don't agree with his theology, but given the follow-up to this sentence, you're very much pattern-matching as a snarky atheist here lol. I'm not surprised you're not engaging with his metaphor, because from my perspective you're basically reading "antichrist" and going "oh, this guy is just another religious idiot, anything he says must be bunk."
For instance, Jesus does indeed go into many specifics in his parables, calling out specific groups like the Pharisees, Samaritans, etc etc. For the parable of the sower, He even goes into specifics of soil quality! Metaphors often employ specifics that are relevant to the audience.
Technology stagnating will not mean the end of technological society. The fall of the Western Roman Empire did not mean that people went back to the Bronze Age, after all. If technology stagnates to the point where kids use the same computers their parents used as kids, that is bad news for investors like Thiel, who depend on exponential growth (which in reality is often really an S-curve whose tail you have not yet reached).
The general argument from stagnationists is something like this: technological progress and increasing wealth keep the hoi polloi happy and sedate; if they stop getting their increase in goodies and wealth, they will become angry and eventually revolt. This revolt will effectively destroy technological society, which will take a while to build back up, if it ever does.
I'm not particularly convinced by it, but there is a logic there.
The term usually includes mainline and most disorganized protestants, who may or may not have bishops.
Sturgeon's law really does apply to every genre of media in existence. AI-generated videos are probably 99.9% slop, because of the ease of generation compared to the old way of filming and recording, and because the average user has negative taste. It'll only get better; those lambasting it for its lack of consistency, weird physics, poor audio, etc. will be in for a bad time when it's all fixed. Who am I kidding? They'll probably retreat to ever more nebulous concerns such as "effort" or "artistic intention".
Yeah, as I said in my comment below the OP here is doing a sort of maximally uncharitable reading. From subsequent responses, he clearly has contempt for Christianity and Thiel so, not shocking.
In a mathematical sense you can't simultaneously maximize two preferences unless they have a perfect correlation of 1.
Suppose we give this person a choice. Option 1 will make others very happy and well off and prosperous. Very very happy. It's basically a lifetime worth of doing good in the world. But will cause this person to lose all of their wisdom. They will be unwise and make bad decisions the rest of their life. The total good from this one decision is enough to make up for it, but they will sacrifice their wisdom.
Option 2 will not make people happy, but will make the person very wise in the future. They can spend the rest of their life making good decisions and making people happier via normal means, and if you add it all up it's almost as large as the amount of good they could have done from Option 1, but not quite. But they will be wise and have wisdom.
The kindest most loving thing to others is to choose option 1. The most hedonic desire for a person who values wisdom in its own right in addition to loving others is Option 2. Depending on how you balance the numbers, you could scale how good Option 1 is in order to equal this out against any preference strength.
U(A) = a·X_1 + b·Y_1
U(B) = a·X_2 + b·Y_2
Where a and b are the coefficients of preference for loving others vs. loving wisdom, and X_i and Y_i are the amount of good done and wisdom had in each scenario. For any finite a, b ≠ 0, either utility can be larger, depending on the payoffs. But also, for any finite a, b ≠ 0, you can't really say both preferences have been "maximized", because one trades off against the other.
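To make the tradeoff concrete, here is a minimal sketch with made-up payoff numbers (the 100/0 and 90/50 values are purely illustrative, not taken from the argument above):

```python
# Toy illustration: with preference weights a (loving others) and
# b (loving wisdom), no single option maximizes both terms at once.
def utility(a, b, good, wisdom):
    """Linear utility U = a*good + b*wisdom."""
    return a * good + b * wisdom

# Hypothetical payoffs: Option 1 does more total good but costs all wisdom;
# Option 2 does slightly less good but preserves wisdom.
option1 = {"good": 100, "wisdom": 0}
option2 = {"good": 90, "wisdom": 50}

# Someone who weighs only good done (b = 0) prefers Option 1...
assert utility(1.0, 0.0, **option1) > utility(1.0, 0.0, **option2)

# ...but putting any substantial weight on wisdom flips the ranking.
assert utility(1.0, 0.5, **option2) > utility(1.0, 0.5, **option1)
```

Either option can come out ahead depending on a and b, which is the point: the two preferences can each be traded against the other, so neither is "maximized" in isolation.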
I watch the majority of movies and shows on my phone. I'm also not a heathen, so I use decent enough headphones or earphones. The only time my phone speaker gets any use is when I'm in the shower.
And it's perfectly fine. I own a goddamn OLED 4k HDR high refresh rate TV; it's not like I don't have options. My phone also has a large, high-res, HDR HRR display, and - when it's held up at a comfortable distance - it takes up enough of my visual field to give a comparable experience. And the taller aspect ratio means less letterboxing in landscape when watching things shot wider than 16:9.
I'm not missing out on anything, and the convenience alone is well worth it.
He already has the downside risk of losing his job.
That’s not really a downside risk (i.e., a risk of negative payoff), that’s just a risk of getting zero payoff.
Yes, sure, fine, if you account for opportunity costs, then losing a CEO job might be net negative (depending on base salary, length of and compensation during a post-termination non-compete period, if any, etc.—and, of course, on the value of the next-best alternative to being CEO).
But there is still a principal-agent problem here. The shareholders want (or should want, under homo economicus assumptions*) the CEO to be an agent who only takes +EV actions, where the “V” in “EV” is “market cap”. The more diversified the CEO personally is, the less he will personally care about declines in the value of the company’s equity—sure, if he makes some decisions which go south, then his equity compensation from this job might only be good for toilet paper, but if he’s already amassed a generational fortune and socked it away in a well-diversified portfolio, then a bet which is zero or negative expected value for the shareholders might very well be positive expected utility for the CEO. It’s just like how you’re much more inclined to go for a YOLO all-in with a questionable hand in poker when playing with Monopoly money than when playing with real money.
*There are some interesting ways in which homo economicus incentives break down when the shareholders themselves are all massively diversified; in the extreme case (which may no longer be all that extreme, now that everyone and his mum has piled into market cap-weighted index funds), everyone has the exact same equity portfolio, so all shareholders of company A are also shareholders of all of its competitors (B, C, D …). In such a world, it no longer makes sense for company A’s CEO to prioritize increasing market cap by any means; if he increases A’s market cap at the expense of B’s, the shareholders are no better off! But that’s a story for another time.
And that's why people hire general laborers or do it themselves. If you have to hire an electrician, a plumber, a roofer, and a drywall contractor to put in a simple bathroom fan, it's going to cost you thousands.
You can reduce the number/duration of total car trips if you manage to densify the other infrastructure too: if your towering apartments are walking distance (within a block or two?) of the grocery store, bar, gym, or employer. Probably not to zero, but it'd help.