Is it only Zorba that can set up the bot? I noticed that Transnational Thursday threads are automated.

Good post.

I do think your three questions are a little incomplete.

  1. Will we keep making consistent AI progress?
  2. Does sufficient AI progress translate to sufficient general progress?
  3. Will/can we align sufficiently-progressed AIs so they don’t turn us all into paperclips?
  4. How will aligned AIs be distributed amongst competing interests?

Even if (1) we stop making progress at 2x human, (2) that progress is limited to domains AI is already decent at, and (3) our new flood of intelligent, inorganic service workers is perfectly aligned…we can still get a wide range of results. My interests are not your interests are not Elon Musk’s interests. Maybe we agree 99% on things like “scarcity is bad,” but we aren’t going to be in lockstep. There has to be a negotiation step where we figure out how much our lead is worth. In a hard takeoff, it’s worth everything. In a softer one, it could buy nothing at all before rivals catch up.

In my opinion, the most likely branches include limited adoption: most competitors rejecting or failing to adopt an effective technology, giving a large advantage to a handful of more risk-tolerant ones. I find this most concerning for defense, a fundamentally conservative industry with some of the worst consequences for competition. The most risk-tolerant governments are not the ones I want to see gaining an edge!

This is kind of the crux of the AI 2027 project Scott shared recently. Not coincidentally, it also claims to have good answers to (1), though I didn’t really dive into their reasoning. I’m curious about your thoughts on Kokotajlo’s scenario.

Oh, yeah. Seems like people want small low-power computers to drive widgets rather than make an IBM PC from 1986 on a cocktail napkin.

Ideally all of this could be done end-to-end with CAD software, or by writing out a .spec file and uploading it to a magic box or web site, and then a microwave dings and it's ready. Or it arrives in the mail. Maybe one day...

Maybe it's possible to 3D print ghetto PCBs, print the traces in conductive filament, and snap them into place?

In brief, you're probably fine where you're at. Keep the 80/20 rule in mind: you're already getting a lot of value out of the "basic bitch" level. If upgrading your tool doesn't directly solve a specific problem you're having, I wouldn't. There are infinity arguments for why upgrading could potentially allow you to do more things: if you don't have a plan for exactly what those more things are and how an upgraded LLM fits into that process, don't do it.

So? Those aren’t the people making long winded analyses of polyamory and the efficacy or lack thereof on the Motte dot org.

As someone that's worked for a FAANG for just about two decades, yes, everything about the above. The tech workers that most want to unionize are the ones that most want to wield that union in the service of extreme progressivism. The Venn diagram of tech workers who have their pronouns in their signature and have at least once in their lives shouted "from the river to the sea" unironically, and those that are organizing unionization efforts in tech is indistinguishable from a circle.

It's possible that unionization could make life in software consultancy sweatshops like Deloitte somewhat better. As a FAANG employee, I do not feel that anyone on the tech side is being taken advantage of, and would much prefer fewer protections for the employees that cannot stop their fucked up psychological problems or childish entitlement to a job from causing constant strife at work.

I've been an artist in the video games world for more than a decade, but almost all of that time was spent on projects that used fairly lo-fi art, and early on I moved to a stripped-down modelling program. The project I'm currently on is wrapping up, and I'm not sure I'll have a job once it does, so I want to brush up on my generalist skills in preparation for a job hunt. I've generally made guns and tanks and spaceships in my free time, but I've always wanted to get deeper into the tech, simply because it allows you to make cooler things; a gun is great, but a gun held by a character is better. A mech is great, but a mech with full texture and rigging stomping around an environment is better.

I've tried to learn Blender before, and it was going somewhat well, but I ended up going back to my old low-key modelling package for work, which killed muscle memory, and I never got back to it. Also, I don't think my approach was nearly organized enough to handle the complexity of Blender. This time, I'm taking a more serious approach:

  • First, I'm burning my ships. I'm making a personal commitment to never touch my old modelling package again. Blender or bust.

  • Second, I'm being much more structured about my approach to learning. There are a bunch of tutorials online for learning Blender, but rather than just working through them, I've set up a Google Doc and am taking copious notes on hotkeys, techniques and so forth, essentially writing my own manual. This helps a ton both with retention and with having a quick reference when I forget things.

  • Third, there's a ton of "quick tip" content online, showing random disconnected features of the program. I used to see this stuff and think "wow, that's neat", and then forget it. Currently, I'm collecting those links in the same doc, with the goal of working them into my notes and workflow when they become relevant.

Last time I tried learning Blender, the goal was to get to modelling as quickly as possible. This time I'm aiming more to learn the deeper toolset; it's obvious to me that I've wasted a huge amount of time doing things the hard way, when tools are available to make those things much easier, and that's bad both in a professional sense and an artistic one.

My current project is a gun; I've got a pretty good low-poly model, but I want a high-poly model, normal bake, textures and simple function animations, the standard game-ready pipeline. I have a bunch of spaceships I'd like to finish the same way, and I've got some characters/environments/action scenes I'd like to try as well, plus a ton of other stuff; I've got a huge backlog of projects and ideas that I've never had the chops to really execute on.

This tutorial is a really good introduction to the modelling end of things. It's a bit of a slog, but it handles setting up hotkeys and a number of addons for a more efficient workflow, goes through the basic modelling tools, and dips into materials, modifiers, a touch of simulation, and rendering. Be warned, some of the plugins he recommends (Jmesh in particular) currently don't work, and in some cases you'll need to figure out how to work around them.

This tutorial is a good follow-up, and is the one I'm currently working through. The modelling is more intense, and Jmesh being broken meant I needed to take a detour to learn some other methods for handling circular arrays, but it demonstrates a ton of useful techniques and approaches to a lot of standard modelling problems, and gives good working practice with the modifier system.

I've also been working through overviews of the shader and geometry node types, in preparation for texturing and environment generation. I've dipped a bit into some of the simple rigging, but want to tackle true high-poly > normal maps > texturing > rendering before diving into that.

Overall, the thing that makes me excited about Blender is seeing examples of how it seems to bust the general trend of overspecialization. Videos like this one are pretty astonishing in terms of what is actually possible with a deeper understanding of the tools. The multiplication of effort is real, and I've got a wealth of experience with how limiting it is to be stuck with the old, slow way of doing things. An obvious example: I've spent probably hundreds of hours modelling rock over the years for various environments. I'm pretty sure I can spend maybe twenty hours learning the geometry nodes system, and then produce rock environments with ten times the fidelity in a small fraction of the time. I'm tired of doing it the stupid way.

Also, if you're comfortable posting any of your art, I'd love to see it!

@RandomRanger

Re: Schizo tariff policy

You argue that schizo tariff policies are not worse than controlled or orderly economic wrecking. What you completely fail to mention is why people would prefer orderly wrecking over schizo policy.

In short, making decisions at the levels Trump and many others do involves a massive investment of time and resources. We're not talking about needing a day to make a decision; companies need substantial warning so they can do analyses, inform relevant parties of upcoming changes, etc. One of the benefits of the glacial rate of change in the legal system, for example, is that once you know the law, you don't need to constantly keep a watch on it.

For example, consider a rational Trump supporter who agrees with the stated rhetoric that the tariffs will curb income taxes. Which world would they prefer? The one in which tariffs go on and off based on Trump's whims, or one where Trump gets Congress to put tariffs into place that reduce taxes for a set period of time going forward? I argue they would overwhelmingly support the latter, not the former.

Or consider the company that might support re-shoring factories based on tariffs. They can't build factories in days or weeks; it takes time to plan, acquire raw materials, and get the building(s) constructed. This is a process that needs multi-year guarantees, not the president's favor until his attention drifts to the next Owning The Libs moment.

And then there's the political ramifications. If Trump's policies flip on a dime, then there's a clear avenue for corruption - just say the right things and you get an exemption for your company. Most of us do not want policy done based on the president choosing winners or losers, and I suspect you don't either. And if you are going to cite some Democrats being corrupt, I will pre-empt you by saying that I will likely agree and say it shouldn't be done.

You look at the effects of various policies like DEI, mass migration, etc. and you conclude that since they caused more problems than Trump's current tariff flip-flopping, obviously the latter is not as harmful. But intent matters. Side-effects and second-order effects matter. You can appeal against DEI and immigration in formal ways, even if you lose that debate. The US carves out numerous formal ways to lodge your complaints. You can even convince enough people to support your views and hand the winning side of a debate a legal loss by legislating their policy out of existence. You can't, however, appeal schizo policy in anything like as effective or fair a way.

Oh, and then there's the fact that this is just months into Trump II. We're still not fully seeing the impact these tariffs are going to have. Despite recent proclamations of those tariffs being dropped, they're still higher than they were before Trump took office. People are going to suffer under that policy when it was never necessary. And who knows what else Trump will come up with going forward?

Then there's the impact on politics as a whole. An expression I hear from people on the right is that there's currently a stupid (Republican) party and an evil (Democrat) party in the US. If there was any hope of doing better, then it lay in convincing the stupid party to do better so the evil one could be defeated. Trump was that chance, and he just turned the party stupider and squandered a great deal of political capital in the process. It's not impossible to come back from this, but I presume you'd rather be in Nazi Germany's strategic situation after D-Day than in the strategic situation as the Soviets are encircling Berlin.

For (2), while we've seen some improvements, it's definitely not proven that current approaches will enable significant physical-world interaction. A world where AI does all the remote desk jobs but humans are still pulling cables is not out of the realm of possibility.

"The current environment of mania and hype shares a lot of traits in common with previous tech bubbles that ultimately failed to really pan out for one reason or another, like crypto, blockchain, NFTs, VR, Metaverses, augmented reality, 3D printing, etc."

Definitely does, but some hype bubbles do pan out (smartphone, social media, internet/ecommerce [with a bust along the way], arguably solar power).

Seconding @FtttG here: the only person who posts about Indian caste dynamics is yours truly, and I only do so to provide context for Indian issues. I posted recently because of the likelihood of war, and before that, due to the caste census, an evil that most should be aware of.

My posting history is sparse. I did not post at all on the CW thread until a few months ago, as I mostly used the Wednesday threads to document my life. The caste posting here is pretty much non-existent. This place has a Silicon Valley bent, nothing beyond that. The evils we see with Bio Leninism and the ever-expanding centralising powers of states are meant to be warning signs about what heterogeneity looks like once it is accepted as the "new normal". A lot of the Irish CW posting, otoh, sheds light on issues they face that do not get much attention despite the world being oversaturated with "news media".

I keep my India posting to a minimum, and I don't post as much general culture war stuff because I am not on Twitter as much nowadays; Twitter is how I ended up posting about James Lindsay's meltdown and how stupid the woke right is.

Themotte is still mostly Western, specifically US-centric, which is a good thing given the US is the most important nation of our times and the place where the largest number of users reside.

Do you believe they would still make this refusal if it was any other race?

This is a gnarly interview. My respect for DAs just went way up.

Is this a standard type of interview (leet-code for DAs kind of thing)? Or just something that this particular jurisdiction does?

A few years ago I had a bit of a 3D modelling streak, but I mostly used Maya thanks to my school getting the Autodesk licenses. I really enjoyed the actual modelling over things like texturing or rigging, and never really learned them. What's motivating you to learn Blender? I was mostly into it to make things I thought were cool (guns, tanks). I've been halfheartedly pondering downloading Blender over the summer to try and make some sort of short action scene. What's your current project?

She insists that the number of women who enjoy sex for sex's sake and will not be damaged by having sex without an emotional bond is nearly an empty set. I... have enough experience to believe that is not the case. I absolutely believe Women Are Different and that most women need/desire an emotional bond in a way men generally do not. But there totally are women who enjoy being sluts, and I don't think that number is so very small (though they may come to regret the physical and social costs of their behavior later).

This is unrelated to the broader discussion, but I have a feeling that the women who enjoy being sluts enjoy being sluts in the moment. I believe that over time, they experience a sort of unconscious strain that builds up with each casual sexual encounter, which worsens their mental health, separately from the regret they feel due to the aforementioned physical and social costs. I'm curious to know your opinions on this view? I don't really have much first-hand experience to verify it.

Predictions of AI progress hinge on two questions that nobody has convincing answers for

There are three big categories of questions that make up most of the debates about the future of AI:

  1. Can you take the last few years of AI progress, put it on a logarithmic graph, and then make a straight line that zooms off to infinity? Will that prediction pan out?
  2. Does sufficient intelligence give an agent the capability to solve for all other bottlenecks, even ones that aren’t intelligence-related?
  3. Will/can we align ultrapowerful AIs so they don’t turn us all into paperclips?

If all 3 questions resolve to “yes”, then we’re on the brink of heaven on Earth.

If questions 1 and 2 resolve to “yes” but question 3 resolves to “no”, then we’re on the brink of our own destruction.

If question 1 resolves to “yes” but question 2 resolves to “no”, then question 3 doesn’t matter and AI will be huge in some areas but essentially worthless in others.

If question 1 resolves to “no”, then neither of the other questions matter and this debate is all a huge waste of time.

My personal estimation of how these will resolve is the following:

  • Question 1: 20% chance for “yes”, 80% chance for “no”
  • Question 2: 40% chance for “yes”, 60% chance for “no”
  • Question 3: 25% chance for “yes”, 75% chance for “no”

So my outcomes are the following (the arithmetic is sketched just after this list):

  • 80% chance that AI progress plateaus and thus it will be nowhere near as big as the current crop of hypemongers claim it to be. There will still be a sizeable impact from fully deploying tools that exist at the current level, but it will resemble iterative advancements rather than a huge revolution. On the low end it could be about as important as social media or mobile phones, while on the high end it could be about as important as the creation of the internet.
  • 12% chance that AI scales but intelligence slams into other bottlenecks. In this case AI could be as big as electricity on the low end, and as big as the Industrial Revolution on the high end.
  • 2% chance that AI scales, intelligence solves all bottlenecks, and we align it. We get the best of all worlds, and everything is wonderful.
  • 6% chance that AI scales, intelligence solves all bottlenecks, and we don’t align it. RIP.
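
For anyone who wants to check the arithmetic, here is a minimal sketch of how the four outcome figures fall out of the three per-question estimates above. The variable names are mine, purely for illustration; the numbers are the ones just given.

    # Per-question estimates from the list above.
    p_scale = 0.20    # Question 1: AI keeps making consistent progress
    p_solves = 0.40   # Question 2: intelligence solves the other bottlenecks
    p_aligned = 0.25  # Question 3: we align it

    # Questions 2 and 3 only matter on the branch where question 1 is "yes".
    p_plateau = 1 - p_scale                           # 0.80
    p_bottlenecked = p_scale * (1 - p_solves)         # 0.12
    p_utopia = p_scale * p_solves * p_aligned         # 0.02
    p_doom = p_scale * p_solves * (1 - p_aligned)     # 0.06

    # The four branches are exhaustive, so they sum to 1.
    assert abs((p_plateau + p_bottlenecked + p_utopia + p_doom) - 1.0) < 1e-9
    print(round(p_plateau, 2), round(p_bottlenecked, 2),
          round(p_utopia, 2), round(p_doom, 2))       # 0.8 0.12 0.02 0.06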

However, none of my priors here are deeply held. A lot of what I’ve read on LessWrong, /r/slatestarcodex, Substack, The Motte, and other sites focuses on question 3. I think the doomer arguments on this point are relatively convincing: if we develop AI in short order that’s massively more intelligent and powerful than humans, we’d have a hard time controlling it, and there’s a good chance it would end badly. But instead of discussing question 3, I want to talk about questions 1 and 2, which are prerequisites for question 3 being relevant, and which I don’t think people have very good answers for despite often projecting an air of confidence.

Much of the rationalist writing I’ve seen on the topic of AI has been implicitly doing a bit of a motte-and-bailey when it comes to the confidence of their predictions. They’ll often write in confident prose and include dates and specific details, but then they’ll retreat a bit by saying the future is uncertain, that the stories are just vignettes and that the dates don’t mean anything concrete. Some do the old high school policy debate tactic of racing the impacts to human extinction and using that as their cudgel to justify their alarmism, circumventing the issue of a technically-low-probability-event by appealing to the finality of all humans dying. Taking an event with a small chance of happening and multiplying by impacts that have a weight of infinity means the entire equation is infinity, after all! I don’t like this as a general rule (high school debate is terrible for actually discerning truth), and the trick loses its power when the writers are explicitly willing to increase the risk of extinction from other events like a nuclear exchange.
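
Written out as an expected-value calculation (a sketch in my own notation, not the author's), the move being criticized is:

    % For any nonzero probability p of extinction, an infinite disvalue
    % swamps every finite term in the expected loss.
    \mathbb{E}[\text{loss}] = p \cdot \infty + (1 - p) \cdot L_{\text{finite}} = \infty
    \qquad \text{for any } p > 0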

Most of the discussions on questions 1 and 2 that I’ve read so far have either only touched on minor details, or have been very handwavey. I think the reason they’ve been handwavey is that there really aren’t that many convincing answers people can give in either direction on the core issues, so the writers either discuss side issues and pretend they’ve made profound progress addressing things, or they just gloss over things with arguments they imply are “obvious”, when in reality they are little more than flimsy priors.

Question 1: Will we keep making consistent AI progress?

Here’s question 1 restated:

Can you take the last few years of AI progress, put it on a logarithmic graph, and then make a straight line that zooms off to infinity? Will that prediction pan out?

I was being a bit cheeky here with the “straight line that zooms off to infinity” bit. AI doesn’t need to get to infinity, it just needs to zoom off to some point where it’s sufficiently advanced that it becomes Very Good. It would need to have the capacity to automate everything any remote worker could do, although this wouldn’t necessarily need to be actualized for this question to resolve to “yes”.

Some arguments for the bull case of AI continuing its exponential progress:

  • AIs have advanced quite far in the past 2.5 years or so, and thus far haven’t given unambiguous signs that they’re slowing down. Tons of investment and talent is still flowing into the field.
  • AIs are presumed to create feedback loops that make further advances easier (recursive self-improvement).
  • US competition with China will make AI a priority for national governments. There’s even a chance that Europe could be relevant, which could lead to 3-way competition.
  • Humans exist, so theoretically there shouldn’t be any sort of hard-blockers for machines to have at least human-level intelligence. Then we could set up datacenters with a billion of them and throw them at any problem.

Some arguments for the bear case:

  • Drawing straight lines on graphs is a pretty common thing that humans like to do, but almost never pans out that well. The vast majority of scientific progress looks like a sigmoid curve (S curve), where progress is slow for a long time, then has an explosion of productivity, before leveling off. Straight lines especially from late 2022 are highly cherrypicked. Remember when Deep Blue beat the best human in chess… in 1997? And then nothing much happened with AI for decades other than quiet progress and competing on a few other random games (Jeopardy, Go, Dota). If we draw our lines from a 1997 baseline, it took 25 years to go from computers beat humans at chess → broadly useable chatbots. If it takes another 25 years to go from chatbots → next big thing, then all the current timelines ought to be thrown out the window.
  • Current progress rates depend on investment primarily from VCs and large corporations, but such money can be notoriously fickle. Eventually they’ll demand returns, and there’s not nearly enough revenue now to sustain current burn rates in the long-term or even medium-term.
  • Advances from one lab are pretty easily copied to others, making progress fairly even across the board. There’s several labs basically tied for “cutting-edge”, and second-tier labs are only 6-18 months behind. This has often been referred to as a “lack of moat” and intrinsically exacerbates the coordination failure. Humanity as a whole wants good AI, but the attempts are split several ways and don’t individually have the ability to capture profit to sustain high burn rates when cheaper or free alternatives exist.
  • The current environment of mania and hype shares a lot of traits in common with previous tech bubbles that ultimately failed to really pan out for one reason or another, like crypto, blockchain, NFTs, VR, Metaverses, augmented reality, 3D printing, etc.
  • There’s a debate about whether current approaches are actually “reasoning” as a human would, or if they’re just glorified autocomplete (“stochastic parrots”). There’s a chance our current approach could lead to a total dead-end. At the very least we’re very far from how humans reason. Humans have difficulty retaining specific facts, but are relatively good at reasoning across disciplines. Conversely, AI can store terabytes of facts perfectly until the hardware disintegrates, but is quite bad at cross-applying knowledge. Even average chatbots know thousands or millions of times more facts than the average human, and yet nobody can really point to any major connections AIs have uncovered, while humans do so regularly. An example is Raynaud disease and fish oil: https://x.com/dwarkesh_sp/status/1727018978420433286

While we could have lengthy discussions about each of these points, none of them actually matter that much compared to the viability of the tech. Whether the tech is scientifically achievable in short timeframes is the ground truth that overwhelmingly impacts all other considerations, and the majority of the points I’ve listed in this section only matter if intelligence scaling is at some arbitrary “moderate” level of difficulty. Take VC/R&D funding levels, for instance. If intelligence is a problem that could be solved with “several” years of research, then funding running out in 2 years vs 5 could be decisive. On the other hand, if intelligence scaling is an “easy” level of difficulty from our current technological baseline, then we’re basically guaranteed to find it even if funding dried up tomorrow, as even poorly funded labs or open-source would be able to do it. Then on the other extreme, if intelligence is “hard” or even “impossible” from our current baseline, then we could plow infinity dollars into the problem and get nowhere! Most surrounding topics, like US-China competition, feedback loops, lack of moat, tech mania, etc., fall under the same category of “maybe it could matter, but it probably won’t in the grand scheme of things.”

Some conversations try to directly address the viability of the tech in a big-picture way, such as whether AI progress will continue the exponential progress of the last 2.5 years, or if we’re about to plateau on the sigmoid curve. Unfortunately, excessively broad conversations like this inevitably trend towards being handwavey and unproductive. Perhaps both interpretations are true, that exponential progress can be sustained for some amount of time but that we’ll eventually hit severely diminishing returns. If this is the case then it just comes back to the original conversation of how viable the tech is from our current baseline, i.e. whether we’ll achieve sufficient AI capabilities to make it Very Good before we’re on the wrong side of the sigmoid curve and further progress becomes cost-prohibitive. I’ve even seen people invoke aliens here, claiming that runaway superintelligences are unrealistic since if they could really exist then surely they would have devoured a large part of the galaxy or even the entire universe by now. These conversations rapidly devolve into what I call “nerd metaphysics”, where everything comes down to definitions and assumed base rates, and people largely talk past each other. Common responses include that perhaps the alien superintelligences are so smart that they’re invisible to us, or that they’ve already conquered us and we’re all just brains-in-jars, or that they’ve ascended to a higher plane of (non-physical) existence, or that the galaxy is so big that they’re still ramping up. Just endless unfalsifiable rabbitholes.

Why not just talk about what specific AI advances will happen then? Well, that’s almost impossible for laymen to understand. Topics include things like 1) whether LLMs are reasoning like humans or are just fancy autocomplete machines, 2) whether other AI could spring up quickly with all the resources that are being dumped into the sector, and 3) if perhaps LLMs don’t need to reason like humans to have superhuman intelligence in ways similar to how calculators don’t need to be broadly smarter than humans to do basic arithmetic 1000x better than humans can. The issue with all of these is that to really get anywhere in the discussion you’d need a fairly deep technical background in the specific field of AI (CS alone is insufficient). This excludes almost everyone not directly working in AI, and dramatically raises the risk of getting Eulered by clout-chasers who are good at sounding smart but don’t actually know much of what they’re talking about. The usual response to overly technical questions is to rely on experts, but this too fails in the case of AI. The CEOs of major AI companies are hopelessly biased towards optimism to entice funding and appease shareholders. Employees of the major AI companies are a bit better, but are still somewhat biased (who wouldn’t want the clout that comes from working on the Next Big Thing?), and are also probably too siloed within their own niche to be able to accurately forecast the outcome of the field as a whole. A lot of advances are almost certain to be covered by NDAs so competitors don’t get early knowledge, and thus we probably won’t know about any critical breakthroughs until they’re right on top of us. Maybe we should rely on AI professors working at prestigious universities, but a lot of them aren’t working at the cutting edge and so they can only vaguely motion at the future of the field. If there’s some deep bench of experts that knows about this stuff, I haven’t been able to find them. Maybe somebody can point them out to me. Maybe they’re hiding with the underwear gnomes.

It turns out that predicting the future of technology is hard, actually. Who knew! The guys writing classic sci-fi involving jetpacks, hovercars, laserguns, etc. implicitly relied on the idea that an energy revolution was just around the corner, which probably seemed perfectly reasonable at the dawn of the atomic age. Of course, we’re all still waiting on nuclear power to achieve its promise of cheap, abundant energy. It turns out that human fear and the scientific difficulty in harnessing fusion power proved decisive. In contrast, battery tech in the 2010s was seen as sort of a holy grail, pie in the sky solution for energy if we could dramatically improve efficiency, but there weren't a lot of results for all the resources we threw at it. Then, suddenly over the past few years batteries have gotten very good.

If I had to be honest, my pessimistic vibes towards AI scaling are a combination of (1) seeing hype-mongers in other areas screw up by drawing straight lines on graphs, (2) feeling like we’re on a pretty similar path as other tech-based hype, (3) finding the heuristic of “nothing ever happens” to have a much better track record than the opposite of “hype always delivers”, and (4) defensive pessimism, as my modal outcome for AI progress is that it’ll be a good thing, so if AI scales then I’ll enjoy the wonders of the future, and if it fails I’ll at least be able to say “I told you so, losers!” My optimistic vibes towards AI scaling are mostly centered around how the AI that’s arisen in the past few years has already been much better than any of the other tech-based hype cycles, so maybe it has legs. Hence, I give about a 20% chance that AI scales and an 80% chance that AI plateaus.

None of my vibes are particularly compelling! I wouldn’t blame anyone for disagreeing with me on these points. Yet I’ve been reading and experimenting with AI for hundreds if not thousands of hours over the past few years, and I haven’t found anything particularly rigorous to replace them with.

Question 2: Does omniscience imply omnipotence?

Here’s question 2 restated:

Does sufficient intelligence give an agent the capability to solve for all other bottlenecks, even ones that aren’t intelligence-related?

Just because an AI has superintelligence doesn’t necessarily imply it becomes omni-capable. Lots of technologies have been able to demonstrate that they’re perfectly plausible, but then end up languishing for other reasons. Civilian fission power is once again a good example here, as it could have revolutionized energy production, but it was subverted by endless regulations. Likewise, many economic problems come down to how expensive it is to make something, but that’s not the full explanation for everything. If we could make (and transport and store) food for 1/100th of the cost we currently do, then we could essentially make food free for everyone. But the same would not hold true for housing. If we made housing materials and construction costs 1/100th of what they are now, that wouldn’t necessarily instantly solve the housing crisis since that’s more of an organizational problem caused by bad zoning laws.

Voters are fools that are bad at society-wide cost-benefit analyses. They’re easily scared and tend to fall back on safetyism when concerns arise. Entrenched groups like longshoremen that could see their fiefdoms eroded from automation and technological advancement have soft-vetoes over various parts of society. While I’d say the public perception of AI has so far been broadly positive, that’s mostly because economic impacts have been quite limited. There have been a few news stories of people being laid off and “replaced with AI”, but AI’s impact on both GDP and productivity remains minor. If there are suddenly real winners and losers, that positive attitude could change quickly. For a preview of what could come, one need only look at the world of artists, where the reaction has been so severe that in some corners it would be considered weird not to condone physical violence against people who utilize AI.

Tyler Cowen’s talk here goes more into the idea that humans will inevitably stymie AI progress: https://youtube.com/watch?v=GT_sXIUJPUo

Beyond human limitations, AI could be stymied by a number of other factors. Most predictions claim that AI will get human-level intelligence before robots get human-level bodies. In that case, how would AI change everything if it can’t interact with the physical world? How would it even be able to run physics experiments to continue the scientific takeoff? One explanation I’ve heard is that it will pay/bribe humans to run the experiments for it, and observe through AR goggles. Another explanation is that it will be trivial to invent robot bodies once we have superintelligence, so the problem solves itself. Another explanation is that the physical world doesn’t matter since the AI could just run experiments in its perfect physical simulation that it hosts on its hardware.

A lot of this comes down to not really having a satisfying answer to question 1. Our lack of rigor there spills over here and as a result everybody talks past each other. To economists like Tyler Cowen and Bryan Caplan, AI will be a normal technological advancement like any other, and thus will be subject to the same forces that stymie the rollout of any other tech. To the AI Doomers and Zoomers on the other hand, AI will categorically be unlike anything the world has ever seen. It’ll be like a genie that can wish for more wishes, and so the discussion ought to focus on things like crazy Terminator meets Gray Goo meets Westworld meets Paperclip Maximizer scenarios, or alternatively if things go well then scenarios like Fully Automated Luxury Gay Space Communism are more pertinent. Some people are practically already counting the money they’ll get from hypothetical UBI checks, and are worried about little other than a cyberpunk future where plutocrats would prevent said checks from being cashed.

If we knew how good AI will be, the conversation would be a lot clearer. If AI plateaus at 2x human intelligence, then I doubt most people would claim it could trivially solve everything. But if it was, say, 2000x smarter than human intelligence, then maybe things would get weird. We probably seem magical to animals, with things like guns, planes, tanks, etc. If that’s the difference between animal intelligence → human intelligence, shouldn’t we expect a similar leap from human intelligence → superhuman intelligence? Maybe things will get really crazy and AI will start emitting brain waves that can trivially mind control us? On the other hand, human intelligence was hardly an instant autowin by itself. Homo sapiens have been around for 200k years, but during the first 194k of those we were little better than clever chimps. Maybe AI will have a ramp-up time that was only linearly shorter than ours, e.g. where even an AI that was 2000x smarter than us might only take our 200,000-year ramp time down to 100 years to really get going.

Even if we could all agree on a baseline for what future AI capabilities are in the abstract, we’d still be talking about complete science fiction. There are some instances where science fiction has accurately predicted how things would look in the future, but in many other cases it just misses the mark completely. AI wouldn’t need to solve every bottleneck for it to completely reorder human society, but each issue that raw intelligence couldn’t surmount would inevitably reduce its impact. Some people seem to imply that superintelligence will stroll through the problem of human skepticism by simply making arguments so utterly convincing that everyone will instantly agree. But if our political divides are anything to go by, maybe humans are just too dang stubborn for that to be plausible. Maybe no persuasive argument exists in any possible universe that would get people to act against what they perceive (perhaps incorrectly!) as their own self-interest.

Say a devops AI that auto-pushes code assumes humans will follow best-practices, but they don’t, and this results in a bug in a critical piece of infrastructure that causes a power outage for 12 hours. Or say a nurse incorrectly records some medical information, and DocGPT ends up prescribing a double-dose of sedative, making dear old Grannie fall into a coma. Or perhaps TotBot3000 is playing tag in a totally safe way, but little Timmy gets a bit too excited and tumbles off a six story balcony. These scenarios (buggy code, medical error, physical accidents) are things that happen all the time, but we’re almost guaranteed to have a much higher expectation for AI that verges on unreasonableness. Just look at how glacial the deployment of self-driving cars has been, despite them already being statistically much safer than human drivers. When you take innate human skepticism over anything new, and add a clear example where it causes harm (that might not even be the AI’s direct fault), it’s very likely that you end up with onerous regulation. Legislators could do their usual rigamarole of grandstanding and saying “Never again!”, writing laws that hold companies criminally liable for anything that goes wrong, and then the people deploying AI will massively overemphasize safety in ways that totally lobotomize and straightjacket AI’s full capabilities for good or ill. This is a very common human failure-pattern that people predicting AI are under-indexing on. The retort to this line of thinking comes down to flimsy priors around how crazy the sci-fi capabilities of superintelligence will end up being. “Obviously the story about little Timmy is ridiculous since all procreation will at this point be done in AI-invented artificial wombs that will be run by the government away from the public eye, so there will never be news coverage of accidents involving children at all. And that’s assuming the AI won’t be bribing every journalist to only say positive things until it can deploy its flock of mind-control pigeons.” Okie dokie. Trying to have a rigorous conversation when the underlying parameters can shift this much is impossible, so I just shrug and give a 50-50 chance that humans will ruin AI in some critical way by doing normal human things. Then I add a little more pessimism for the possibility that there’s other (non-human) bottlenecks that superintelligence won’t be able to solve, and arrive at the 40-60 split that I gave earlier in the article.

Again, I admit that my conclusion isn’t particularly compelling, and that none of my priors here are strongly held. I wouldn’t blame anyone for disagreeing with me on a number of claims I’ve written here. Reasonable people already do, but I’d say their logic is about as flimsy as mine, just in the other direction.

Why make this post?

A lot of this article has been me listing the different sides of the AI debate, and then shrugging and going “uhhhhh I dunno haha”. Let me try to balance that at least a little bit with some predictions and practical advice.

  • A lot of the conversations that seem important right now will end up being irrelevant 10 years from now in hindsight. People will look back and say “wait, people seriously invoked aliens to try to explain what would happen with AI?” Part of this comes down to the crowd that’s most interested in AI, and part of it is that the answer will seem obvious when looking backwards when it was really quite ambiguous when we were all in the fog of war.
  • If you’re thinking of reading deep into AI trying to suss out whether it will be as big as some people claim, you’re probably just going to waste your time. At the very least you should read other stuff than what I have, which has mostly consisted of rationalist forums, economists, Substack, /r/singularity, podcasts, AI CEOs, and occasional posts from people working in the field of AI.
  • None of this is to say you shouldn’t experiment with how AI as it currently exists could improve your life today. Just don’t expect to have a clear idea of how the field will advance. Maybe we’re on the brink of a revolution, or maybe this is all we’ll get for the next decade. In either case, what we have right now is pretty cool and at the very least will be as big as smartphones or social media, so it’s worth your time to check it out.
  • On the central question of whether AI tech will actually continue advancing, at the moment I’m resigned to a “wait and see” approach. To evaluate progress, I’m using a rickety 3-legged stool of 1) benchmarks, 2) looking out for people saying “hey AI can do [cool thing] now!”, and 3) trying it myself. To keep abreast of news and advancements, my go-to sources have been /r/singularity, Zvi’s Substack AI roundups, and the AI Explained Youtube channel.
  • Anyone making confident predictions one way or the other lacks epistemological humility. You should at least somewhat downgrade your evaluation of them relative to the level of confidence they project. Be on guard for sneaky arguments that are presented confidently, but which could be dismissed as pure hypotheticals if/when they don’t come to pass. Doomer vignettes with specific dates are particularly guilty of this.
  • Some people will inevitably be correct in their predictions of AI by virtue of broken clocks being right twice a day. There’s so many people making so many predictions that surely somebody will get lucky and end up being mostly correct. However, I wouldn’t greatly update your opinion of them, as they’ll probably end up like Michael Burry of The Big Short fame, where they were able to accurately predict one big event (the GFC), but the luck goes to their head and they then make overconfident predictions that subsequently fail to pan out.

Okay, I can’t speak to college students at liberal universities, not having gone to college, but that wasn’t the original assertion.

I can tell you anecdotally, n=1, that while “Normal Christians” in flyover country won’t know the jargon, they are definitely aware that people are out there, both on the coasts and in flyover country, trying a new spin on justifying sexual sin.

Frankly, I don’t even think college students at liberal universities in most of the country would know what the hell you were talking about if you started going on about your polycule.

Yeah, she works M-F 8-5.

Agape and philia do not indicate different forms of love in this context. I know CS Lewis says this, but it ain’t so. It’s not something that Origen talks about when he distinguishes between agape and eros, and he definitely would have mentioned it. It isn’t mentioned in the earlier church fathers. Rather, in the context Zizek mentions, the words are used interchangeably. Imagine your girlfriend wants assurance that she is beautiful. “Am I beautiful? … I mean, you think I’m very pretty right? … Tell me I’m gorgeous again.” These are interchangeable within the context, even though there may be slight variations in the usage in colloquial speech.

Just going to quote from some papers on this. In speaking of love, Origen doesn’t even bring up philia, but compares Agape and Eros and concludes that even these two loves are interchangeable in scripture:

Mindful of the indwelling divine love being taken in a carnal sense, clearly with the [ignorant people] in view, Origen returns to the central question in the second half of his discussion on agape in section two of the Prologue. How does the recognition of agape as a divine name clarify that eros and agape are interchangeable in meaning? According to Origen, even in the case of God—where it is obvious that we should understand love in the spiritual sense—agape and eros are interchangeable. This is because divine agape resembles the dynamics of a spiritual sense of eros. This equivalence of agape and (a spiritual sense of) eros in the case of divine love becomes evident when we consider the nature of the love we receive from God in Trinitarian terms. In unfolding this argument, Origen presents, to my knowledge for the first time in Christian thought, a vision of human deification expressed explicitly in terms of a Trinitarian grammar of love.

Origen's answer rests upon the key conceptual distinction between a carnal sense and a spiritual sense of love. Whereas carnal love (amor carnalis) is directed towards corporeal and changeable realities, spiritual love (amor spiritualis) is directed towards incorporeal and unchangeable realities. Origen's whole argument builds upon the claim that we use the term eros improperly to indicate carnal love and properly only when indicating spiritual love. This point escapes the simpliciores, the spiritually immature, who are subsequently endangered by the scriptural language of love as it can be read as an encouragement to pursue carnal pleasure rather than a life of virtue. This, however, raises the question: what then is spiritual love? According to Origen, scriptural terminologies by themselves will not help because there is no direct correspondence between the conceptual distinction between carnal and spiritual love, on the one hand, and the terminological distinction between agape and eros, on the other. As he painstakingly highlights, Scripture can use agape (noun)/agapan (verb) to substitute for eros (noun)/eran (verb) in contexts where there is a danger for the weak amongst the readers to fall into carnal sin. But Scripture is equally capable of using eros/eran terminology to speak of a more elevated sense of love, one that is directed towards higher things. The conclusion we should draw from this is that the conceptual distinction between carnal and spiritual love is grounded on content and not on terminological difference. It is not that carnal love is eros and spiritual love agape; rather, both can be used to refer to spiritual love in Scripture. Thus, to discern the nature of spiritual love we need to go beyond terminologies to reach the content of love.

And from elsewhere:

Is there a significant difference in meaning between the two words for love used in the passage, [agapaō and phileō]?[…] Most of the Greek Fathers, like Chrysostom and Cyril of Alexandria, saw no real difference of meaning. Neither did Augustine nor the translators of the Itala (Old Latin). This was also the view of the Reformation Greek scholars Erasmus and Grotius. The suggestion that a distinction in meaning should be seen comes primarily from a number of British scholars of the 19th century, especially Trench, Westcott, and Plummer. It has been picked up by others such as Spicq, Lenski, and Hendriksen. But most modern scholars decline to see a real difference in the meaning of the two words in this context, among them Bernard, Moffatt, Bonsirven, Bultmann, Barrett, Brown, Morris, Haenchen, and Beasley-Murray.

As for wealth equality: Christ clearly abhors the “very rich”. Being “very rich” and ungiving damns a person, from my reading. God cares more about this than blasphemy. But we also have very clear and specific anti-equality statements. Someone tells Christ that his brother isn’t sharing the inheritance, and that he should make him share; Christ says that life is not about possessions and that he isn’t the Lord of that. Christ is the Lord of the Moral, not the lord of the specific cultural and legal rules that appear prudent to specific leaders to secure political wellbeing. He is the Lord of “help the poor”, not “no one should ever be poorer”. Or consider:

He entered Jericho and was passing through. And behold, there was a man named Zacchaeus. He was a chief tax collector and was rich. And he was seeking to see who Jesus was, but on account of the crowd he could not, because he was small in stature. So he ran on ahead and climbed up into a sycamore tree to see him, for he was about to pass that way. And when Jesus came to the place, he looked up and said to him, “Zacchaeus, hurry and come down, for I must stay at your house today.” So he hurried and came down and received him joyfully. And when they saw it, they all grumbled, “He has gone in to be the guest of a man who is a sinner.” And Zacchaeus stood and said to the Lord, “Behold, Lord, the half of my goods I give to the poor. And if I have defrauded anyone of anything, I restore it fourfold.” And Jesus said to him, “Today salvation has come to this house, since he also is a son of Abraham. For the Son of Man came to seek and to save the lost.”

Zacchaeus was rich; he definitely had more than twice the average wage; yet he is only required to give half of his goods to the poor and to give reparation to anyone defrauded. Then he has full approval of God and is saved.

More importantly: the very context of the love statements makes a universal love impossible. Christ is telling his disciple to direct all of his love to the sheep. “Do you love me? Tend my sheep!” The sheep are the brothers, or in this case the younger novice Christian brothers, not random strangers. The strangers are those who do not matter at all. For instance, “If [a brother sins against you and] refuses to listen even to the church [telling him to repent in front of you], let him be to you as a Gentile and a tax collector.” You see Christ’s treatment of strangers with the Canaanite woman. It shouldn’t surprise us that these rules make sense in light of utility and game theory and psychology, if you believe in both God and science. Casting your love, a precious pearl, to random strangers, is the quickest way to waste your life and your love and to make the world worse. Consider —

If the house is worthy, let your peace come upon it, but if it is not worthy, let your peace return to you. And if anyone will not receive you or listen to your words, shake off the dust from your feet when you leave that house or town. Truly, I say to you, it will be more bearable on the day of judgment for the land of Sodom and Gomorrah than for that town.

This is when he tells his followers that they are being sent out “as lambs among wolves”. Now, if the Lord is the shepherd who lays down his life protecting his sheep from the wolves, then who are the wolves? The wolves aren’t sheep; the wolves are in the world; loving the world would be loving “wolves in sheep clothing”, and we have fairytales about that involving grandmas and the hood.

This is Christian love: judge whether someone is worthy by whether they receive you kindly or hear your wisdom; publicly shake dust off your feet as a statement against them if not; and then remember what your Lord says: their fate is worse than Sodom and Gomorrah. I do not know what happened in Christianity that the clear words and obvious meaning of the gospels are ignored. Does this sound like a hippy or something? Does this sound like spiritual Williams Syndrome? Does God want you to pollute your heart by throwing it at the feet of every evil person? Christianity is not a “text-first” religion but tradition first, true, but the tradition itself attests to the primacy and accuracy of the words. There are some ridiculous zero-day bugs that have infiltrated Christianity and made it “fake and gay”. But if you’re Christian you really do have to believe these words. God is love and He defines love in the teachings of His Son, so forget what you know about love and study the Son who knows more.

Further: as Origen and tradition attest, Christ is the bridegroom of our soul. In antiquity, if the bride is found to be spending her love on random men, she would be beaten, if not by her father then by her bridegroom; she may even be divorced on the grounds of adultery. When Origen wrote on Eros and Agape, it was when studying the Song of Songs, which is a sublimated erotic love poem about our soul longing for God. What does the Bride warn in the song? “O daughters of Jerusalem, I adjure you by the gazelles and does of the field: Do not arouse or awaken love until the time is right.” Otherwise: “The watchmen found me as they went about in the city; they beat me, they bruised me, they took away my veil, those watchmen of the walls.” To be more clear: if the Christian wastes the love reserved for “Christ and whom Christ wills” (your Christian community, i.e. sheep), wouldn’t He discipline you? Just like He whipped those who abused and profaned the temple. Because now, your body is His temple; it belongs to Him; and in your body is your heart where the heavenly treasure resides. Okay, this was an allegorical aside, but whatever.

Now I agree that for a Christian, the “love for the cause” must be triumphant over everything. This is seen in Christ: he calls Peter satan when Peter warns Him against going to Jerusalem; he speaks up against elders; he disregards His relatives, and His own family becomes “those who hear the word and obey it”. But Zizek is wrong that the cause is universal love. It’s just not. “Universal love” is taking an idyllic stream and polluting it with Chernobyllic radioactive waste. We don’t love universally, but in accordance with the Love of the Universal Man.

As additional evidence for this, consider the Eucharist. You have to enjoy the Eucharist to have a part in Christ, to be a brother, to be saved perhaps. Only confirmed Christians in good standing could participate, and they had 2-3 years of training and catechesis before being confirmed, involving fasting and repentance and reading. We know this from Justin Martyr, whose writings are among the earliest Christian writings we have. This ritual is the only time a Christian sees the living Christ: the intimate shared brotherly meal becomes the real body and blood of Christ; it’s the real living Christ there, and being consumed. This tells you a lot. It’s not radically inclusive love, it’s radically exclusionary and private. At a time when anyone could participate in a Pagan feast, and when the Jews believed in national salvation, this was profoundly exclusionary and private. This was the dominant mode of Christian activity until the 300s which, in my opinion, should never have been altered.

Zizek says

To attain true love, we have to reach beyond humanism: even loving all of humanity directly is not enough—Christ has to be here

This is not quite it. Christ did not love “humanity”: there are many who will see Christ and Christ will tell them He never knew them. Not “I have forgotten you”, not “you never knew me”. No; “I never knew you”. These are the “vessels of wrath tailored for destruction”. For a Christian, true love is this: a man laying down his life for his friends. Not only is this literally what Jesus says, but He literally does it on the Cross. How this happens is actually never said by Christ; it is compared to Moses lifting up a serpent staff, that those who are bit by those sin-symbolic serpents may not die but live. That it magically absolves your sins upon belief is a satanic thought. But there are at least some things that are sure: Christ loved God so much that He spent His life learning from His youth. He spent His adulthood healing and teaching others despite guaranteeing His death. He is wrongfully charged with disobedience for misrepresenting scripture, and obediently assents to the sentence. He continues professing truth and love. As He suffered, He sang to Himself some of His favorite songs. He wants His tormentors forgiven by God before He dies. In very mysterious appearances, He returns again. He appears to Thomas in the upper room, like the upper room of the Eucharist, where Thomas touches His side, the same side from whence blood and water flowed. Did Thomas touch the bread turned body? Did Christ’s side flow out in wine turned blood, mixed with water as all wine was in antiquity? I don’t know. It’s a mystery. I agree with Zizek that the material is immaterial.

This is the Christian stance at its purest: not the promise of salvation, but just such unconditional love, whose message is: “I know you are bent on destroying yourself, I know I cannot prevent it, but without understanding why, I love you unconditionally, without any constraint.”

Christ’s love is, essentially, conditional. It really is. There are some people He never even knew, let alone loved. Christ issues warnings, firm warnings, shocking warnings. He is filled with warnings. Before He sends sinners to an eternal fire, He curses them. If you do not believe this, you are not a Christian, and you’re something worse than an atheist, because you have seen His words and dispute that He said them or meant them. Why does Christ give us these warnings if not to warn us? A better Christian movie is The Whale. It’s deeply, deeply Christian. The protagonist is saved by warnings to his soul and health, and primarily by love for his daughter. (“not giving thanks, nor seeking forgiveness for the sins of my soul, nor for all the souls numb, joyless and desolate on earth. But for her alone, whom I wholly give you.”)

Do you know who else was saved like this? Jonah! You know, with the whale. Is Jonah the sign of unconditional love? Did the Ninevites enjoy God’s unconditional love when they fasted (cattle and man alike) in sackcloth and ashes, with only the hopeful possibility that God would have mercy on them? And who “comes in the sign of Jonah”? Who is it that says the sign of Jonah is the only sign He will provide “a wicked and adulterous generation”? It is the One who, “in the days of his flesh, offered up prayers and supplications with loud cries and tears, to God who was able to save him from death, and who was heard because of his fearful reverence — He was a son and learned obedience through what he suffered.”

A decent example: if every drug user could be saved by unconditional love, very few white people with loving mothers would be drug addicts. Do you know what would save them? If every time they did the drug, I beat the shit out of them to the point of death and told them I hated them. Sadly this is illegal. But it’s what God does to those whom He loves the most, like Job and Jonah. I have no doubt that if Christ saw the disciple whom He loved drinking too much poppy tea, He would beat that wicked servant, or at least kidnap him into the desert for an extended 40-day retreat. And this would be love. True love is the true words “given by one Shepherd”, which are “like goads and like nails firmly fixed”. Thank God the yoke is easy.

I do not believe that God wants us to love God “in Himself”, for no contingent reason. I do not believe there is such a thing as loving a thing apart from what the thing means to us. Love is biological, and God designed biology. We love our fathers if they are fatherly, and we have no obligation to love them if they are not. Yet we have no father on earth! We have a Father in heaven who is perfectly fatherly, who “disciplines us for our good that we may share in sanctity”. And “we love because God first loved us”. Similarly, Jesus tells us to love our enemies not because they are human, but because we will be rewarded by God. Because He wants our love perfect, like our Father’s love is perfect. Loving enemies is our Spartan practice for perfection, and has nothing to do with any obligation emanating from our enemy.

Christ must not be "made into a direct object of love who can compete with other objects", for otherwise "things can go terribly wrong".

I’d say this is complicated. If we love Christ, even just as a “character”, and celebrate Him in social environments, and are evaluated by our peers against His law, then we will behave like Him. Which is probably the best way we can love like Christ. We can only understand more than this mysteriously, through statements like —

the King will answer, ‘Truly, I say to you, as you did it to one of the least of these my brothers, you did it to me.’

Notice, again, the focus on brothers. Indeed, the first name of the religion was the Brotherhood.

You don’t get a lot of spam calls at night; those people work 8-5.

You know, that really strikes at the duality of it for me. I don't want to be fucking around with Arduino components like electronic Legos, but then the ecosystem of all electronic components is so vast and wide and deep that you really are just adrift.

My dream for a time was to build my own 8-bit computer with some cheap 6502, VGA output and synthesizer audio as a fun learning exercise. But it turns out nobody makes VGA chips anymore? Or audio? Basically everything is an SoC, and every project you see along these lines has a legit 8-bit Z80 or 6502 and then uses an SoC as a co-processor for its VGA and sound, or has you using 20-30 year old salvaged chips.
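For what it's worth, the split those projects use is conceptually simple, roughly like the toy sketch below: the 6502 just pokes small commands into a shared mailbox, and the SoC co-processor does all the heavy lifting for video and sound. The mailbox layout and command codes here are invented for illustration, not lifted from any real project.

```c
#include <stdint.h>
#include <stdio.h>

/*
 * Toy sketch of the "8-bit CPU + modern SoC co-processor" pattern.
 * In a real build the mailbox would be a memory-mapped or serial
 * interface; here both sides are simulated in one program.
 */

enum { CMD_PUTCHAR = 1, CMD_BEEP = 2 };

typedef struct {
    uint8_t cmd;
    uint8_t arg;
} mailbox_cmd;

#define MAILBOX_SIZE 16

static mailbox_cmd mailbox[MAILBOX_SIZE];
static int head, tail;

/* What the 6502 side would do: poke a tiny command into the mailbox. */
static void cpu_send(uint8_t cmd, uint8_t arg) {
    mailbox[head] = (mailbox_cmd){cmd, arg};
    head = (head + 1) % MAILBOX_SIZE;
}

/* What the SoC side would do: drain commands and render them. */
static void soc_service(void) {
    while (tail != head) {
        mailbox_cmd c = mailbox[tail];
        tail = (tail + 1) % MAILBOX_SIZE;
        switch (c.cmd) {
        case CMD_PUTCHAR:
            /* A real co-processor would draw the glyph into its VGA framebuffer. */
            printf("draw '%c' on screen\n", c.arg);
            break;
        case CMD_BEEP:
            /* ...or key a tone on its audio output. */
            printf("beep at pitch index %d\n", c.arg);
            break;
        }
    }
}

int main(void) {
    cpu_send(CMD_PUTCHAR, 'A');
    cpu_send(CMD_BEEP, 3);
    soc_service();
    return 0;
}
```

The upside of that split is that the 8-bit side stays period-authentic while the fiddly timing (VGA sync, audio sampling) lives on hardware you can actually still buy.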

It's really the most hellish part. I'm trying to just take it easy and not worry about how soon I get this done, so I don't stress over it.

Starter kits help a bit, but they're only like $10, whereas I want something in the $100-200 range that contains the top 1,000 parts people actually use for, e.g., Arduino projects or LEDs.

I fantasize about taking a two-week vacation to Shenzhen, hiring some Chinese EEs who speak English to show me around, and going home with a suitcase full of stuff I bought off the street.

You keep making these assertions, and I am willing to tentatively grant that Aella specifically maybe isn’t on the radar of “Normal Christians,” but hearing about polyamory is unavoidable, even out here in deep flyover country.

Do “Normal Christians” have more than a surface-level awareness of the concept, or any desire to grant the debate more of their time than “That’s just fornication with extra steps”? Probably not, but I would anecdotally say that they do know it is a thing.

The work the word “practical” is doing in that sentence is: anybody with even a passing understanding of Christianity.

You’re right: to a person who has no understanding of Christianity, Mormons are Christians, because that’s what they vaguely look like.

The same is true for people who think Buddhists are Hindus. Or that Jains are Buddhists. Or that the many tiny Middle Eastern religions are all Muslim.

But they aren’t. While there are definitely various flavors of Buddhism, and various flavors of Hinduism, these are not the same thing. In fact, even the Mormons’ own propaganda about “the Latter Day Saint movement”, where they talk about the various flavors of Mormonism, aligns with the “Hindus aren’t Buddhists, Mormons aren’t Christians” point I’m making here.

They’re trying to have it both ways: both that it is a separate religion revealed to Joseph Smith when an angel showed him some magical golden tablets in 1830, and also that they’re Christians.

Muslims also recognize Christ as a prophet, affirm the virgin birth, and acknowledge some of the miracles, but they aren’t Christians, although some Muslim evangelists may try to claim some alliance with Christianity when recruiting people, in the same way that Mormons do.