Okay, I will go first. WP:
According to the Canadian Broadcasting Corporation, he was the only Iranian to sit on the Shura, or guiding council, of Hezbollah. According to The Guardian, he was most likely a critical figure in coordinating Iran's relationship with Hezbollah in Lebanon and the Assad government of Syria.
Are you arguing that the main target was actually just a foot soldier, or a civilian, or that Hezbollah had not been used to prop up the Assad regime?
Given that eight times more people were killed in the Israeli airstrike than in the shooting, you don't even have to find a source claiming that one of the people shot was the main military liaison between the US and Israel. If the victims were in charge of procuring small arms from the US and the shooter had picked them for that reason, I would concede that this was purposeful violence.
(All of this is discounting that there is an obvious difference between the military leadership in autocratic countries and stable democracies. In autocratic countries, a powerful general is a coup risk, so you want someone with a close personal relationship to the leader, think Crusader Kings. In a stable democracy at peace, there is a functionally unlimited supply of loyal and competent military leaders. If Iran managed to blow up the top ten military leaders of the US, this would not hamper the effectiveness of the US military very much.)
I will go out on a limb and say that I do not consider the Israeli airstrike against an Iranian general in Damascus all that bad.
Attacking an embassy is both an act of war against the host country and the country running the embassy.
Killing a general, his staff, and civilian Iranian embassy employees alongside two Syrian civilians (a mother and her child) is not great, but it is pretty tame both in the context of the Syrian civil war and compared to what Israel considers acceptable civilian casualties when taking out Hamas leaders in Gaza.
There is also a point to be made that this probably was a causal factor in the collapse of the Assad regime which happened in the same year, ending (hopefully) a decades long civil war.
Now, if you show me that the two Israelis who were killed were instrumental in the Israeli military efforts, perhaps tasked with sourcing US weapons, and the attacker picked them for that reason, then I will grudgingly grant you that they would have been acceptable targets from Iran's point of view.
But based on what I heard, some dude just shot two random embassy employees because he was unhappy with Israel.
This. Our current understanding of quantum physics is ultimately called the standard model of particle physics. This theory basically got its finishing touches in the mid-1970s. Since then, we have found a few missing pieces of the puzzle predicted by the SM, such as the top quark or the Higgs boson.
Besides the open questions which were apparent ca. 1975 (Can we unify the strong and electroweak interactions? How the fuck should gravity fit into all of that?), we have found a few more puzzles (e.g. at least some neutrinos have mass, and dark matter seems to be implied by astronomical observations).
These open problems have been attracting theoretical physicists -- and I have no reason to believe that the current top theoreticians are simply less smart than Einstein or Dirac. So far, we have not made enough progress that one could confidently predict a timeline. If we do not get an AI-powered singularity, it certainly seems possible that by 2125, the progress we will have made is that our best candidate from string theory will be somewhat less ruled out.
Fleming's original discovery could have been made by anyone, but actually synthesizing penicillin in useful quantities required (in our timeline) modern industrial chemistry. I think it could have been done 50-100 years earlier if alt-Fleming takes his discovery to the brewing industry (the hard part is growing fungus cleanly on a carbohydrate feedstock) rather than pharma, but not before that.
The pathway to discovery which we took involved noticing that mold was stopping the growth of cultivated bacteria. To take that pathway, you need to be able to cultivate individual bacterial colonies. This is not trivial, because in nature everything is full of all kinds of spores, and you basically need a germ theory. Put simply, the discovery "mold kills bacteria in a petri dish" requires "one can grow bacteria in a petri dish". It took Pasteur (ca. 1859) to establish the latter fact; before him, people generally thought that bacteria formed spontaneously. Fleming discovered his ruined cultures in 1928, and then it took another 15 years or so to really get production up. I guess I half-agree with your assessment, in that I think there was probably 50 years' worth of slack, but I don't think it was 100 years, and certainly not 2000 years as the OP suggested.
I think that in this particular instance, it is simply that the rules as written are bad. If you want to incentivize people avoiding single-use bags, then the following should work out fine:
If you don't require any single use bags, you will get a flat discount for your purchase.
If you don't have enough reusable bags, then we will happily sell you another reusable bag if you want to take advantage of the discount.
As we also want people to shop in bulk, we will increase the reusable-bag discount based on the sticker price (using a strictly monotonic function between sticker price and discounted price, e.g. 1% off per ten dollars sticker price, up to 10%).
The cashier would simply have a line item "all-reusable discount", and the computer would calculate the discount. Sure, you could still have discussions about what counts as a "reusable bag". Is any bag which was not provided by the store for free ok? What if someone buys a roll of garbage bags and then proceeds to use them to transport the groceries? But you would at least no longer be vulnerable to the exploit you describe.
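A sketch of how such a line item could be computed (the `discounted_price` function is hypothetical; note that a stepped "1% per full ten dollars" rule would create threshold jumps where spending one cent more lowers the price, so the rate here grows continuously instead):

```python
def discounted_price(sticker: float, all_reusable: bool) -> float:
    """All-reusable discount: 1% off per ten dollars of sticker price,
    capped at 10%.

    The rate grows continuously with the sticker price rather than in
    $10 steps, so the discounted price is strictly increasing in the
    sticker price and spending more never results in paying less.
    """
    if not all_reusable:
        return sticker
    rate = min(sticker / 1000.0, 0.10)  # 1% per $10, capped at 10%
    return sticker * (1.0 - rate)

# A $50 purchase gets 5% off; a $200 purchase hits the 10% cap.
print(discounted_price(50.0, True))   # 47.5
print(discounted_price(200.0, True))  # 180.0
```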
Meanwhile in Germany, the thin plastic bags for loose fruits and vegetables are free, but every other bag will cost you. Nor will most supermarkets sell you shitty single-use plastic bags. Your options are to either spend half a Euro on a shitty paper bag which will probably fail if you fill it up, or pay a Euro (or two?) for a robust reusable plastic bag. I own perhaps eight of the latter and they can be reused for dozens of shopping trips easily.
Is it worth saving a child's life if the consequence is that it makes a murderer harder to convict? I don't think most people would have to think too hard about that one.
Yes. Worst case, the murderer walks free, but that baby is saved.
The expected number of murders a murderer will commit after being found innocent due to a procedural flaw is much lower than one plus the average number of murders he would commit if he were not caught at this chance.
Depends a lot on the aims for the date, imho. If the main goal is to have sex with her, then being the perfect and tough guy who is of course not whining about the cold and will in fact lend miss little princess his jacket if she should feel cold could be the winning strategy.
On the other hand, if you are looking for a long term relationship and don't want to keep the princess/servant dynamic in the long term, you might want to show a bit of imperfection and vulnerability just to see how she will react. Will she ignore your plight? Will she be willing to go into a clothing store with you and let you pay for a scarf? Will she gift you a scarf? Will she do something completely unexpected, like conning a receptionist? Will she dump you on the spot?
I think that agency is somewhat orthogonal to morality.
Low-agency people (like myself, TBH) who do what everyone around them is doing are unlikely to stand out much in either a good or a bad way. (They will still have a large overall impact which could be good or bad, though.)
Some high-agency people found EAs and try very hard to make the world a better place. Some try to rip people off to fund their underage sex islands. Some build successful companies producing dental drills. There is a much larger variety of ethical impact per person, but one can hardly say that they are all bad.
On one hand, it's worrisome if a chick is so blasé about lying—if she so casually lies to a hotel receptionist in a low stakes situation, what if she's similarly down to lie to me in a higher stakes situation?
For a lost item, the lying part weighs much more heavily than the stealing part.
Now don't get me wrong, I lie. Not for profit, and generally not to people close to me, but certainly to authorities to make interactions go smoothly. If I get into a traffic stop and I am asked if I take any medication, then I could be truthful and give them a list of drugs, and hope that they will eventually figure out that these drugs do not impair the ability to drive a car. Instead, I will simply lie to their face that I do not take any medication. But I generally do not seek out situations where I will lie.
Happily lying to a receptionist for shits and giggles and because you want a scarf is a whole different ballpark. Dark triad territory.
I think that a lot depends on the goals of the guy on the date. If he is looking to get laid, or for an accomplice in a bank robbery, her showing risk-taking behavior and a disregard for conventional morality certainly increases her value.
If he is looking for a long term relationship as a well-regarded member of society, that would indeed be a red flag. On the second date, people (I think) still bother to try to hide their flaws. If it does not occur to her that "I have a very loose morality around the subjects of property and (more importantly) truth" might be worth hiding, one might start to wonder what kind of flaws she might actually be hiding on top of that. Perhaps she is in a marriage she has not told you about, or works as a con artist or pickpocket.
Children benefit from stay-at-home moms; I did, anyway.
I believe you, but I would still argue that there are opportunity costs. A one-year-old requires a caretaker 24x7, and presumably might benefit from that caretaker being their mother. A ten-year-old requires much less adult supervision. Someone to cook dinner and make sure that they either show up or have called by then is certainly helpful, but 24x7 supervision would be actively harmful.
Now, if your model stay-at-home mom starts having kids age 18 and then has a child every other year for as long as nature will allow, I will grant you that she will have her hands full taking care of her kids for a significant fraction of her work life. But in most Western marriages, it is not like that. Instead, she will have two or three children, which will keep her occupied for a decade, but once her smallest child goes to school, she will have a lot of time on her hands for the better part of her work life.
I am not arguing that working 40h a week is the only valid model of how to spend your life, and if someone is happy playing video games, joining some club, having an OnlyFans career, or dedicating their life to gardening, who am I to tell them that they are wrong? Still, having opted not to earn a degree seems somewhat likely to limit your options for self-actualization, and earning a degree remotely at age 40 is likely going to be harder.
And if your values differ from those of the broader culture, daycare is likely to drag your kids at least part way to that culture.
I think that this is unavoidable in general. I would advise raising kids in a culture you are at least halfway comfortable with. Even with homeschooling and everything, you cannot completely shield your child from the local culture. Sure, there are some who try, like some Muslim families trying to raise their daughters according to Sharia law in the middle of Western cities, but I think that their success is mixed at best.
Personally, I would not fret overly much about it. I was raised (mildly) Roman Catholic, and it did not stop me from seeing the light of Igtheism at 15 or so. While I am sure that there are some horror stories about some overachieving kindergarten teacher telling white kids to hate themselves, I think the median version of the SJ creed taught to kids is much less harmful. Like Santa Claus, blank-slatism is the sort of lie which is unlikely to harm the development of a kid much. They can still learn about the Ashkenazi intelligence hypothesis and HBD later.
There are (roughly) two kinds of religiously motivated murders.
One is the sacrifice, where you want to send your god a juicy piece of meat or some virgin pussy or kid as a bribe or tribute. Generally, the sacrifice is a means to an end; the process is really a transaction between the one sponsoring the sacrifice and the god. Sure, you might get extra virtue points for sacrificing your favorite daughter, but if she happens to have her period on the set date, you can just sacrifice another daughter. Generally, you want your sacrifices to be pure and hale. Sacrificing a lame goat or a disobedient child might be seen as an insult, after all.
The other type of murder is a punishment for a religious transgression, real or imagined, such as witchcraft, blasphemy, heresy. This is primarily a matter between the accused and the community, just like a secular crime.
This is well illustrated by the concept of the scapegoat. You start out with two goats. One stays pure and is sacrificed to god, the other gets the sins transferred to it and is then abandoned in the desert, for god to punish it as he wants. Full of sins, it would not make a good sacrifice for god, after all.
While punishments are widespread, pure sacrifices of humans are very much optional for religions. In the religions of the book it only appears (to my knowledge) in YHWH's fucked up little mind games he plays with Abraham, with the sacrifice being stopped. The Romans -- themselves not shy about infanticide -- likewise stamped it out where they could.
Of course, there are also mixed forms. For example, the Christian tradition of burning someone at the stake for religious transgressions is very much reminiscent of burnt sacrifices by earlier religions. I think that sometimes, it is explicitly stated that the purpose of this form of death penalty is to purify the victim so that they can get into heaven despite their crime. This is more seen as a 'favor' to the victim than as a favor to god, but parsing it as "souls for the soul lord!" does not seem entirely wrong.
The idea that it was Pilate's job to follow "due process" and that he was "derelict in his duty" is delightfully ahistorical. The laws which Pilate followed were the laws of Rome. Roman law was not very concerned with the rights of non-citizens; their brothels and salt mines were full of slaves. And Jesus was very much not a Roman citizen. As a military governor, the job of Pontius Pilate, as far as the Senate was concerned, was to keep the peace and facilitate the extraction of wealth. How he did this was totally up to him. If one day he woke up and decided to drown a tenth of the infants in Jerusalem in boiling pig fat, Rome would only object insofar as it led to instability.
The fact that he even personally bothered to preside over the case is more a concession to the political touchiness of the subject than any due process. Quite frankly, the local elites were really pissed at Jesus because he had interfered with their religion by causing a ruckus with the money-changers (which ultimately threatened their business model). And Pilate decided that it would be in Rome's best interests to placate them by putting Jesus to death. Given that the followers of Jesus did not rise up in rebellion, it is hard to argue that he was wrong with his decision. (A Gibbonite would blame the fall of Rome on Christianity, but Pilate could not possibly have foreseen that.)
Quite frankly, by messing with religious institutions, Jesus was kind of asking for it, either intentionally or in a FAFO way. Most places and times did not have strong freedom of speech norms, and Jesus would have fared little better if he had criticized dominant religious practices in pretty much any culture. If he had tried his little stunt in front of the temple of Athena or Saturn or Odin or a medieval cathedral or in early Boston or in front of a mosque in contemporary Tehran or Riyadh or in front of some Buddhist temple in Myanmar, he would have fared little better. Sure, in today's Western world, he might have gotten away with just a night in a prison cell and a fine (or no penalty at all if he had opted to practice his free speech by just demonstrating with a sign "God hates money-changers"), but of all the atrocities committed in the name of Rome, the killing of Jesus likely does not even make the top million.
Implying that this vastly destructive war that killed 60 million people could or should have been handled differently or, God forbid, avoided is basically heresy.
I do not think that saying "Hitler should not have attacked Poland" is very controversial, so you are likely not talking about what the Nazis could have done differently. In fact, the Western Allies tried to avoid the war by appeasing Hitler, because nobody was keen on repeating WW1. Now, you can argue that the UK and France should just have sat this one out, watching from the sidelines as Hitler takes Western Poland and then invades the USSR. Sure, that would have avoided the Blitz and the invasion of France -- or more accurately postponed them until Hitler was done with the East, but the immensely destructive war on the East front would still have happened. What is your recipe for avoiding that one? The USSR retreats to Siberia and lets Hitler take Moscow?
Nor is it very controversial that Stalin was not a nice person and it would have been better if he had behaved differently.
In the particulars, the behavior of the Western Allies is also substantially criticized. For example, ACOUP on strategic air power:
I must admit I do not generally extend this charity to fellows like Arthur Harris or Curtis LeMay who were fairly explicit that their goal was to simply kill as many civilians as possible in order to end the war.
Or take the Internment of Japanese Americans:
In 1983, the commission's report, Personal Justice Denied, found little evidence of Japanese disloyalty and concluded that internment had been the product of racism. It recommended that the government pay reparations to the detainees. In 1988, President Ronald Reagan signed the Civil Liberties Act of 1988, which officially apologized and authorized a payment of $20,000 (equivalent to $53,000 in 2024) to each former detainee who was still alive when the act was passed.
I think that the job of housewife is on its way out, and has been on its way out for the last century.
Back in 1800, with no washing machines or fridges, it was a full-time job to take care of the needs of a family (especially as family size was large due to lack of contraceptives). A man (or anyone) who worked full time simply did not have the time to take care of washing his clothes and cooking his meals.
Luckily, we made these chores much less time-consuming and freed women to do more useful work. And they do. There are mothers who are teachers, physicians, clerks and a myriad of other professions.
Naturally, the markets (especially housing) have reacted to this reality (plus a ton of other factors), and the age where you could raise a family with a single income from not-highly-specialized labor is over.
As you point out, social changes have made the strategy of just marrying a man and relying on him to provide for you high-risk, because if he is rich enough to pay for you to stay at home and watch the kids, he is likely also rich enough to replace you with a younger, more attractive woman in a decade or two.
I think that a big point of both men and women going to college is the signaling value both towards employers and towards potential mates. Roughly, the same qualities which are valued in an employee (somewhat smart, willing to submit to an institutional system, ability to achieve long-term goals, etc) are also good qualities in a partner. A degree, especially in a strongly regulated field like law or medicine, will significantly update your estimate on the earning potential of a person. Then there is education as a mark of social class. A man from a family of academics will probably not marry someone who dropped out of high school. (Sure, there will always be some men who prefer to marry 18yo village girls, but "I will just wait for some Trump-like man to marry me" will not work for the vast majority of them.)
I agree that there are probably bullshit degrees pursued by women who really want to graduate college with an MRS degree, but I think that the answer is not to cut down on women in college, but to push degrees which can actually earn money.
The pattern "Earn a degree, get pregnant at 30 and then become a stay-at-home mum" is obviously not very efficient. But I don't think we will go back to "get pregnant at 20 and then become a housewife". What society should aim for is "Earn a degree, get pregnant at 30 (if you want), re-enter the workforce a few years later (e.g. part-time)".
I think that sourcing the basics, e.g. a breadboard, wired resistors, capacitors, LEDs, jumper wires, some opamps, is not that hard.
Amazon or (in Germany) Conrad have you covered there (if you don't mind overpaying compared to what the parts would cost in bulk).
If you increase your budget to $200, then different people will want very different things. Matrix LCDs, TTL logic chips, myriads of sensors, servos. Some will want passive SMD components (with different preferences for size).
And in that stage, they probably also want components which are not sold by Conrad, which is when things get painful.
There are, of course, companies which carry zillions of electronic components, e.g. Farnell, Mouser, RS, Digikey. Their stock is well curated, and you can filter on dozens of criteria until you end up with what fits your needs. In fact, having used these websites, I have come to despise the shopping experience on Amazon, where little in the way of curation happens and accessories for X regularly appear in the category X.
Alas, these electronics vendors do not typically sell to hobbyists. Presumably, cutting five chips from a reel and packing them for sale is not in itself very profitable, merely a prerequisite to eventually selling whole reels to companies. Unlike corporations, private persons rarely scale their projects up to the point where serious money gets spent, and complying with consumer protection regulations is just not worth it.
So you sometimes find yourself in the situation where you know that four different companies carry the chip you want, but none of them want to sell to you. (These days, it might be possible that you can get it from China, if you don't mind the wait, though.)
I think that "write an effortpost on substack/LW/reddit/tumblr/..." might actually be a fun essay assignment (even if it would be hard to grade if the teacher lacks subject knowledge).
I think that one problem with essay assignments is that the student is typically aware that it is extremely well trod ground. Generations of students before them have written about theme X in book Y. The chances that they will make a point which will cause the teacher -- the one person who will (optimistically) read their essay (unless they also leave the grading to an LLM) -- to actually wake up and go "wait a minute, this is new" are very slim.
"Everything has been said before, but not yet by everyone" and all that.
It is like tasking someone with simulating sexual intercourse with a sex doll and then being surprised when they do not show a lot of effort.
To be blunt, college hasn't been about education for a very long time, and it strikes me as hilarious that anyone who attended one writes these sorts of handwringing articles bemoaning the decline of education in college. 99% of the students who were ever in university (perhaps with the exception of the leisure class) never went to college seeking education for the sake of education. For most of us, it's about getting job skills, getting a diploma, padding a resume, etc. If learning happens on the side, fine, but most people are looking at college as a diploma that will hopefully unlock the gates to a good-paying job.
While I can only speak for myself, I studied a STEM subject because I was genuinely interested in it. Sure, the fact that STEM people usually find well-compensated work was a consideration, but not the major one. I certainly did not research which subject would have the highest expected salary. I also embarked on a lengthy PhD for rather meager pay, but I was fine with that.
Some of the stuff I learned as a student I get to use in my job, while some other stuff I sadly/luckily do not have reason to use. And as usual, a lot of the relevant skills I picked up outside class.
I am also somewhat privileged in that my parents paid for my education (i.e. the cost of living in a small room for 5+ years -- universities themselves are almost free in Germany). But I never felt I was attending just for the signaling value of the diploma.
Have private businesses operate the dorms and cafeterias (plural, they need to compete) and let students live off-campus the moment they want.
This is how we generally do things in Germany, to a large degree.
Okay, almost. The "Studentenwerk" (a government-sponsored citywide institution) typically runs a canteen on campus and also provides low-end housing significantly below market value (typically off-campus, though), but they are legally distinct from the university, and students are not required to interact with them in any way (besides paying a minimal fee, perhaps). Plenty of students rent private rooms or flats and prefer private food vendors.
No electronics other than sometimes a scientific calculator. No graphing calculators, since they can be programmed with the relevant formulae, including fake screens that claim all memory has just been wiped.
Hot take: calculators are for experimental physics exams. In mathematics, they should not be required. If the exam is about multiplying five-digit integers, then a calculator would defeat the purpose of the task. If the exam is about integration, then you can easily make sure that there will not be a lot of five-digit integers to multiply.
Granted, some math classes are mostly to enable students to use calculators for their science classes. So sure, if the point is to learn to calculate logarithms with a calculator, you require a calculator -- no point in having students learn to use a slide rule. Likewise, for basic probability theory, a calculator will make a lot more practical applications accessible.
For my last two years of high school, Texas Instruments had somehow convinced my school board that their graphing calculators were great and educational. Our final tests featured tasks such as "determine the approximate root of this function with the graphing calculator". We did not cover a lot of math in those two years. I like to hope that graphing calculators are not a thing any more (a smartphone can do anything such a calculator can do, but much better), but if they still are, I would implore any school board deluded enough to think they help teach math to at least make it a priority that the devices they mandate come with a decent programming language (LISP, Python, Haskell, Perl, whatever), so that kids do not have to waste two years programming in TI BASIC instead of paying attention in class.
Wrt strict liability, there is a whole 60-page Lawcomic arc about it.
I am mostly on board with Nathan there. Strict liability for regulatory offenses seems bad, and relying on luck / selective enforcement / prosecutorial discretion to keep people who collect a few feathers out of jail seems bad.
My main disagreement with that arc is DUI. For one thing, the offense is not hidden in some law about fishery regulations that nobody has read; you get told about it when you train for your driving license. For another, when driving a car we actually expect people to pay close attention to staying within the regulations they were trained on. "Yes, I should have stopped at that left-yields-to-right intersection, but you see, I just assumed that there was no car coming from the right and did not look, so I clearly lack mens rea" or "Officer, my speedometer is broken. I thought I was within the speed limit" will not fly, so why should "I know that DUI is a crime, and I know that I had a few drinks, but I was under the impression that I was slightly under the BAC limit"?
A similar objection could be made to the argument against statutory rape. Everyone knows that people presenting as young adults come in two flavors, "jailbait" and "legal". Anyone who has sex with such a person without verifying their category is taking a calculated risk. There might be other arguments against that law, but the fact that the person committing the offense could not possibly have known rings hollow to me.
If (1), (2), and (3) are true, then something like UBI can be seriously considered and we can all live in Fully Automated Luxury Gay Space Communism.
This is similar to a point made on LW a few weeks ago, as a critique to the national security framing of ASI.
Almost none of the people who are likely to build ASI are evil on a level where it would matter in the face of a technological singularity. At the end of the day, I don't care much how many stars are on the flags drawn on the space ships which will spread humanity through the galaxy. Let Altman become the God-Emperor of Mankind, for all I care. Even if we end up with some sick fuck in charge who insists on exclusively dining on the flesh of tortured humans, that will not really matter (unless he institutes a general policy of torturing humans).
Who is the first to build AI matters only if
(1) AI alignment is possible but difficult, or
(2) AIs will fizzle out before we get to post-scarcity.
Of course, both of these are plausible, so practically we should be concerned with who builds AI.
I think a plateau is inevitable, simply because there’s a limit to how efficient you can make the computers they run on. Chips can only be made so dense before the laws of physics force a halt. This means that beyond a certain point, more intelligence means a bigger computer. Then you have the energy required to run the computers that house the AI.
While this is technically correct (the best kind of correct!), I do not think it is very practically relevant. @TheAntipopulist's post did imply exponential growth in compute forever (i.e. linear on a log plot), while filling your light cone with classical computers only scales as t^3 (and building a galaxy-spanning quantum computer with t^3 qubits would have other drawbacks and probably also not offer exponentially increasing computing power).
Imagine Europe ca. 1700. A big meteor has hit the Earth and temperatures are dropping. Suddenly a Frenchman called Guillaume Amontons publishes an article "Good news everyone! Temperatures will not continue to decrease at the current rate forever!" -- sure, he is technically correct, but as far as the question of the Earth sustaining human life is concerned, it is utterly irrelevant.
A typical human has a 2lb brain and it uses about 1/4 of TDEE for the whole human, which can be estimated at 500 kcal or 2092 kilojoules or about 0.6 KWh. If we’re scaling linearly, if you have a billion human intelligences the energy requirement is about 600 million KWh.
I am not sure that anchoring on humans for what can be achieved regarding energy efficiency is wise. As another analogy, a human can move way faster under their own power than their evolutionary design specs would suggest, if you give them a bike and a good road.
Evolution worked with what it had, and neither bikes nor chip fabs were a thing in the ancestral environment.
Given that Landauer's principle was recently featured on SMBC, we can use it to estimate how much useful computation we could do in the solar system.
The Sun has a radius of about 7e8 m and a surface temperature of 5700 K. We will build a slightly larger sphere around it, with a radius of 1 AU (1.5e11 m). Per Stefan–Boltzmann, the radiation power emitted from a black body is proportional to its area times its temperature to the fourth power, so if we increase the radius by a factor of 214, we can reduce the temperature by a factor of sqrt(214), which is about 15, and still dissipate the same energy. (This gets us 390 K, which is notably warmer than the 300 K we have on Earth, but plausible enough.)
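The shell temperature follows in two lines (values as given above):

```python
import math

# Equilibrium temperature of a 1 AU shell radiating the Sun's output.
# Stefan-Boltzmann: P ∝ A·T^4 and A ∝ r^2, so at constant P, T ∝ r^(-1/2).
r_sun = 7e8       # m, solar radius
r_shell = 1.5e11  # m, 1 AU
t_sun = 5700      # K, solar surface temperature

scale = r_shell / r_sun             # ~214
t_shell = t_sun / math.sqrt(scale)  # ~390 K
print(round(t_shell))
```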
At that temperature, erasing a bit costs about kT ln 2 ≈ 3.7e-21 J; call it 5e-21 J to leave some slack. The luminosity of the Sun is 3.8e26 W. Let us assume that we can only use 1e26 W of that, a bit more than a quarter; the rest is not in our favorite color or is required to power blinkenlights or whatever.
This leaves us with 2e46 bit erasing operations per second. If a floating point operation erases 200 bits, that is 1e44 flop/s.
Let us put this in perspective. If Facebook used 4e25 flop to train Llama-3.1-405B, and they required 100 days to do so, that would mean that their datacenter offers about 5e18 flop/s. So there are roughly 25 orders of magnitude -- still in the general vicinity of Avogadro's number -- between what Facebook is using and what the inner solar system offers.
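The whole comparison fits in a few lines (the 4e25 flop and 100-day figures are the assumptions stated above, not Meta-published wall-clock numbers):

```python
# Landauer-limited compute budget of the 1 AU shell vs. the assumed
# Llama-3.1-405B training run (4e25 flop over 100 days).
e_bit = 5e-21   # J per bit erased at ~390 K (kT·ln2 ≈ 3.7e-21, rounded up)
power = 1e26    # W, usable fraction of the Sun's 3.8e26 W output

bit_ops = power / e_bit       # ~2e46 bit erasures per second
flops_shell = bit_ops / 200   # ~1e44 flop/s at 200 bit erasures per flop

flops_llama = 4e25 / (100 * 86400)  # ~5e18 flop/s sustained
ratio = flops_shell / flops_llama
print(f"{flops_shell:.0e} vs {flops_llama:.1e} flop/s, ratio {ratio:.0e}")
```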
Building a sphere of 1 AU radius seems like a lot of work, so we can also consider what happens when we stay within our gravity well. From the perspective of the Sun, Earth covers perhaps 4.4e-10 of the sky. Let us generously say we can only harvest 1e-10 of the Sun's light output on Earth. This still means that Zuck and Altman can increase their computation power by about 16 orders of magnitude before they need space travel, as far as fundamental physical limitations are concerned.
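The Earth-bound variant, with the same Landauer assumptions and a present-day cluster pegged at ~5e18 flop/s (per 4e25 flop over 100 days):

```python
import math

# Geometric fraction of the Sun's output intercepted by Earth, and the
# Landauer-limited compute headroom over a ~5e18 flop/s training cluster.
r_earth = 6.37e6   # m, Earth radius
au = 1.5e11        # m, Earth-Sun distance
l_sun = 3.8e26     # W, solar luminosity

frac = (math.pi * r_earth**2) / (4 * math.pi * au**2)  # ~4.5e-10
power = 1e-10 * l_sun          # generously rounded down, as in the text
flops = (power / 5e-21) / 200  # Landauer-limited flop/s (5e-21 J/bit, 200 bits/flop)
orders = math.log10(flops / 5e18)
print(f"fraction {frac:.1e}, headroom ~{orders:.0f} orders of magnitude")
```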
TL;DR: just because hard fundamental limitations exist for something, it does not mean that they are relevant.
And yes, I know that "AI" is still a misnomer. I understand that LLMs are just token predictors, and I think people who believe that any neural net is close to actually "thinking" or becoming self-aware -- or who ask "really, what are we but pattern-matching echolalic organisms?" -- are drinking the kool-aid.
I am kind of in the middle ground between "they are just stupid stochastic parrots, they don't think!" and "obviously they will develop super-intelligent subagents if we just throw more neurons at the problem!", while I suspect that you are a bit more likely to agree with the former.
The latter case is easy to make. If you train a sufficiently large LLM on chess games written in some notation, the most efficient way to predict the next token will be for it to develop pathways which learn how to play chess -- and at least for chess, this seems to mostly have happened. Sure, a specialized NN whose design takes the game into account will likely crush an LLM with a similar amount of neurons, but nevertheless this shows that if your data contains a lot of chess games, the humble task of next-token-prediction will lead to the model learning to play chess (if it can spare the neurons).
By analogy, if you are trained on a lot of written material which took intelligence to produce, it could be that the humble next-token-predictor will also acquire intelligence to better fulfill its task.
I will be the first to admit that LLMs are horribly inefficient compared to humans. I mean, an LLM trained on humanity's text output can kinda imitate Shakespeare, and that is impressive in itself. But if we compare that to good old Bill, the latter seems much more impressive. The amount of verbal input he was trained on is the tiniest fraction of what an LLM was trained on, and Shakespeare was very much not in the training set at all! Sure, he also got to experience human emotions first-hand, but having thousands of human life-years worth of descriptions of human emotions should be adequate compensation for the LLM. (Also, Bill's output was much more original than what an LLM will deliver if prompted to imitate him.)
Of course, just because we have seen an LLM train itself to grok chess, that does not mean that the same mechanism will also work in principle and in practice to make it solve arbitrary tasks which require intelligence, just like we can not conclude from the fact that a helium balloon can lift a post card that it is either in principle or in practice possible with enough balloons to lift a ship of the line and land it on the Moon. (As we have the theory, we can firmly state that lifting is possible, but going to the Moon is not. Alas, for neural networks, we lack a similar theory.)
More on topic, I think that before we will see LLMs writing novels on their own, LLMs might become co-authors. Present-day LLMs can already do some copy-editing work. Bouncing world-building ideas off an LLM, asking 'what could be possible consequences of some technology $X for a society', might actually work. Or someone who is skilled with their world-building and plotlines but not particularly great at finding the right words might ask an LLM to come up with five alternatives for an adjective (with connotations and implications) and then pick one. This will still not create great prose, but not everyone reads books for their mastery of words.
Now, I hate the NYT as much as anyone, but the first paragraph after your no way out quote says:
This is the real threat, not the squabbling over federal funds. Harvard might swim in cash, but they also live off their ability to draw in the best students from half the world. For billions of people worldwide, the answer to the question "Where would you study if you were super-smart and wanted to win a Nobel?" is "Ivy League, or a few prestigious state-run universities in the US". In the future, the answer for all but 340M (plus Canadians, perhaps?) will change to "... except Harvard, which does not take international students."
My understanding of the US private universities is that their students are either very rich and smart or brilliant and on a stipend. It is a symbiotic relationship: the rich student pays for both of them getting a prestigious, excellent education, and the brilliant student makes sure that the prestige of the university is maintained.
About 27% of Harvard's students are international (a lower number than I would have expected). I think that the "rich and smart" internationals can be replaced without too much trouble, you would not have to lower standards very much to find still very smart Americans willing to pay for the privilege of studying at Harvard. I did not find what fraction of students is studying for free at Harvard, never mind how many of them are internationals, but I suspect that the overall fraction of students on a stipend is small, and that a significant fraction of them are internationals. Replacing these with US nationals will likely hurt.
Also, there are cascading effects. If you are a brilliant young American, would you rather go to a university where you can meet the best minds of your generation (or so they would claim), or one where you can only meet the best US minds of your generation who do not care about that very fact?
The obvious reaction (if the courts uphold Trump's decision) for Harvard would be to announce that it is opening a branch in Canada, but that is not easily done.