RandomRanger
Just build nuclear plants!
Admittedly, almost none of these extremely well-dressed young people look like they can fight in their outfits, or do rock-climbing in heels... Form is clearly privileged over function.
It is very much a game for normies, you play as a pure-good hero. Sometimes the saccharineness of it all does get a bit over the top. But it is remarkable to think how far it can go into horror only for the artstyle to make it not-really-horror.
Well, that still wouldn't work, since they would have caught up to the big ship once it ran out of fuel, attacked it, discovered the crew were mostly absent, and then traced it back to Crait.
However, you're clearly right. I'm surprised the cloaking/stealth angle wasn't in the extensive plot description on the wiki I checked first, before AI confirmed your point: https://starwars.fandom.com/wiki/Star_Wars:_Episode_VIII_The_Last_Jedi
If the plan was 'escape to not-Hoth and hole up in a fortress against much stronger space and ground opponents' then it may as well not exist. I don't recall any reference to cloaked ships either.
The amount of content in Genshin Impact is absolutely staggering. I know everyone thinks of it as just a gacha but it's gigantic compared to full-price open world rpgs too.
It's got a huge map with lots of verticality and ambient content, puzzles and regional features. It actually looks pretty, unlike Skyrim. A tonne of characters with their own unique abilities. Mechanical complexity beyond just stacking on more attack.
And the story just goes on and on and on. There's a fair bit of BS they put in like 'you NEED to sit through endless tedium with Zhongli gathering ingredients for this ritual before you can go to not-Japan', clearly they want to do artsy character-development and worldbuilding stuff rather than just gacha moneygrabbing. That's just the main story, which is well over 100 hours at this point.
Then there are heaps of character side missions which are also long and voiced. And then dozens and dozens more area missions which are unvoiced but are still long with plenty of cinematics. Or just exploring the huge map, that would take ages.
There are more limited-time events than you can shake a stick at. Player housing. A card game. Really the only things missing are deep endgame like Path of Exile's mapping system and proper modding.
There is however a grindy element to it, you have to kill a bunch of bosses over and over again and pick huge numbers of flowers to level your characters and advance through the story. Surprisingly it seems you can't even skip this by paying, which seems odd to me given it is gacha. You're supposed to pay to get the character you want, not to skip the grind. I refuse to pay either way.
But WTF were Starfield devs doing with their 400 million if the Chinese can make something so huge for 800 million, get whales to pay for it all and have it actually be good too? Does Call of Duty Black Ops really cost nearly that much to make?
Apparently Genshin Impact is second only to Monopoly Go in cost, the latter is pure marketing: https://en.wikipedia.org/wiki/List_of_most_expensive_video_games_to_develop
In 2025, the game surpassed $1 billion in marketing spending since its launch.[12] The game generated $5 billion by April 2025.[3]
What a sad state gaming is in, Monopoly Go makes considerably more money. There's always sloppier slop.
It's especially remarkable because she was telling them all to trust the plan.
But she had no plan. We are informed in episode 9 that the 'Holdo Maneuver' of FTL ramming was 1 in a million, it can't be reliably repeated. So she was really just trying to flee, only to get spectacularly lucky.
Such a pity that both Bizonacci and Leonardo of biz vanished from the earth. So much was lost when they messed with the captcha rules on /biz/ under admittedly severe pressure from shills, the board is barely a shadow of what it was. Someone made a full Chainlink waifu picker CYOA! Someone made a thematically-appropriate Chainlink RPGmaker game where you wait around for ages. NFTs that were just a printout of the amount of gas you paid to get them. Tasteful monerochan lewds. All these memories lost, like tears in the rain.
You have succeeded where AI and myself have failed.
Maybe he has other aspirations?
"Desolate Ancient Moon looked at him with pity, but her voice was tranquil: 'Rockman, I did not want to kill you. But you blocked my path to success.'"
Tao Zhu also comes to mind on the topic of RI and blowing a cute wife and huge wealth... Also, people can just be dumb without higher meaning in their actions.
What predictions does he make that you think are wrong?
Arctotherium says this regarding AI:
For example, much writing on AI accurately points out how reliant the US AI industry is on foreign talent, with around 70% of high-end researchers being foreign-born, and then condemns the Trump administration for hostility to immigration. But it typically fails to point out the tiny numbers involved.
We’re talking maybe 10,000 people total in the entire world, with annual fluxes into and out of the US, including during the open-borders Biden years, in the hundreds. It is entirely possible to recruit as much of this talent as is willing to move to the United States while cutting skilled immigration by 99%, and we should. OpenAI technical staff and people Mark Zuckerberg is willing to pay a hundred million dollars to recruit are not generic H-1Bs or foreign students, and conflating the two is dishonest.
Almost all asteroids are worse sources of metals
Yeah, but some of them are much better sources of minerals than any mine on earth. Iridium is hard to acquire here, for one thing.
Apparently this one is pretty rich: https://en.wikipedia.org/wiki/(6178)_1986_DA
The asteroid achieved its most notable recognition when scientists revealed that it contained over "10,000 tons of gold and 100,000 tons of platinum", or an approximate value at the time of its discovery of "$90 billion for the gold and a cool trillion dollars for the platinum, plus loose change for the asteroid's 10 billion tons of iron and a billion tons of nickel."[10] In 2024 the estimated value of 100,000 tons of platinum was worth approximately 3.4 trillion US dollars. The delta-v for a spacecraft rendezvous with this asteroid from low Earth orbit is 7.1 km/s.[11]
Even Australian production of iron ore is barely a billion tonnes per year, that's a lot of iron. Iron is at least digestible by the world economy whereas there'd be a glut of gold and platinum.
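A back-of-envelope check of the quoted platinum figure. The price is my assumption, not from the article: roughly $1,050 per troy ounce as a 2024 ballpark.

```python
# Sanity-check of the quoted "$3.4 trillion for 100,000 tons of platinum".
# Assumption: 2024 platinum price of roughly $1,050 per troy ounce.
TROY_OUNCE_G = 31.1035
platinum_tonnes = 100_000
price_per_ozt = 1_050  # USD, assumed ballpark

grams = platinum_tonnes * 1_000_000
troy_ounces = grams / TROY_OUNCE_G
value_usd = troy_ounces * price_per_ozt
print(f"~${value_usd / 1e12:.1f} trillion")  # ~$3.4 trillion
```

Which lands right on the Wikipedia estimate, so the quoted number is at least internally consistent, glut effects on the price aside.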
I have no problem with waiting. Personally I think that leaving Earth's orbit pre-fusion propulsion is silly. But with fusion propulsion lots of opportunities are opened up, one scarcely needs to worry about delta-v within the solar system.
I mean this is just ridiculous:
On 28 February 2020, NASA awarded SpaceX a US$117 million contract to launch the Psyche spacecraft, and two smallsat secondary missions, on a Falcon Heavy rocket.[48] The spacecraft was successfully launched on 13 October 2023, at 14:19 UTC,[49] with an expected arrival in 2029.
6 years! And by the time you get there nothing can be done, chemical rockets are the astronomical cuck chair. You just get to watch the asteroid tumble on.
For decades, our biggest strategic advantage has been that the smartest, most ambitious people from all over the world wanted to come here.
There's a relevant essay from Arctotherium on this, you don't have to have mass immigration to bring in the top Taiwanese semiconductor experts, or German nuclear scientists or post-Soviet Russian STEM experts. You can bring in a few hundred or a few thousand people on 10x wages, have them stay for a few years to teach locals the skills and then have them leave or retire into obscurity.
China for instance brought in South Korean shipbuilding experts on high wages, worked out how to build ships and now dominates the world shipping industry. They tried this with semiconductors too, Taiwan actually passed laws to stop Chinese companies poaching semiconductor talent with high pay. Meiji Japan did this too, alongside others he mentions. Targeted skill acquisition does not require mass immigration.
https://substack.com/home/post/p-169701612
The US is very wealthy, they could close the door to the median-wage immigrants and keep the top talent, even aggressively headhunt top talent with high payouts. Not 'I published a crappy paper in one of those journals that exists for resume packing' but 'I'm actually really smart and have these rare skills'.
Furthermore, there are all kinds of problems with relying on mass immigration.
There is indeed a large amount of Indian talent, I see Indian names on various AI papers regularly. So why isn't India rich or at least on par with China? There's no Indian Deepseek, Huawei, BYD, J-20. There may well be something wrong with Indian culture or society that impedes this kind of development. Mass immigration would likely import this problem to some extent.
Suppose there's a disaster in America, it's one of those situations where all hands need to be on deck for a massive crisis. Would the Indians, Chinese, Latin Americans perhaps think 'not my problem' and head back to their home countries rather than giving their utmost? If they leave their country for a better life once, they can do it again if the situation changes.
Whatever issues with unity there are in America, it's hardly going to be helped by mass immigration. More ethnicities and diversity increases the potential for conflict. There are also the more basic costs of unfiltered 'Fuck Trump' mass immigration of randoms who come in via Mexico: drugs, crime, welfare payments, gaming the electoral system, demographic replacement.
Now it's fairly reasonable that some truly elite people will be turned off by the administration's rhetoric, even if the Trump admin did go 'we want the super smart but not the mediocre'. They might not want to come to America because overseas mainstream media blares out FASCIST USA. But it's not clear that this would be that bad compared to mass immigration.
We can see the results: Australia, Canada and the UK have been doing mass immigration. Racism has been suppressed by hate speech laws. The economic results/innovation in these countries have been underwhelming at best. Canadian GDP per capita has stagnated over the last 10 years. Britain is mired in all kinds of problems.
The strongest argument against Trumpism IMO is that it puts these loudmouths in charge, who go around openly declaring their strategies and letting their opponents counter them: https://x.com/Jukanlosreve/status/1958334108989530207
They're simple and unsophisticated thinkers in a complex world.
But even there, you don't have to be loud and obnoxious to be dumb. The EU is full of sober, hard-working, reasonable and civilized leaders who do immense damage to Europe by constantly making terrible decisions.
I remember reading the first two books too, it was strange. Who gets misty-eyed about a red desolate wasteland? Bring forth the water!
Also I think that terraforming Mars is a red herring. Are we really short of lebensraum on Earth? Easier to build cities and extract resources in Canada, Antarctica, the deep oceans, Russia, the Sahara.
O'Neill cylinders are also a good option. You can put them anywhere.
Expansion into space should be with a definite, clear objective. What about Mercury, is there not a tonne of solar power there? Should we not put heavy industry there, or perhaps in Lagrange points closer to the sun? There are resources in the asteroids, let's get them. Let's get offworld certainly, advance as a civilization, secure Mars... but only with good reason. The costs must be outweighed by the benefits.
And why assume that we need Mars to be compatible with organic life for us to be there? It's probably easier to get robotic or otherwise hardened bodies than it is to make Mars a credible place for settlement.
There's a good Nick Land essay about this where he argues that space exploration is really about planetary disassembly by posthuman intelligences rather than homestead frontier LARPing. But the true vision can't be sold to the voters and politicians since it's too Nietzschean. Alas, I cannot find it.
Perhaps I was too flippant with the 'There are heaps more' applications for AI. I get this newsletter from alexander kruel almost daily where he gives a tonne of links about what people are using AI for. For example:
Interviewing people in the Philippines (better than humans, apparently). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5395709
62% of coders in this survey are using it: https://survey.stackoverflow.co/2024/ai
76% of doctors are using it: https://www.fiercehealthcare.com/special-reports/some-doctors-are-using-public-generative-ai-tools-chatgpt-clinical-decisions-it
It's thought that the US govt might've decided what tariffs to impose via AI: https://www.newsweek.com/donald-trump-tariffs-chatgpt-2055203
It goes on and on and on...
I've personally used it for proofreading, and indeed it can't do all of an editor's job. Editors do lots of highly visual tasks managing how words fit on the page in ways that AI isn't so good at. But it can do some of an editor's job. It can do much of a cartoonist's job (Ben Garrison is in the clear for now with his ultra-wordy cartoons?). I think it's more than a fast drunk college student, and more than meaningless drivel.
What are you talking about? A rational civilization will want to grow. They'd seek access to more resources. Exponential growth in population demands it.
it doesn’t address that when kids are optimized, parents want something back from that
They could legislate and move against zero-sum competitions, especially if they're a civilization composed of geniuses. We can avoid zero sum competitions and handle collective action problems sometimes. So can they. Imagine they've been through these cycles and traumas and declines many times, their history is thousands of years longer. They'd learn eventually.
A powerful civilization is not South Korea with a few more fancy gadgets, just like we are not Ancient Egypt with combustion engines. The whole structure of their society would have developed to fit with their technology base. They would be on a whole other level to us.
Perhaps there are no families and engineers are in charge of making children by carefully splicing together genes, there are no parents, only technical factors, input and output. Perhaps they're educated and raised in a series of simulations carefully orchestrated by AI so they have excellent skills and character. Perhaps they're uploaded beings that can reproduce in a tenth of a second, printing out bodies like clothes.
A conservative assumption is that they'd have biological immortality which renders fertility much less relevant.
LLMs are value-neutral, it's all about how they're used.
I was just doing some RP with one, exploring a silly concept, inventing the rules along with Claude. You can tell when the LLM is actually enthusiastic about it and when it's just phoning it in. (With Claude, you know it's getting real when the cat ASCII art starts coming out unprompted).
People might say 'oh this is cringe slop'. There were indeed a heap of em dashes. But you don't actually see the em dashes if you're smiling.
Perhaps 99/100 alien civilizations succumb to silly governance. But if they're capable of reaching us then we should assume they're actually competent.
An actually competent civilization is nothing like ours. Actually competent civilizations would go all in on eugenics the moment they came up with it, cloning too. Actually competent civilizations would spend surplus wealth not on subsidizing boomers or makework jobs but on building out infrastructure, investment, R&D. They'd do things we wouldn't even think of but would make sense in retrospect, they take all the low-hanging fruit and the high-hanging fruit too.
A popular sci-fi writer doesn't actually hold universal deep wisdom, he just produces fiction we find interesting. 'Nobody can figure out how to program common sense' is a fun, self-congratulatory fictional idea. But it's not actually true. It was based on an old paradigm and has been disproven recently, regardless of how much people might want it to be true.
There are all these potential objections like 'what if optimizing for IQ results in a nation of 'gifted' child prodigies who burn out in adulthood'? Sounds like a clever objection but there's no actual truth behind it in and of itself. You could adjust your education strategy for this, test, iterate, improve...
'Maybe all this AI stuff is just a great big bubble' is another tale people want to be true. Maybe it is true, perhaps there's some hard wall that scaling, algorithmic improvements, synthetic data and so on just can't surpass. I wouldn't bet on it.
No, you cannot avoid hagwon and gaokao if you have designer babies.
Why not? You could structure the economy such that it wasn't just a few chaebols who dominate everything. You could give affirmative action to applicants with siblings. There are any number of things that a country could do. They could give the top student in the exam a harem and tell him to produce 50 kids.
A powerful alien civilization has no need for us as contributors. A few billion low IQ humans are quantitatively and qualitatively inferior to whatever they could cook up with local resources. They would be rightly wary of disrupting their hyperefficient status quo with foreign blood.
If aliens are here, they're doing research to better understand social dynamics, because if there are even marginal gains in better understanding the universe, they'll pay that cost.
Do we have a pretty good approximation for macro phenomena?
Absolutely not. Dark matter and dark energy make up 95% of the universe.
If they can do interstellar warfare, they should be capable of ASI or at least mass-cloning of geniuses with the same biology. Maybe they have 'ethics' that block those two and they're trying invade-the-galaxy, invite-the-galaxy for political reasons?
But how likely is it for an advanced civilization to have such a flawed system of govt?
OK, how about losses or profits? Or 20008? I cited pubic law because it's funny, the other two are actually real examples from what I was getting it to do.
the Prime Minister's name is "Morrison" not "Morison"
I highly doubt Google docs could do tasks that require contextual understanding without some kind of LLM.
You say "they aren't selling $200 worth of inference for $20"; I say "are they selling $2 of inference for $20?"
Why don't we try to look into this? People have tried to estimate OpenAI's margins on inference, and they come away with strong margins of 30%, 55%, even 75%. We don't live in a total vacuum of information. When trying to work out their margins on inference, I base my opinion on that general established consensus.
they need wildly good margins on inference if they believe they'll never be able to cut the other fixed and variable costs
The demand for inference is rising, Openrouter records that demand for tokens rose about 30x in the last year as AI improves. Grow big enough and the margin on inference will outweigh the costs.
They are getting paid to do it
It's effectively free, they're 'selling' it for $1 per agency for a whole year. OpenAI is doing the same thing. Why are you trying to correct me on something you won't even check?
There is a significant difference between making a loss as you expand your business rapidly and try to secure a strong position in an emerging market and 'subsidized by 1-2 orders of magnitude'. No evidence has been supplied for the latter case and it's unbelievable.
Amazon wasn't making a profit because they were continuously expanding and investing in their retail business, not because the actual business was unprofitable. Investors were happy to tolerate them not making profits because they were growing. Uber wasn't making a profit but there were no 10x subsidies. We can see this immediately in how taxis weren't costing $20 while Uber was costing $2 for the same trip.
Does anyone seriously think that these tech companies are selling $200+ worth of compute for $20? The natural assumption should be that they're making good margins on inference and all the losses are due to research/training, fixed costs, wages, capital investment. Why would a venture capitalist, whose whole livelihood and fortune depends on prudent investment, hand money to Anthropic or OpenAI so they can just hand that money to NVIDIA and me, the customer?
Anthropic is providing its services for free to the US govt but that's a special case to buy influence/cultivate dependence. If you, a normal person, mega minmax the subscription you might use more than you pay for but not by that much and the average subscriber will use less. Plus you might praise it online and encourage other people to use the product so it's a good investment.
What evidence points in this direction of ultra-benign, pro-consumer capitalism with 10x subsidies? It seems like a pure myth to me. Extraordinary claims require extraordinary evidence.
Take OpenAI. Sam Altman said he was losing money on the $200 subscription. But Sam Altman says a lot of things and he didn't say 'losing 10x more than we gain'.
The company has projected that it would record losses of about $5 billion and revenue of $3.7 billion for 2024, the New York Times reported in September. The company’s biggest cost is due to the computing power used to run ChatGPT. Not only does it require huge investments in data centers, it also demands vast amounts of electricity to run them.
If the company is losing roughly 135% of revenue ($5 billion lost on $3.7 billion earned, and Anthropic is similar), not 1000% or higher, then clearly it's what I'm saying, not what you're saying. Inference/API is profitable. User subscriptions are profitable. Investment is not profitable in the short term, that's why it's called investment. And they have their fixed costs... That's why AI companies are losing money, they're investing heavily and competing for users.
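For concreteness, the ratio implied by the NYT figures quoted above is easy to compute:

```python
# Loss-to-revenue ratio from the reported 2024 projections:
# ~$5B of losses on ~$3.7B of revenue.
loss_bn = 5.0     # billions USD, projected 2024 loss
revenue_bn = 3.7  # billions USD, projected 2024 revenue

ratio = loss_bn / revenue_bn
print(f"loss is ~{ratio:.0%} of revenue")  # nowhere near the 1000% a 10x subsidy implies
```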
Furthermore, one study of a selected group of coders doing a subset of software tasks with old models does not disprove the general utility of AI; it's not some major, decisive fact. I could quite easily find studies showing that AI produces productivity gains. That wouldn't mean it produces productivity gains in all settings, for all people, either.
Here's one such study for instance, it finds what you'd expect. Juniors gain more than seniors.
https://mitsloan.mit.edu/ideas-made-to-matter/how-generative-ai-affects-highly-skilled-workers
Or here he lists some more and finds productivity gains with some downsides: https://addyo.substack.com/p/the-reality-of-ai-assisted-software
The metr paper just tells (some) people what they want to hear, it is not conclusive any more than the other papers are conclusive. And a lot of people don't read the metr paper closely. For instance:
Familiarity and inefficiency in use: These devs were relatively new to the specific AI tools. Only one participant had >50 hours experience with Cursor; notably, that one experienced user did see a positive speedup, suggesting a learning curve effect. Others may have used the AI sub-optimally or gotten stuck following it down wrong paths.
And none of them will fix errors like 'pubic law'. It won't notice when 'losses of profits' should be 'losses or profits'. It won't call out a date of 20008.
How about proofreading a long document? You can get LLMs to go through page by page and check for errors like sate instead of state, pubic instead of public, dependent vs dependant...
That has to be the most boring and obvious application. There are heaps more.
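The page-by-page proofreading pass is simple to sketch. This is illustrative only: in a real version `check_page` would be an LLM call, while here it's a naive word-list stand-in just so the loop runs end to end (a word list can't handle genuinely context-dependent cases like dependent/dependant, which is exactly where the LLM earns its keep).

```python
# Illustrative page-by-page proofreading loop.
# Assumption: check_page stands in for an LLM call; this naive version
# only flags a few known real-word typos.
CONFUSABLES = {"sate": "state", "pubic": "public"}

def check_page(text):
    """Return (found, suggestion) pairs for known real-word typos."""
    return [(w, CONFUSABLES[w]) for w in text.lower().split() if w in CONFUSABLES]

def proofread(pages):
    """Walk the document page by page and collect flagged errors."""
    issues = []
    for num, page in enumerate(pages, 1):
        for found, suggestion in check_page(page):
            issues.append((num, found, suggestion))
    return issues

print(proofread(["The pubic law was passed.", "All is well."]))
# → [(1, 'pubic', 'public')]
```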
Or how about making cartoons? These aren't too bad: https://x.com/emollick/status/1920700991298572682
An LLM can make nice little toy python class or method pretty easily, but when you're getting into complex full stack development, all sorts of failure modes pop up
I'm using it for full stack development on a $20 plan and it works. I guess it depends on what you mean by complex full stack development, how complex is complex? I wouldn't try to make an MMO or code global air traffic controls with AI but it can definitely handle frontend (if supervised by a human with eyes), backend, database, API calls, logging, cybersecurity...
And sure it does fail sometimes with complex requests, once you go above 10K lines in one context window the quality lowers. But you can use it to fix errors it makes and iterate, have it help with troubleshooting, refactor, focus the context length on what's critical... Seems like there are many programmers who expect it to one-shot everything and if it doesn't one-shot a task they just give up on it entirely.
The metr paper is somewhat specialized. It tests only experienced devs working on repositories they're already familiar with as they mention within, the most favourable conditions for human workers over AI: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
Secondly, Claude 3.7 is now obsolete. I recall someone on twitter saying they were one of the devs in that study. He said that modern reasoning models are much more helpful than what they had then + people are getting better at using them.
Given that the general trend in AI is that inference costs are declining while capability increases, since the production frontier is moving outwards, then investment will probably pay off. Usage of Openrouter in terms of tokens has increased 30x within a year. The top 3 users of tokens there are coding tools. People clearly want AI and they're prepared to pay for it, I see no reason why their revealed preference should be disbelieved.
China has started on Biggest Dam (60 GW peak capacity, or about an entire UK's worth of annual electricity production if it works out): https://en.wikipedia.org/wiki/Medog_Hydropower_Station
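A back-of-envelope check of the 'entire UK' claim. The capacity factor is my assumption (roughly typical for large hydro), not a reported figure:

```python
# Rough check: can 60 GW of peak capacity match UK-scale annual output?
# Assumption: large-hydro capacity factor of about 55%.
capacity_gw = 60
capacity_factor = 0.55  # assumed
hours_per_year = 24 * 365

annual_twh = capacity_gw * capacity_factor * hours_per_year / 1000
print(f"~{annual_twh:.0f} TWh/yr")  # UK generates roughly 280-300 TWh/yr
```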
Why? The AI buildout is way bigger than anything else that's remotely feasible in the near-term. I'm a fan of nuclear fusion and nuclear generally, I think it'd be great to mine asteroids. Setting up largescale underwater mines would be cool. Doing something in Antarctica would also be good. Anything besides more welfare programs or endlessly increasing health costs, I'd welcome a big investment in anti-aging.
But I also have a sense of scale, AI is the front-page story even if people feel a bit tired of it. People talk about arcologies and they build data centres. The hyperscalers are spending about $200-300 billion on data centres annually. That's about one Apollo program every single year. The Medog hydro station is supposed to cost a mere $137 billion over 8 years. Even with a 3x blowout that's peanuts compared to AI. Microsoft alone is spending more than that.
What could be more visionary than bringing alien minds into existence? Elon made his fame as a hard-sciences guy with rockets, cars, tunnels but he's moved over into AI because of how important it is.
Come on, are Brits really going to pack up shop and go not to the North of England, not to the Welsh countryside or the highlands of Scotland or even the Falkland Islands... but Antarctica? Infamously uninhabitable Antarctica, with a kilometre of ice covering anything important, with seasonal accessibility, icebergs, vast distances to anywhere else? It's not like there's an asteroid's worth of minerals there.
Vision must be matched by cost-efficiency and prospective gains if it's to be anything but a pipe-dream. The cost of AI development is enormous but the potential gains are staggering. The cost of space colonization is perhaps slightly smaller but the gains aren't so great. While Western civilization underinvests in R&D and capital generally, it should be directed at the most high-leverage targets first.