
I'm generally convinced that at least getting the vaccines was sensible (and pushing their roll-out was good policy, though the compulsion to take them was deeply illiberal), but this data doesn't seem too compelling to me considering the obvious confounders. I would assume Red America to have a significantly larger number of old and unhealthy people with inadequate access to medical care.

It's also possible that we get a lone alien craft whose occupant was drawn here by an amateur radio signal after his home planet was destroyed in a nuclear war, which he survived because he was in the orbital guard at the time.

Based on a current understanding of physics, the only reason to launch an invasion would be to acquire the population as human capital for empire building.

That sounds awfully confident given that we can make no assumptions about the utility function of the aliens. Perhaps they just want to fuck koala bears and think wiping out humanity is easier than convincing them to leave 95% of Earth's land mass to the koala brothel project.

Perhaps they are the superhappy people, who would simply invade because Earth allows kids to experience pain on the level of stubbed toes.

Perhaps it is just a science fair project on conflict in the early nuclear age.

Even if you assume that any alien life could only have an instrumental interest in Earth, we have a ton of species besides humans. Not that capturing or subjugating a few billion humans will do them much good -- much easier to transmit the human genome and synthesize humans on their home world if they need humans for some weird reason.

>you meet the space elves

>they are hot

>Immediately they start calling you monkeigh

A couple of things:

The natural assumption should be that they're making good margins on inference and all the losses are due to research/training, fixed costs, wages, capital investment.

This is a fun way to say "If you don't count up all my costs, my company is totally making money." Secondly, I don't know why you would call this a "natural" assumption. Why would I naturally assume that they are making money on inference? More to the point, it's not that they need a decent or even good margin on inference; it's that they need wildly good margins on inference if they believe they'll never be able to cut the other fixed and variable costs. You say "they aren't selling $200 worth of inference for $20"; I say "are they selling $2 of inference for $20?"

Why would a venture capitalist, whose whole livelihood and fortune depends on prudent investment, hand money to Anthropic or OpenAI so they can just hand that money to NVIDIA and me, the customer?

Because this is literally post 2000s venture capital strategy. You find product-market fit, and then rush to semi-monopolize (totally legal, of course) a nice market using VC dollars to speed that growth. Not only do VCs not care if you burn cash, they want you to because it means there's still more market out there. This only stops once you hit real scale and the market is more or less saturated. Then, real unit economics and things like total customer value and cost of acquisition come into play. This is often when the MBAs come in and you start to see cost reductions - no more team happy hours at that trendy rooftop bar.

This dynamic has been dialed up to 1,000 in the AI wars; everyone thinks this could be a winner-take-all game or, at the very least, a power law distribution. If the forecast total market is well over $1 trillion, then VCs who give you literally tens of billions of dollars are still making a positive EV bet. This is how these people think. Burning money in the present is, again, not only okay - but the preferred strategy.
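To make that "positive EV bet" logic concrete, here is a toy calculation. Every number in it is invented for illustration; the point is only that a small chance of capturing a slice of a trillion-dollar market can dominate a multi-billion-dollar outlay.

```python
# Toy expected-value calculation for a winner-take-most VC bet.
# All numbers below are invented for illustration, not real figures.
investment = 30e9          # dollars committed across funding rounds
p_win = 0.10               # assumed chance the startup becomes the winner
market_value = 1e12        # forecast total market value (the "$1T+" case)
captured_share = 0.50      # share of that market the winner captures

expected_payoff = p_win * market_value * captured_share
expected_value = expected_payoff - investment
print(f"EV: ${expected_value / 1e9:.0f}B")  # prints "EV: $20B"
```

With these made-up inputs the bet is +$20B in expectation even though it loses 90% of the time, which is the shape of reasoning the comment is describing.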

Anthropic is providing its services for free to the US govt.

No, they are not. They are getting paid to do it because it is illegal to provide professional services to the government without compensation. Their federal margins are probably worse than commercial - this is always the case because of federal procurement law - but their costs are also almost certainly being fully covered. Look into "cost plus" contracting for more insight.

What evidence points in this direction of ultra-benign, pro-consumer capitalism with 10x subsidies? It seems like a pure myth to me. Extraordinary claims require extraordinary evidence.

See my second point above. This is the VC playbook. Uber didn't turn a profit for over a decade. Amazon's retail business didn't for over 20 years, and it still operates on thin margins.

I don't fully buy into the "VCs are lizard people who eat babies" reddit-style rhetoric. Mostly, I think they're essentially trust fund kids who like to gamble but want to dress it up as "inNovATIon!" But one thing is for sure - VCs aren't interested in building long-term, sustainable businesses. It's a game of passing the bag and praying for exits (that's literally the handle of a twitter parody account). Your goal is to make sure the startup you invested in has a higher valuation in the next round. If that happens, you can mark your book up. The actual returns come when they get acquired, you sell secondaries, or they go public ... but it all follows the train of "price go up" from funding round to funding round.

What makes a price? A buyer. That's it. All you need is for another investment firm (really, a group of them) to buy into a story that your Uber For Cats play is actually worth more now than when you invested. You don't care beyond that. Margins fucked? Whatever. Even if you literally invested in a cult, or turned a blind eye to a magic box fake product, as long as there is a buyer, it's all fine.

Heh, I'm old enough to have owned a pocket electronic spell checker at one point. The hash table seems the right way to do it these days, but it will take up more memory (640K shakes fist at cloud). And sometimes you do want to scan faster than the user types, like opening a new large file.

I am sorry to ask others to do the legwork for me, but I have vague memories of a thread about eugenics on Twitter (2018? very unsure) created by someone (possibly a woman?) that I'm hoping to find again. Their post was a series of polls that had examples and asked readers "do you consider this to be eugenics". Questions were along the lines of "Dave and Adam are a gay couple looking to conceive a child. With gene editing technology they can reduce the chance of their child having a debilitating disease by 90%. Is this eugenics? Is this good? What about a 30% chance?". Other similar questions with different couples and different setups. I hope to find these polls again because I remember the questions making me believe that people are for eugenics, they just won't say it. When I sat and actually pondered the questions I almost always ended up saying "yes this is eugenics and yes I support/would do this myself". I want to give the same "quiz" to close friends and see their response/reaction.

I love when people just project their favorite moral frameworks onto higher-intelligence aliens; no one considers that aliens could be yes-chadding highly intelligent speciesist nazis.

As far as I can tell, it doesn't even look as much like aliens as the earlier weird comet!

The sepoys enabled British control over east Africa, and fought the empire’s wars broadly. They weren’t just a local skirmisher force.

But sepoys for controlling east Africa weren't the reason for invading India either, which is the rather more important distinction for Britain's motives for going into India.

We have enough of the historical record to establish pretty unambiguous rationales for the East India Company's conquest of India, and 'to get forces to control east Africa' wasn't one of them. The British Empire might have cared about capturing markets for the sake of captive markets, and it absolutely engaged in slavery/don't-call-it-slavery in the process, but it just as definitively did not approach its empire building with the mindset of a Paradox strategy gamer prioritizing pop accumulation. No particular part of the empire was set up for maximizing population value from a government-utility standpoint, which is one of the kinder things to say of the British Empire.

As with most imperialist states, hefty cultural chauvinism on the part of the conqueror broadly squandered potential population contributions from subjugated people, as opposed to any real policy of cultivating and extracting, well, high-value human capital.

If you start with the assumption that the well has run dry and LLMs are never (not any time soon, at least) going be much better or much different than they are now, then yeah, very little about the market makes sense. Everyone willing to put substantial money into the project disagrees.

Inference costs are exaggerated (and the environmental costs of inference are vastly exaggerated). It's certainly a big number in aggregate, but a single large query (30k tokens in, 5k out) for Google's top model, Gemini 2.5 Pro, costs about $0.09 via the API. And further queries on substantially the same material are cheaper due to caching. If it saves your average $50,000 a year office drone 30 seconds, it's more than worth it.
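The ~$0.09 figure is easy to check. A minimal sketch, assuming published per-million-token API rates on the order of $1.25 for input and $10 for output (rates change and vary by context size, so treat these as illustrative, not authoritative):

```python
# Rough per-query API cost for a large LLM call, using illustrative
# per-million-token rates (verify current pricing before relying on these).
INPUT_RATE = 1.25 / 1_000_000    # dollars per input token (assumed)
OUTPUT_RATE = 10.00 / 1_000_000  # dollars per output token (assumed)

def query_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one query at the assumed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# The "single large query" from the comment: 30k tokens in, 5k out.
cost = query_cost(30_000, 5_000)
print(f"${cost:.2f}")  # prints "$0.09"
```

At a $50,000/year salary (~$25/hour), 30 seconds of an office drone's time runs about $0.21, so the query pays for itself under these assumptions, before any caching discount.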

Google ends up losing a lot of money on inference not because it's unaffordable, but because they insist on providing inference not only for free, but to search users who didn't even request it. (With a smaller, cheaper model than 2.5 Pro, I'm sure, and I'm sure they do cache output.) They do this because they think real-world feedback and metrics are worth more than their inference spend, and because they think the better models that data will let them build will make it all back and more.

But who knows what those models will even look like? Who wants to blow piles and piles of money on custom silicon that might eventually reduce their inference costs by a bit (though, since they were working with RISC-V, I kind of doubt it'd have ended up being better per-watt; cheaper only after licensing costs are factored in, probably) when a new architecture might render it obsolete at any moment? It's premature optimization.

(Granted, GPUs have remained viable compute platforms since the advent of deep learning, but that's because they're not too specialized. Not sure how much performance per watt they really leave on the table if you want to make something just as flexible. Though I have heard lately that NVidia & AMD have been prioritizing performance over efficiency at the request of their datacenter clients. Which I'd read as evidence they're still in the 'explore' domain rather than 'exploit.')

They're probably going to replace several data analysis teams whose jobs have been building Power BI dashboards for the past 10 years.

I consulted for a massive multinational a couple of years ago and they had this massive operation in India that produced those BI dashboards every week, that the regional and national executives immediately threw in the trash.

The issue was that while those dashboards looked good and contained a ton of data, they didn't really say anything meaningful, and it was too hard to both communicate with and change the workflow of the Indian BI teams, so the output became useless.

What people defaulted to instead was just fairly simple KPIs that were relevant for whatever issue at hand and people showing things in excel. The dashboards were occasionally used for official reports and external communication but not for internal decision-making.

I'm not sure which bucket AI would fall into here. Would it enable people to quickly do the work themselves (or via some kind of local resource), or would it just be a cheaper way to shit out even more useless graphs and dashboards than the Indian teams did?

We theorize about creating self replicating intelligent machines. We are, once properly aligned, self replicating intelligent machines.

This comment makes me feel like there's a scifi story or alternate universe somewhere where humans, on the cusp of inventing AGI, get invaded by intelligent aliens, somehow miraculously defeat them, and discover that raising and reproducing these aliens is actually much cheaper on a per-intelligence basis than building servers or paying AI engineers, leading to AI dev being starved of resources in favor of advancing alien husbandry. Conveniently, the AI label/branding could remain as-is, for Alien Intelligence.

wasn't (and likely isn't)

Sir, you were in a coma and woke up in the future.

Checking the inclusion of an element in a hashtable is a constant-time operation, or at least constant-ish -- you still need to compare the elements so it's gonna be proportional to the size of the largest one. So the limiting factor here is memory. I suspect keeping a dictionary resident in RAM on a home PC shouldn't have been a big deal for at least 25 years if not more.

I think there should be an even longer period where it would be fine to keep the dictionary on disk and access it for every typed word, because no human could plausibly type fast enough to outpace the throughput of random reads from a hard disk. No idea how long into the past that era would stretch.
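The in-RAM approach described above fits in a few lines: load the word list into a set once, then each typed word is an average-case O(1) membership check, with the per-check cost dominated by hashing and comparing the word itself. The word list here is a stand-in; a real build would load a system word list (a few hundred thousand entries, comfortably a few MB of RAM).

```python
# Set-based spell check: build the dictionary once, then each lookup
# is average-case O(1), proportional only to the length of the word.
def build_dictionary(words):
    """Normalize a word list into a set for fast membership tests."""
    return {w.lower() for w in words}

def is_spelled_correctly(dictionary, word):
    return word.lower() in dictionary

# Stand-in word list; a real one would come from a file on disk.
dictionary = build_dictionary(["hash", "table", "memory", "disk"])
print(is_spelled_correctly(dictionary, "Hash"))  # prints True
print(is_spelled_correctly(dictionary, "hsah"))  # prints False
```

Scanning a large freshly opened file is where this beats the on-disk variant: the set can check millions of words per second, while per-word random reads from a spinning disk would cap out around the seek rate.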

Technically, UAP just means ‘not identified’, so even if there’s something there a lot of them are probably oddly shaped clouds or equipment errors.

I'm not sure what refusals they're referring to

I imagined the answer was something like this. As you clearly demonstrated, the mainstream media coverage of this is exceptionally poor, and I can see how one of its primary vendors created the impression affecting my family member, an impression with limited connection to reality.

Sigh.

Thank you!

The alien society that would say that wouldn't try to land in the first place; that's my point. They would just launch a relativistic impactor or hit us with a gamma-ray laser or something.

The Romans used Greek slaves for intellectual tasks so extensively it was a standard trope; they also used the levies of conquered peoples to capture more territory.

‘Having others to boss around’ is the entire point of an empire. Putin originally wanted to capture Ukraine intact, before that proved impractical and he started turning cities into Grozny. The default historical empire has been ‘pay me, send your best and brightest to contribute to my economy and your troops to fight in my army, other than that just stay quiet’.

Slaves don't need to be used purely as manual labor.

Intelligent slaves offer advantages over intelligent free peers. Our insect owners don't have to worry (for a few centuries at least!) about a high level human slave becoming Hive President.

The responsible mainstream media should be asking itself why Shaun King, Talcum X, Martin Luther Cream, is being handed the opportunity to break this story.

As others have already implied, this study seems to be a vehicle for attracting media attention, rather than a serious attempt at evaluating the impact of LLMs on productivity. "Rapid revenue acceleration"? So we're already excluding anything that is merely cost-saving by replacing employees?

The actual paper is not freely available, so I don't actually know how rigorous their research was. At the very least, it is described as being enterprise only - historically the slowest and least agile when it comes to adopting new technologies. There are basic bitch wrappers that already have billion dollar+ valuations! And if it is focused solely on revenue generation as the benchmark, you will be cutting out a huge swath of projects that involve LLMs.

One might also wonder at the timing. While LLMs will seem old news to rats and SSC readers due to familiarity with GPT-2, ChatGPT has only been around since November of 2022: not even 3 years old. And that was GPT-3.5; GPT-4 only came out in March of '23. Any other technology would be incredible if it drove rapid revenue acceleration in ~15 enterprise deployments after such a tiny amount of time. That's not to mention the yuge problem of AI studies becoming out of date simply because the whole thing moves way too quickly for academia. When was this study completed? Autumn of last year, if we're being generous?

Again, without reading the primary source it would be harsh to jump to conclusions, but based on the article linked this just screams "provocative title to get attention" rather than something important to learn about business adoption.

If I’m a government official, I would do my best to downplay or dismiss or classify the story. The reason being that the only real data we have on how humans would react to something like this is the War of the Worlds broadcast in the 1930s, which resulted in a fair bit of panic.

Apparently never happened.

I would broadly agree that glory as a motivation is easier to follow, as it's more inherently rewarding. While love for others is less inherently rewarding and thus a larger sacrifice. Which in turn is why it is MORE good. It is... easy is not the right word... easier to follow glory, to do good things which will give you glory, than it is to do good things which will merely help others but not yourself. Someone who is filled with a desire for glory but not a love for their neighbors might do all kinds of things, and only by sheer coincidence will those things be truly good, while someone who is filled with a love for their neighbors and no desire for glory will live a humble and self sacrificing life doing small amounts of good. Although someone with both will do large acts of good that help many many people, and thus is even better.

A motivation for glory is a smaller, easier stepping stone to reach. A motivation of love for humanity is a greater goal which is much much harder to attain but of greater value if attained.

If Christ’s motivation was glory, both for his Father and for his divine family and for himself, then we would likewise imitate this, and this would lead to glorious moral acts. But if Christ’s motivation was pure and uncorrupted “love for humanity”, then we will only feel a gnawing discomfort at the impossibility of our ever replicating this motivation in any legitimate sense.

It's axiomatic that no human can possibly reach the true goodness of Jesus. We are imperfect sinful humans. So you have to figure out how to not despair at never reaching the goal, and do your best anyway. Again, I think that on a fundamental level there isn't truly a distinction between actions which glorify ourselves, actions which glorify God, and actions which show love to humanity. They're the same actions. There are things which people might define as "glory" which harm people like being a murderous conqueror, but don't give true glory because they are evil and sinful. Ultimately true glory comes from doing the most good. So you don't really have to choose, just do all the good things for all the good reasons. But I think love for humanity, although harder to attain, is harder to corrupt once present. Still possible, but harder. There are fewer examples of actions which superficially seem loving but are actually evil than there are actions which superficially seem glorious but are actually evil. But in the end I think Jesus was motivated by all of them, so imitating him by yourself following all of the motivations seems like a more robust way to do good than following one of them to the exclusion of the others. You're more likely to notice when you're being led astray when the motivations appear to diverge instead of converge like they're supposed to.

I'm not sure what refusals they're referring to, since he answered Epstein questions during his confirmation hearings and again during a House Oversight Committee hearing after it became big news, though I'm not sure if the latter worked the intelligence angle (the former was only five minutes and was unremarkable). He did explicitly tell OPR that he had no information about Epstein being an intelligence asset, though I'm not sure if this interview was under oath. He isn't scheduled to testify in front of the current House committee, but I can't see any information indicating any refusal or reluctance, only that he isn't on the witness list.