

I got it, just going for low comedy.

You've reached a level of shitpost where I can't tell if you're ironically saying that WWII was civilized or sarcastically implying that it was not. Would you mind clarifying that a bit so that I can understand you better?

I think one aspect is the question of what performance you actually require from the model.

A fundamental difference between free / open source software and open weight models is that for software, the bottleneck is mostly developer hours, while for models, it is computing power on highly specialized machines.

For software, there have been large fields of application where the best available options are open source, and that has been the case for decades -- for example, try even finding a browser whose engine is proprietary, these days. (Of course, there are also large fields where the best options are all proprietary, because no company considered it strategically important to have open source software, nor was it a fun project for nerds to play with, e.g. ERP software or video game engines.)

For LLMs, tens of billions of dollars' worth of computing power have to be sacrificed to summon more powerful shoggoths forth from the void. For the most part, the business model of the AI companies which produce the most advanced models seems to be to sell access to them. If Llama or DeepSeek had happened to be more advanced than OpenAI's models, their owners would not have published their weights but charged for access. (The one company I can imagine funding large open-weight models would be Nvidia, as part of a commoditize-your-complement strategy. But as long as no AI company manages to dominate the market, it is likely more lucrative to sell hardware to the various competitors than to run the models yourself in the hope of enticing people to spend more on hardware than on model access instead.)

That being said, for a lot of applications there is little gain from running a cutting edge model. I may be nerdier than most, but even I would not care too much what fraction of IMO problems an AI girlfriend could solve.

And I'd feel bad responding to a long text, just to discuss a minor part of it, as that's just nitpicking or directing the conversation towards what interests me but which may not interest the other person.

Please don't feel bad about that. I frequently find that the most interesting content is 1 - 2 branches off of a top level post.

Left wing populism is personally advantageous for everyone who does not have so much wealth they never need to work again.

This is straightforwardly not true. State-owned businesses perform poorly. Europe, which has much more left-populist crap, is a decaying retirement home. Like most populism, left-wing populism is very specifically selected for what scratches the grievance hindbrain of the most people listening to just-so stories about how homelessness is really caused by the fact that Bezos has a really big yacht.

God damn, man! What do you use it for if not LLMs?

It would just be a re-run of the Soviet-Afghan war.

Isn't this limitation a part of the map rather than part of the territory? Language is limited, logic is limited, math is limited, etc, but reality doesn't particularly care about the mental jails which we create. I disagree with your earlier comment that understanding aspects of the world in depth is impossible, but I do believe that knowledge alone is insufficient. A condition you might accept for "understanding aspects of the world" is being able to predict the future, and some great people of the past have made eerily good predictions (I believe Tesla predicted phones and computer monitors, and Nietzsche predicted communism and its death toll. Less impressive works are ones like 1984, but that still requires a good intuition to notice an approaching problem before others). Maybe it seems like a nitpick, but my claim is "0.01% of people have a solid understanding of some aspect of the world", and with how statistics work, the vast majority of people who claim to have these abilities are wrong.

I hope you get to experience something which breaks your models of what's possible. It's a refreshing experience and a great blow to limiting beliefs.

I think it depends on the flavor of Protestant. If you’re talking about low church Bible thumping evangelicals, I get it, but I think most high church Protestants respect the councils and the dogmas of the early church. The Anglo-Catholic movement actually accepts the dogmas and canons of the first seven councils, so they’d be pretty in line with the Roman Church and the various Orthodox Churches. Lutherans still informally accept quite a bit of that dogma through the Augsburg Confession and the Book of Concord.

This has been ongoing for far longer than that. Tristan Harris's TED talk, outlining how he as a Google employee explicitly aimed to manipulate you to maximize your "Time On Site", came out in 2016, and his original internal talk on the subject dates back to 2013.

A few additional data points:

  • Twitter started phasing out the chronological timeline in favor of the engagement algorithm in February of 2016.
  • Instagram switched from chronological feeds to "best posts first" in the summer of 2015.
  • Facebook, as of June 15, 2015, acknowledged that the news feed acts on how long a user reads a feed item, as well as how likely they are to engage with it.

Trying to defund woke by instead funding creationism honestly seems like a pretty likely outcome!

I figured it was a bit of a joke, heh. It's ok I'm a strange Christian. Even amongst the Orthodox. I've made peace with that fact.

You know what would work even better? Destroying academia and burning the university system to the ground.

Declare war on college. Remove all government funding of higher education. Stop issuing student visas. Make hiring on the basis of degrees illegal.

College delenda est. This will not end until the last university building is reduced to ashes, the last college campus has been sowed with salt, and the last tenured professor's head sits on a pike.

Or is it a scam? Are its promised rights lies?

Whether we have a right to life, liberty, the pursuit of happiness, guns, free speech, or whatever else seems pretty clearly to be an axiomatic moral argument, not a factual one. Ought, not Is.

This comment from two years back lays out what I think is a pretty solid argument for the nature of the problem:

Do you believe that it's practical to build and enforce a set of rules that ensure acceptable outcomes so long as they're followed, regardless of the behavior of those operating under the rules? Put another way, do you think loopholes are a generally-manageable problem in rule design?

And the answer elaborated in the rest of the comment comes down to explaining why loopholes are in fact not a generally-manageable problem in rule design. The lie and the scam come from the idea that you can write down a legible definition of rights, then write down a legible set of rules about how to adjudicate disputes over them, and then, by following these rules, the rights will be secured, and thus the processes and outputs we observe are simply The Way The Rules Are. Our society is built on the idea that rules work this way, but they really don't.

You can make a set of rules that work when people generally want them to work. Making a set of rules that work when people don't want them to work is probably impossible. Incompatible values result in a lot of people not wanting the rules to work any more, so they don't. There's a term that Moldbug came across a while back: "manipulation of procedural outcomes". It's one of the most perfect political terms I've ever encountered, and the perfect encapsulation of the nature of the problem. Rules, procedures, exist to secure outcomes, but can be manipulated. Once you grok that, everything else follows with the crushing inevitability of a glacier.

I think you're missing the point. If you wanted to talk to your mother, would it be okay for me to decide what you were allowed to say? Would it be okay for Google? The government? As far as I'm concerned, nobody has the right to hinder communication between other people. The fact that Google can even read my emails is already a disaster, and I'm quite sure reading your physical mail is highly illegal; the reasons behind that decision don't become invalid for digital mail.

The one who listens has as much freedom as the one who speaks

This sounds like the freedom of association? I like that concept. What I dislike is when companies try to decide who I can associate with, as well as who can associate with me.

The internet didn't work like this before the fallacy of association began. The form of the fallacy is "If illegal content ends up on Google, Google is guilty" or "If a person writes a slur in your game chat, your game is guilty", "If you're friends with a sexist, you're likely a sexist yourself", etc. You might have heard other versions of it, like "Roblox is guilty because pedophiles use it" and "Guns should be illegal because criminals use them". The idea is sometimes mocked as "Hitler drank water once, therefore you're a Nazi for enjoying water". I believe that a large chunk of all conflict in the world, and the biggest reason that ideological bubbles have become such a problem, is this very fallacy.

Likewise letter #29 (1938), where he "should regret giving any colour to the notion that [he] subscribed to the wholly pernicious and unscientific race-doctrine", or letter #100, where he confesses, "I know nothing about British or American imperialism in the Far East that does not fill me with regret and disgust", or the way in letter #71 he speaks of having "a curious sense of reminiscence about any stories of Africa, which always move me deeply" (he was born in South Africa), and so on.

There is a very strong tendency for fans of Tolkien, in the public sphere, to misrepresent what he said and believed, or just ignore it. From more left-wing or progressive sides it tends to be about avoiding the way that he was a devout traditionalist Catholic with everything that implies about things like social policy, sexuality, or gender. From more right-wing sides it tends to be about the way he was equally anti-imperialist and anti-racist.

Perhaps more relevantly to the current discussion, one part of his writing that springs to mind is from The Two Towers, in the confrontation with Saruman. Théoden's response to Saruman's offer of peace runs thus:

...we will have peace, when you and all your works have perished – and the works of your dark master to whom you would deliver us. You are a liar, Saruman, and a corrupter of men’s hearts. You hold out your hand to me, and I perceive only a finger of the claw of Mordor. Cruel and cold! Even if your war on me was just – as it was not, for were you ten times as wise you would have no right to rule me and mine for your own profit as you desired – even so, what will you say of your torches in Westfold and the children that lie dead there? And they hewed Háma’s body before the gates of the Hornburg, after he was dead. When you hang from a gibbet at your window for the sport of your own crows, I will have peace with you and Orthanc. So much for the House of Eorl. A lesser son of great sires am I, but I do not need to lick your fingers. Turn elsewhither. But I fear your voice has lost its charm.

I think the aside is significant. Intelligence or technical wisdom does not confer any right to rule another. Even if the other appears, to outside eyes, to be ruling themselves incompetently or in squalor, superior wisdom does not create a right to dominate. No, not even for the other's own good.

Is your argument that Dresden or Hiroshima were civilized?

Reversed stupidity is still not intelligence.

In this case, I’d also expect it to be a flagrant 1A violation in a way which the status quo is not. But I’m not up to date on my “viewpoint neutrality” jurisprudence, and I’d be quite willing to believe that there’s some awful precedent here. It would still be an extraordinarily petty, expensive, short-sighted way to imitate the Cultural Revolution.

The shopping mall analogy turned out to be a poor choice on my end; I meant the feeling of the space itself, not any of its functions. A bar, a mall, an airport, a school... they all feel public, in a way that your bedroom, or a house in the forest 50 miles from any other civilization, does not.

Neither problem can be helped, I think. Some people (high IQ non-conformists) are 10 times less common today than they were in the past, so asking for a community with a higher concentration of them than this is a rather unrealistic demand. Also, this website cannot improve much further than this, because the surrounding world wouldn't allow it to be much more based than it is already. As the world gets more connected, the difference between everything decreases, on every scale (cultures, countries, websites, people, ideas, genes, you name it), and then it faces "pressure" from the outside to the extent that it is different. So if your surroundings degrade, you degrade as well. Fighting this is like keeping your house cold in the middle of the summer, or trying to keep a child from learning any swear words.

What I seek may become possible if/when web3 becomes a thing

I have a Ryzen AI Max+ 395 with 128GB RAM and it runs pretty well; granted, I don't use it for LLMs, but the humongous amount of RAM is useful more often than one might think.

Ken Ham

Well, certainly not him, he's Australian. I'm sure we can find some American creationists. Maybe Anthony Watts would like a sinecure in the Climate Studies department.

I don't understand how you say this doesn't work. It obviously has worked in the recent past!

Sorry, how has it obviously worked to reduce the ability to use the federal government as a weapon against universities? How has this strategy obviously worked to actually fix something at universities? I'm not following.

The thing no one seems to be talking about with respect to AI is how the underlying economics of it all are so mind-numbingly bad that a crash is inevitable. I have no idea when this crash is going to happen, but if I had to hazard a guess, it will be some time within the next five years. We're talking about a technology that has already burned at least half a trillion dollars and has plans to burn another half trillion with no model for profitability in sight. There's only so long that the flow of venture capital will keep coming before the investors start expecting some kind of return. Add in the fact that Nvidia currently represents about 8% of the total value of the S&P 500 based on sales of graphics cards to a single, unprofitable company, and the economic picture looks even more dire.

I think that the underlying problem is that they're trying to run an enshittification model on an industry where the path has typically been the exact opposite. Look at computers themselves. When computers were first invented, they were limited to institutional uses by governments and large universities, and were subsidized through R&D budgets that weren't relying on profitability, i.e. as a present expense rather than a credit against future earnings. Then large corporations started using them. When personal computers were developed in the late 1970s, they were mostly used by businesses, and in the consumer market they were expensive machines for the tech-horny. As costs came down, more and more households began using them, and by the time they became ubiquitous at the end of the 20th century it had been 50 years since their invention, and they still weren't exactly cheap.

Now imagine an alternate timeline where IBM decides in the 1950s to build several large computers in cities all across the country, enough that they can let every Tom, Dick, and Harry run whatever programs they want for free, all the way down to middle schoolers doing their math homework, with minimal wait time. And of course they're offering on-site programmers so that you don't actually need to know anything about computers to be able to take advantage of them, and they're convinced that after doing this for years people will be so enamored that they'll eventually start paying for the privilege. You'd have been laughed out of the board room for making such a suggestion, yet this is roughly the state of the AI business model.

AI cheerleaders will point to other tech companies that lost tons of money in their early years, only to later become behemoths. Uber is often cited as an example, as they spent more than a decade losing money before becoming profitable. But there are two big differences with Uber. The first is that they were actually responding to a market need. Outside of a select few cities like New York and Las Vegas, taxi service in America was at best inconvenient and at worst nonexistent. They successfully discovered an unmet demand and developed a service to fill that demand. No one was ever speculating on what Uber would be used for the way they are with AI, and from their launch they provided the exact service people expected that they would provide. The second, more important reason is that Uber never gave away their service for free. Okay, maybe there were some promotions here and there, but by and large, if you wanted to get an Uber, you expected to pay for it. There was never any ecosystem where Uber was providing free transportation for everyone who wanted to get from Point A to Point B with the expectation that people would ditch their cars and get charged through the nose later.

Even companies like Spotify that started with free models and were unprofitable for a long time didn't have quite the same issues as OpenAI has. In 2016, the earliest year for which we have financials, Spotify's loss was about 20% of revenue. By 2018, the first year it was public, that had dropped to 1%, and stayed in that neighborhood until the company became profitable. OpenAI's loss last year was in excess of 100% of revenue, and is on pace to be nearly 70% this year, and that's after record revenue growth. And next year they're going to be on the hook for the first round of the 5-year, $300 billion deal with Oracle. Spotify has also had about a 25% conversion rate from free to paying customers throughout most of its history, though that's recently jumped to over 40%. ChatGPT currently has a conversion rate of around 3%. And Spotify at least ran ads on its free platform, whereas free ChatGPT is pretty much all loss for OpenAI, and even the paid version loses money on every query.

So what we ultimately have, then, is a company that loses a lot of money, is available for free, has a poor conversion rate for paid versions, and is selling itself as a product you didn't know you needed rather than filling an obvious demand. The leading company has already committed to spending several times more than it has raised in its entire existence within the next five years, and it needs its revenue to grow tenfold in the next four years to break even. They're also involved in a weird money-go-round situation with Nvidia and Oracle that's 100% reliant on them finding investors willing to lend them the GDP of Finland. And now they want to add video, a notoriously difficult thing to process even when you don't have to make the entire composition from scratch. Color me skeptical that this will be around in five years in anything approaching what it looks like today.
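For anyone who wants to sanity-check the arithmetic, here is a minimal back-of-envelope sketch. The inputs are the figures claimed above (a roughly tenfold revenue target over four years, a ~3% free-to-paid conversion rate), not audited financials, so treat the output as illustrative only.

    # Back-of-envelope check of the revenue math above. The inputs are the
    # comment's own claims (not audited financials); the output is illustrative.

    revenue_multiple_needed = 10   # revenue must grow roughly tenfold...
    years_to_break_even = 4        # ...within four years

    # Implied compound annual growth rate needed to hit that target
    cagr = revenue_multiple_needed ** (1 / years_to_break_even) - 1
    print(f"Required revenue growth: {cagr:.0%} per year, compounded")  # ~78%

    # At a ~3% free-to-paid conversion rate, each paying user is offset by
    # the serving costs of roughly 33 free users.
    print(f"Free users per paying user: {1 / 0.03:.0f}")  # ~33

The only point of the sketch is that the growth rate implied by the headline numbers is far outside anything the Spotify comparison would suggest is normal.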

This strategy doesn't do anything to reduce the ability to use the federal government as a weapon against universities. It doesn't do anything to actually fix anything with universities,

Except it does. Look at my example. The Obama administration was able to launder federal resources into NGOs that could continue to pursue their policy goals long after a Democrat was out of the White House. That's how you do it. Instead of having the Trump administration sue universities for being racist against white people, you fund right-wing NGOs by any means necessary, and then they can pursue your policy goals long after the government has changed hands back to the opposition. That way, when Pete Buttigieg takes office in 2028, he can't just have the Attorney General drop all the cases the Trump administration had ongoing. It's no longer in his hands. It's being done by (for example) Turning Point USA with a $100B war chest funded by structured settlements Pam Bondi forced on universities.

I don't understand how you say this doesn't work. It obviously has worked in the recent past!

I don't see a big difference between being hostile to myself and others being hostile to me. Self-censorship happens because the brain doesn't consider certain actions to be safe, and as long as you cannot convince it otherwise (get rid of the belief), you won't be able to do said behaviour without an altered state of mind. If you simulated a universe with just a single human being in it, I don't think concepts like shame, embarrassment, judgement, "being cringe", prosecution, etc. would exist.

And even if you can talk about anything, can you be yourself? Can you write emojis like "^.^" without feeling extremely uncomfortable?

I don't want to sound ungrateful that this space exists, but it has nothing on the old internet. You could probably talk about both of these subjects on the Gaia Online of 2010 and people would just think you were silly. I don't know about the old Club Penguin and Habbo Hotel, but likely those too. In the past, you could only get banned by breaking the rules. If you didn't break any rules, basically anything went, even if everyone hated you. This changed around 2011 or so. This is likely why subreddits like "cute dead children" existed until around that time.

Calling out Jews or wanting to be a woman is acceptable to maybe 10-20% of the population; that's a lot. That you think either of these is weird seems to prove my point. And I agree with Arjin below: who knows how many bits of identifiable information exist in the typos that I consistently make? Most freedom enjoyed in modern society is freedom through obscurity.