
Showing 25 of 282 results for domain:greyenlightenment.com

Index funds are hard to beat in terms of returns combined with zero thinking required. If you think you might enjoy a kinda rude but accurate quant's rationalization for index funds, you can watch this.

Chatseek works for R1 0525.

Pretty much nobody could sincerely claim that the Price Force counts.

Why not? Frankly, I just don't believe this whatsoever. This is nothing but an argument from personal incredulity. I guess this is what you're left with after your prior tests didn't work out. There's simply not a single shred of reasoning here.

Creating the Price Press is something the government can do using its ordinary powers.

No.

With that aside, I'm not sure how other people see LLMs tackling problems of this complexity and then claim they're not reasoning.

Both the complexity, and also just the novelty. These LLMs were definitely trained on some stack-based languages, both historic ones like Forth and modern assembly, but as similar as they might be from a conceptual perspective, the implementation philosophy and even the simple names are drastically different. And while there's a tiny number of HexCasting examples on the open web or on Discord, they universally take different approaches, and some of them (like my screenshot above) simply can't be read by any parser that doesn't already have a good understanding of the language: a completed Jester's Gambit, Rotation Gambit, and Rotation Gambit II are visually identical. And, of course, you don't have to write a spell from top-left-going-right-then-down like an English paragraph.
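For readers who haven't touched a stack-based language: here's a toy Python sketch of the style being discussed, where every operation manipulates an implicit stack. The pattern names are real HexCasting actions (Jester's Gambit swaps the top two items, the Rotation Gambits rotate the top three), but the interpreter itself is my own simplified illustration, not the mod's actual implementation.

```python
# Toy stack machine in the Forth/HexCasting style discussed above.
# The operation names mirror real HexCasting patterns; the interpreter
# is a simplified sketch, not the game's actual code.
def jesters_gambit(stack):
    # swap the top two items: [... a b] -> [... b a]
    stack[-1], stack[-2] = stack[-2], stack[-1]

def rotation_gambit(stack):
    # rotate the top three items: [... a b c] -> [... b c a]
    stack.append(stack.pop(-3))

def rotation_gambit_ii(stack):
    # rotate the other way: [... a b c] -> [... c a b]
    stack.insert(-2, stack.pop())

def run(program, stack=None):
    """Execute a program: callables act on the stack, literals push themselves."""
    stack = list(stack or [])
    for op in program:
        if callable(op):
            op(stack)
        else:
            stack.append(op)
    return stack

print(run([1, 2, 3, rotation_gambit]))  # [2, 3, 1]
```

The point of the comment stands out in miniature here: the operations are conceptually trivial, but nothing about the names tells you what they do, and two differently-named rotations can be indistinguishable at a glance.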

That's a fun example because it's got an easy third-party evaluation, but you don't have to make up a programming language to do this sorta thing. These problems don't have to be hard; just by being new they undermine a lot of these arguments, and these writers could run them, but just don't seem interested in it.

I'm not sure why you're paying for Grok 4. Grok 3 was genuinely impressive, and somewhat noticeably better than the competition at launch. Not the case here I'm afraid.

Yeah, I'm a little surprised. I didn't expect any of the LLMs to handle this great -- I'm kinda amazed that ChatGPT could do as well as it did, and even some of my gripes are probably downstream of the question being underspecified -- but the level of hallucination from Grok 4 is disappointing, and I could see the arguments for dropping it.

Partly a politics and work politics thing. My boss is a big booster of everything Musk, so there's been a lot more value, separate from the LLM's specific capabilities, in both knowing the thing and knowing the limits of Grok 4. And while I don't particularly trust xAI, I neither trust nor like OpenAI.

Some of it's use case. I do find even Grok 3 more effective for writing and reviewing writing than the ChatGPT equivalents. Compare this to this, or at the risk of touching on erwgv3g34's topic today, [this] to this (cw: discussion of an excerpt from nsfw m/m/f text; there's no actual sex or even nudity, but it's very very clearly smut.)

They're all sycophantic, and even where 4o is better at catching spelling and grammar mistakes, on larger questions of consistency or coherence or theme I've had a hell of a time getting any of the early ChatGPT models to really push back with anything deeper than the Your First Writing Advice that amadanb's criticized.

Of course, I also haven't experimented that hard with them, or with the newer paid ChatGPT models. I probably do need to do a deeper and more serious evaluation; I've also just been lazy about actual hard comparisons for fields with strict performance results. My work programming goes into stuff that I'm either unwilling to upload to an outside service or is large enough in scale that models have had problems maintaining logic (or both), while my hobby programming or teaching is mostly simple enough that Grok3 or 4mini can handle it.

I live in a college town. I honestly can't think of a single person right now in real life that I would describe as a hairshirt environmentalist. Online, I can only think of Greta herself and her refusal to take an airplane, and she's a massive outlier because she pretty much uses her influence to bum rides around the world on an eco-friendly yacht. A quick check of Just Stop Oil shows that most of their antics result in 50-ish arrests, which seems like peanuts to me.

Your average environmentalist is a middle class college kid with an iPhone. They aren't giving up much of anything except maybe biking more and eating less meat.

Sending a letter to a random attorney with $1000 in cash and instructions to forward an encrypted message (or the decryption key) to the media in case of your death is something that could work.

To the lawyers in the audience: are random requests like this common? I realize direct anecdotes might be subject to confidentiality, but are these sorts of things heard of?

Did you read the effortpost? At this time Epstein's known victims were 18+ish girls who willingly sought him out to fuck him for money. He doesn't sound like a good person but I can see how it might be hard to get the legal system fully fired up over this. Seemed like the feds couldn't even convince themselves to get involved, understandably.

Also, third paragraph in my reply. My own friend was not a well-connected celebrity and had the softest jail experience imaginable. He was poor, but what he had going for him was that he was likable and not dysfunctionally insane and his crime didn't fit neatly into reprehensible crimes like murder or assault with a deadly weapon.

I think it is an error to believe the legal system will by default inflict the appropriate amount of suffering fit for a crime, especially if the accused has any shred of sympathy and resists at all levels and can pull the right strings.

Huh. I'd always imagined that one way we might get serious about ASI safety would be that non-superhuman AIs might do something Unfriendly enough to spook the normies ... but in this case, the anti-human risk probably reassures people, doesn't it? We optimize chatbots to be as persuasive and addictive as possible, but well before we get to some hypothetical "can wrap anyone around their little finger" point, we're at the "can seduce the most pitiful and low-status people among us" point, and the normal reaction to that isn't "boy, that could happen to me someday", it's "boy, I'm glad I'm not like those people and never will be!"

I wonder how far that generalizes. The Just World fallacy is a tempting one.

I was told by a psychologist that the vast majority of suicidal impulses last minutes or even seconds. The idea is that they don't have time to seek out a substitute before the impulse wears off. It may appear later in other circumstances, of course.

An Air Force is not sufficiently like either the army or navy to count, so it isn't authorized. Yes, the government could lie and say that it is.

Pretty much nobody could sincerely claim that the Price Force counts. A huge number of people could sincerely say that the Air Force counts. The object level is important.

That's not how Constitutional grants of authority work.

Creating the Price Press is something the government can do using its ordinary powers. The free press clause isn't granting it authority at all.

The free press clause only comes into effect when the government tries to shut it down.

I (non-native English speaker) found ChatGPT's critique helpful with a recent application letter. I will grant you that it was a bit more formal than your choice of text, though -- I did not talk about drinking anyone's bathwater, time will tell if that was the correct choice or not.

Most of its suggestions were minor stylistic things (using a gerund instead of an infinitive in certain phrases, avoiding repetition of word constructs) which seemed to me to be improvements.

I will grant you that an application letter is probably a more central example of most of its training data than that perv diary entry -- it is a continuous text, for one thing. Also, unlike that diary entry, I did not start out with a (presumably well-formulated) draft in a foreign language which I translated to English and then asked GPT to correct my English without access to the original (which from what I can tell is what happened with the diary). Instead, I wrote my thoughts down in English, sometimes awkwardly, and relied on it to put them into a smoother form.

So what would it do to the abortion debate? Would robo-abortions be illegal (since you clearly wouldn't oh god i forgot about fetishes I'm going to bleach my brain have created a pregnancy with a robot unless you intended to have it carried through to term)? But then what does that say, that the sanctity of a robo-vat-fetus is more legally "alive"/protectable than one in a human womb? But somewhere out there is a future where, if it isn't made illegal, some youtuber is repeatedly aborting his pregnant robot for the hate-clicks.

And I think that's the worst sentence I've written in my entire life, but I'm still young. Even without longevity, I'm sure there's time for me to write worse.

I think it's more the raising of children that's the bottleneck at the moment.

I've heard (but not confirmed) that removing one suicide method (e.g. putting fences on a bridge) reduces the total number of suicides by the marginal amount blocked by that intervention. In other words, there are bridge-jumping-suicidal people and pill-taking-suicidal people, but not suicidal-by-any-method people that would substitute one method for another.

I am very doubtful about that. Some of the suicides are likely by goal-oriented people following a long-term plan (e.g. in a MAID-like context), and for these I would expect substitution effects.

Even for spontaneous suicides, I think that there is some minor substitution effect. If a person had the worst day of their life and would jump off a bridge if not for the fact that it was fenced, I would expect at least a 20% chance that another convenient method (access to a tall building, a firearm, drugs) will present itself and be taken before they feel less suicidal.

A Price Force is not sufficiently like either the army or navy to count, so it isn't authorized. Yes, the government could lie and say that it is.

An Air Force is not sufficiently like either the army or navy to count, so it isn't authorized. Yes, the government could lie and say that it is.

That's pretty easy to just state ipse dixit. But there's something missing that I would call "reasoning". So far, when we've tested your reasoning, it has led to many more questions that you've consistently refused to answer.

once the Price Press exists

That's not how Constitutional grants of authority work. At all. Honestly, if this is your understanding of the Constitution, there's probably not much more value in me continuing this discussion.

That seems possible as applied to state government expenditures (likely subject to federal rules like the one in question, subject to future court rulings).

We never did get a ruling on California's attempt to boycott several red states, which at least seems related. But in a world in which the court accepts Wickard, I suspect the feds would win both the domestic and foreign state expenditures questions if it makes it to court.

I follow JimDMiller ("James Miller" on Scott's blogs, occasionally /u/sargon66 back when we were on Reddit) on Twitter, and was amused to see how much pushback he got on the claim:

If I can predict what a doctor will say, I have the knowledge of that doctor. Prediction is understanding, that is the key to why LLMs are worth trillions.

On the one hand, it's not inconceivable that LLMs can get very good at producing text that "interpolates" within and "remixes" their data set without yet getting good at predicting text that "extrapolates" from it. Chain-of-thought is a good attempt to get around that problem, but so far that doesn't seem to be as superhuman at "everything" as simple Monte Carlo tree search was at "Go" and "Chess". Humans aren't exactly great at this either (the tradition when someone comes up with previously-unheard-of knowledge is to award them a patent and/or a PhD) but humans at least have got a track record of accomplishing it occasionally.

On the other hand, even humans don't have a great track record. A lot of science dissertations are basically "remixes" of existing investigative techniques applied to new experimental data. My dissertation's biggest contributions were of the form "prove a theorem analogous to existing technique X but for somewhat-different problem Y". It's not obvious to me how much technically-new knowledge really requires completely-conceptually-new "extrapolation" of ideas.

On the gripping hand, I'm steelmanning so hard in my first paragraph that it no longer really resembles the real clearly-stated AI-dismissive arguments. If we actually get to the point where the output of an LLM can predict or surpass any top human, I'm going to need to see some much clearer proofs that the Church-Turing thesis only constrains semiconductors, not fatty grey meat. Well, I'd like to see such proofs, anyway. If we get to that point then any proof attempts are likely either going to be comically silly (if we have Friendly AGI, it'll be shooting them down left and right) or tragically silly (if we have UnFriendly AGI, hopefully we won't keep debating whether submarines can really swim while they're launching torpedoes).

And yet, environmentalists act as if they have 100% confidence, and they commonly reject market solutions in favor of central planning. The logical deduction from this pattern of behavior is that the central planning is the goal, and the global warming is the excuse. It is not bad argumentation to say to the environmentalist, "you are just a socialist that wants to control the economy, and are using CO2 as an excuse" because a principled environmentalist would never bother raising a finger in America. They'd go to India and chain themselves to a river barge dumping plastic or go to Africa and spay and neuter humans over there. If you are trying to mess with Americans' cars, heat, and AC, it's because you don't like that Americans have those things, because other concerns regarding the environment have been much more pressing for several decades at this point, and that isn't likely to change.

This is a failure of theory of mind.

As a general rule, when there's a situation where person A insistently tries to solve problem B with method C rather than more-effective method D, the conclusion "A is secretly a liar about wanting to solve problem B and just wants to do method C for other reason E" is almost always false, outside of special cases like PR departments and to some extent politicians. The correct conclusion is more often "A is not a consequentialist and considers method D sinful and thus off the table". "A thinks method D is actually not more effective than method C" is also a thing.

So, yes, a lot of these people really are socialists, but they're also environmentalists who sincerely believe CO2 might cause TEOTWAWKI. It's just, well, you actually also need the premise of "sometimes there isn't a perfect solution; pick the lesser evil" in order to get to "pursue this within capitalism rather than demanding we dismantle capitalism at the same time", and a lot of people don't believe that premise.

Just have the robot waifu be able to bear children. Then we wouldn't need ELON money to have 30 children. If anything, this might actually fix the demographic collapse.

Edit: I replied to a child of a comment, thinking it was a direct reply to me. Oops.

I'm not sure which Texas law you're referring to? I consider it an effect, and not a cause. Did the deleted comment imply that everything is downstream from a new Texan law? I admittedly can't defend such a position, I'm just pointing out a pattern with the belief that no explanation makes it any less concerning.

'harmful to minors' is so subjective that whoever has the most power can make it apply to everything that they're against. The label has not had anything to do with what is literally harmful to minors for like 20 years now.

Anyway, Steam and Itch.io have already been hit by censorship (though Itch seems to have gotten some of the games back). ID laws are already gaining traction. I've already had purchases refused by PayPal for reasons that are false, but the sort of false where people are afraid of arguing against them because it will make them appear immoral. This is either censorship from many different causes in rapid succession, or it's a coordinated attack on human freedom by somebody with enough power to get multiple countries and multiple major payment processors on their side.

a story where Shinji agrees to undergo conversion therapy in order to cure his homosexuality

Good news: I am very interested in your comment. I'm taking notes.

Bad news: A psychiatrist is very interested in your comment. And the notes are on NHS mental health branded stationery.

Truly, I have been ignorant of the depths of depravity; I am happy to concede that I'm using these models the wrong way now.

I genuinely can't see myself using any LLM in this manner, but I consider it an entirely legitimate and harmless use case. Like Voltaire, I don't understand what would compel you to do this, but I'll defend your right to try. You have done a great service. We are all more informed, and perhaps slightly more damned, for your efforts.

I’m afraid that comment removed the last shred of credibility you might have had. Either you are trolling or are very, very confused.

In case it’s the latter: next token prediction allows for surprisingly sophisticated outputs despite the simplicity of the training. This is because of the sheer scale of both parameters and data. LLMs can have hundreds of billions of parameters and are trained on trillions of tokens. These raw models are powerful but hard to control, so they are almost always fine-tuned with a much smaller dataset. But yes, these abilities (generating correct python scripts, playing chess) arise purely from next token prediction and the sheer scale of these neural networks, without the need for an “intermediate layer”.
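To make "next token prediction" concrete: the generation loop really is just "given the text so far, sample the next token, append, repeat." Here is a deliberately tiny Python sketch with a hard-coded bigram table standing in for what a real model learns with billions of parameters; the table and names are my own illustration, not anything from an actual LLM.

```python
import random

# Toy next-token predictor: a hand-written bigram table maps each token to a
# probability distribution over the following token. A real LLM learns a far
# richer conditional distribution, but the sampling loop is the same shape.
BIGRAMS = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a":   {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"sat": 1.0},
    "sat": {"</s>": 1.0},
}

def generate(max_tokens=10, seed=0):
    rng = random.Random(seed)
    tokens = ["<s>"]
    while len(tokens) < max_tokens:
        dist = BIGRAMS[tokens[-1]]
        # sample the next token in proportion to its predicted probability
        choices, weights = zip(*dist.items())
        nxt = rng.choices(choices, weights=weights, k=1)[0]
        if nxt == "</s>":  # end-of-sequence token: stop generating
            break
        tokens.append(nxt)
    return " ".join(tokens[1:])

print(generate())  # e.g. "a dog sat", depending on the seed
```

Everything interesting about an LLM lives in how good that conditional distribution is; the "simplicity of the training" objection conflates the simple loop with the very-much-not-simple function being sampled from.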

Historically you wouldn’t have been alone in being skeptical; even five years ago this was controversial. See Gwern's Scaling Hypothesis from back when this was debated.

Panicking about Texas laws in ways that make it clear the panicked did not read it is a thing that happens all the time and is not unique to porn bans.

I believe I still remain ignorant as to whether you think the establishment of a Price Force is authorized by your reasoning.

A Price Force is not sufficiently like either the army or navy to count, so it isn't authorized. Yes, the government could lie and say that it is.

There is no freewheeling grant of authority to establish a Price Press

But there's a freewheeling rule which says that an already existing press gets freedom of the press. So once the Price Press exists, freedom of the press applies to it. Laws or orders that shut it down are unconstitutional.

About a week ago, a user here posted that Epstein's status as a Mossad agent was pretty much an established fact at this point,

Just for the record: 4 weeks ago 2rafa effort-posted that it is very implausible that Epstein was propped up by Mossad or that Israeli intelligence was using him to blackmail people:

https://www.themotte.org/post/2240/culture-war-roundup-for-the-week/345489?context=8#context

If you were Mossad and wanted to blackmail people ambivalent or hostile toward Israel into supporting it, you'd target rich Chinese, Indians, gentile Russians, and above all rich Sunni Muslims, particularly in the Gulf. You would not target Alan Dershowitz. The blackmail argument betrays a fundamental lack of understanding of the basic purpose of blackmail. It also betrays a misunderstanding of diaspora Jewish politics and Mossad's influence over it. Most critically, those rich Americans who were more skeptical of Israel do not appear to have associated much with Epstein (likely because that isn't really their crowd). Epstein bragged about working for intelligence agencies; that is the one thing you don't want your agent of blackmail to be doing.