FeepingCreature

0 followers   follows 0 users
Joined 2022 September 05 00:42:25 UTC
Verified Email
User ID: 311

No bio...

I'm just saying that inasmuch as LLMs are weak specifically at targeting objective metrics like performance, self-play should improve them. I'm not saying self-play is the panacea that'll give us AI, just that it will fill a hole in the existing methods.

As it stands, LLMs cannot improve from self-play. Once we get that, I don't know what will happen; might be direct-to-singularity, might not, but that issue shouldn't be a problem anymore.

ChatGPT won't write trash Python when it's had a million years of experience with performance tests.
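To make concrete what I mean by "targeting an objective metric", here's a toy version of the loop. All the names are made up for illustration, and the "candidates" are hand-written stand-ins for model samples, since the actual sampling and fine-tuning machinery is beside the point:

```python
import timeit

def benchmark(fn, arg):
    """Objective metric: wall-clock seconds for ten calls (lower is better)."""
    return timeit.timeit(lambda: fn(arg), number=10)

# Hand-written stand-ins for model-generated code. In the real loop,
# these would be sampled from the LLM itself.
def sum_naive(n):
    total = 0
    for i in range(n):
        total += i
    return total

def sum_closed_form(n):
    return n * (n - 1) // 2

def self_play_step(candidates, arg=100_000):
    """Score each candidate against the metric. The scores, not human
    ratings, are the training signal that gets fed back into the model."""
    scores = {fn.__name__: benchmark(fn, arg) for fn in candidates}
    best = min(candidates, key=lambda fn: scores[fn.__name__])
    return best, scores

best, scores = self_play_step([sum_naive, sum_closed_form])
print(scores)
print("winner:", best.__name__)
```

The point is just that the reward comes from actually running the code, so there's nothing for the model to bullshit.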

"Let justice reign, though the heavens should fall on my own head" is I think an underrated sentiment. He who prays for mercy fears an excess of justice. But the distance between the natural state and pareto optimal justice spans a great degree of judgment.

I think you underestimate the extent to which people used to live where they worked.

Thank you!

Anonymous sources and a lack of corroboration. I think it's plausible, but this article shouldn't shift your belief much.

Also, previously discussed here.

If Russia profits from the pipeline existing, it cannot also profit from it not existing. You're arguing that Nord Stream was a net cost to Russia relative to not building it? Then why'd they build it?

"No human being is illegal" as a phrase against "illegal immigration" is a fully general argument against calling any activity illegal.

It's a good argument against the term 'illegal immigrant', I guess, but I'm not sure if "irregular immigrant" is an improvement.

To expand my point, I think there is a smooth continuity between "babbling" and "conveying meaning" that hinges on what I'd call "sustained coherency". With humans, we started out conceptualizing meaning, modelling things in our head, and then evolved language in order to reflect and externalize these things; we (presumably) got coherence first. AI is going the other way: it starts out swimming in a soup of meaning-fragments (even Markov chains learn syllables), and as our technology improves it assembles them into longer and longer coherent chains. GPT-2 was coherent at the level of half-sentences or sentences, GPT-3 can be coherent at levels spanning paragraphs. It occasionally loses the plot and switches universes, giving up on one cluster of assembled meaning-fragments as it cannot generate a viable continuation and slipping smoothly into another. But the "sort of thing that it builds" with words, the assemblage of fragments into chains of meaning, is the same sort of thing that we build with language. It's coming at the same spot (months/years-long sustained coherency) from another evolutionary direction.
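(To illustrate the "soup of meaning-fragments" end of that continuum, here's a toy character-level Markov babbler; everything about it is illustrative. It picks up wordlike fragments from raw characters without anything you'd call meaning:)

```python
import random
from collections import defaultdict

def train(text, order=3):
    """Count which character follows each length-`order` context."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def babble(model, seed, order=3, length=120):
    """Sample one character at a time. Local fragments come out wordlike;
    global coherence is absent: a soup of meaning-fragments."""
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:
            break
        out += random.choice(choices)
    return out

corpus = ("the chair stands on the floor. the person sits on the chair. "
          "the person stands up and the chair stays on the floor. ")
model = train(corpus)
print(babble(model, "the"))
```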

You may argue "it's all meaningless without attachment to reality." And sure, that's not wrong! But once the assemblage operates correctly, attaching meaning to it will just be a matter of cross-training. (And the unsolved problem of the "artificial self", though if ever there was a problem amenable to a purely narrative solution...)

A far bigger proportion of people who enjoy such pornography report being gender dysphoric than in the general population. There's likely a connection.

How does the ratio compare to an equally niche subgroup? Gen pop is not the right comparison here, I don't think.

Also, links please.

I disagree.

Can you give an example that you think illustrates your point well? (I don't have ChatGPT access. Giving out my phone number? Ugh.)

I think it makes those kinds of slips, which to me just means it has imperfect understanding and tends to bullshit. But it doesn't universally make those kinds of slips; it gets chair-person type relations right at a level above chance. Otherwise, generating any continuous run of coherent text would be near impossible.

It would be exceedingly strange for it to generate "the chair sits on the person" at the same rate as its converse, considering that "the <thing> <interacts> the <person>" is vanishingly rare in its training corpus compared to "the <person> <interacts> the <thing>". But picking up that generalization requires some abstract model of "thing", "person" and "interact". For it to not pick up that pattern would be odd - why would that be the pattern that stumps it, when it can pick up the categories just fine?
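(This is checkable, by the way. With the small public GPT-2 and the Hugging Face transformers library, you can compare the total log-probability the model assigns to each sentence; I'd expect the sensible ordering to win by a wide margin. A sketch, assuming you have transformers and torch installed:)

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_logprob(text):
    """Total log-probability the model assigns to the token sequence."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=ids the model returns mean cross-entropy over the
        # predicted tokens; scale back up to a total log-probability.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

for s in ["The person sits on the chair.", "The chair sits on the person."]:
    print(f"{s!r}: {sentence_logprob(s):.1f}")
```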

But GPT-3 clearly has that understanding. I mean, obviously not always, but also obviously sometimes. By and large, GPT-3 does not actually tend to assert that chairs sit on people.

Looking for a pattern in GPT jokes is difficult because GPT is very bad at jokes. Who knows what confused and wrong theory of humor it has learnt?

(Helpful reminder to the thread that even off Reddit, we still don't downvote for disagreeing!)

I don't think so? For instance, if your boss asks you if he can give you $1000, it's still problematic. Getting financial support from a supervisor may be bad for you emotionally for various reasons, even though it's very hard to see it as morally bad. So I think there's a strong point that the argument holds regardless of the moral quality of the act in itself.

One person says "X will never happen". Another person says something that may be interpreted as "When X happens, you bigots will deserve it." This means nothing, unless you fall to the old temptation of treating the statements of all outgroup members as being coordinated.

When One refuses to notice the existence of Another or treats you as crazy for believing that Another said something that may be considered representative, it's a mite insulting.

Most criminals are stupid; most crimes are simple.

I believe in the ability of government to deadlock itself on contentious issues. That aside, this is how things used to work - labor regulation, health and safety, disabled access, etc. That's the sort of world I want to go back to.

I'll take that trade. Companies should preferentially use American labor due to a law passed by Congress (or a state legislature or a local ordinance), if at all.

I am fundamentally against companies upholding moral values. I think it's a societal declaration of bankruptcy and corrosive to democracy, and I think it should be outlawed. I want my companies to be amoral profit-maximizers. This idea we have that we can tame companies when we already have a nice, central mechanism for arbitrating moral questions (elections, rule of law) just ends up recreating democracy but worse in every way: less equal, less regulated, less principled, less consistent, more corrupt, more vulnerable to extremism, and so on.

People like looking at porn. We can quibble about the meaning of harm, but "People's desires going less fulfilled" is at least some kind of downside, and I think "without any harmful effects" overreaches. The absence is the harmful effect.

Seemed plausible for the R values we were seeing back then though.

What's wrong with "flatten the curve"?
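For anyone who's forgotten the logic: in even the crudest SIR model, lowering R doesn't just shrink the epidemic, it spreads it out, which was the whole point of the slogan. A back-of-the-envelope sketch, with all parameters invented for illustration:

```python
def sir_peak(r0, days=730, gamma=0.1, i0=1e-4):
    """Discrete-time SIR with a one-day step and beta = r0 * gamma.
    Returns (peak infected fraction, day of peak)."""
    beta = r0 * gamma
    s, i = 1.0 - i0, i0
    peak, peak_day = i, 0
    for day in range(days):
        new_inf = beta * s * i   # new infections this day
        new_rec = gamma * i      # new recoveries this day
        s -= new_inf
        i += new_inf - new_rec
        if i > peak:
            peak, peak_day = i, day
    return peak, peak_day

for r0 in (3.0, 1.5):
    peak, day = sir_peak(r0)
    print(f"R0={r0}: peak {peak:.1%} of population infected, around day {day}")
```

Cutting R roughly in half gives a much lower peak that arrives much later: the flattened curve.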

This showed up in the "TheMotte needs your help" poll.

I was forced to declare it "neutral" because "high standard of evidence while politely catfighting another poster; this entire thread should be nuked; what the fuck are the mods doing" was sadly not available.

I'm genuinely not sure how I'm supposed to rate comments like this. It's hostile speculation about another poster's state of mind, in a thread that seems entirely dedicated to fighting out Dean and ymeshkout's mutual antagonism by consent of a good fraction of the board. It's like everyone's decided "screw the rules, we're turning this thread into a fighting ring." I would say "deserves a warning", but nobody here doesn't, this comment included, and it doesn't particularly deserve a warning more.

I would like to request a "Nuke this entire thread from orbit" poll option. There's a level of mess where opping individual comments simply isn't viable.