FeepingCreature

0 followers   follows 0 users
joined 2022 September 05 00:42:25 UTC
User ID: 311
Verified Email

No bio...

I see no reason why biochemistry should not be able to produce consciousness, agency, thought and qualia. In the modus-ponens-modus-tollens sense: "clearly it can, because it does." Where is the actual contradiction?

Don't multiply entities beyond necessity. Clearly brains have something to do with qualia. Why not "A causes B"? Why should I look beyond this intuitively obvious structure?

I mean, this just seems like it totally gives up on deciding what sort of impact climate change itself will have on your country.

One would think, maybe rather optimistically, that environmental policies are, well, about the environment? Surely the actual environment has to come into it somewhere. Instead, this sounds like the only question is the political struggle - for its own sake.

I remember reading a take about AI safety that framed the debate entirely as a "war of vibes", about which side would sound more convincing and achieve more mindshare. I wanted to grab this person by the shoulder and yell, "we are talking about physical objects! Objects that will actually exist!"

3 to 4, or 3.5 to 4?

No. Nobody checks. Easily, apparently. No, they're just doing their jobs.

(Without any tree search.)

Like, yes? I mean, I feel as a species we should have learnt this when we understood what stars are. We are not special, we are not privileged, we are the dumbest species that could possibly build a civilization. We should pray for a higher ceiling. Imagine if this was all there was to it!

Sure, if you design a calculator to convincingly imitate human outputs, I'll say the same thing about it.

At this point, most of the really fun things I intend doing are post-singularity, and I don't really emotionally care if I die, so long as everyone else dies as well. So in a very strange way, it balances out to a diffuse positive anticipation.

Everyone who tries an LLM wants it to do something for them. Hence, nobody will build an LLM that doesn't do anything. The sales pitch is "You can use the LLM as an agent." But no agent without agenticness.

Building an AI that doesn't destroy the world is easy. Students and hobbyists do it all the time, though they tend to be disappointed with the outcome for some reason. ("Damn, mode collapse again...") However, this is in conflict with making ludicrous amounts of cash. Google will try to develop AI that doesn't destroy the world. But if they're faced with trading off a risk of world-destroying against a certainty that their AI will not be competitive with OpenAI, they'll take the trade every time.

If DM/OA build AI that pursues tasks, and they will (and are), it will lack the human injunction to pursue these tasks in a socially compatible way. Moonshine-case, it just works. Best-case, it fails in a sufficiently harmless way that we take it as a warning. Worst-case, the system has learnt deception.

Ultimately everyone eats the same food (energy).

It's the notion of power/safety seeking as a "flaw" that is the human trait here. Humanity aside, it's just what you'd do. Almost any task is pursued more effectively by first removing threats and competitors.

If the world is still here in five years I'll publicly admit I overestimated the danger. If it's still here in two to three years, I'll already be pleasantly surprised. In my book, we're well on schedule for a short takeoff.

And pray tell, what evidence would that be?

Well, if I hit somebody on the head it tends to impact their conscious processing. Similarly, if I jam an electrode into somebody's visual nerve it tends to have a pretty direct effect on their qualia. And then there are the various other kinds of brain damage to specific regions, with repeatable effects on particular kinds of mental operations.

Then you don't know if it's happening or not. You're just guessing.

Even before we understood gravity we saw that objects fell. Knowing that something is happening is generally easier than knowing how, and usually predates it.

If qualia and consciousness are a thing that the brain does, which all available evidence suggests, then there is no reason they shouldn't happen in large language models.

We may not necessarily understand why or how, but clearly that doesn't stop them.

I mean yeah? I'm pretty happy with my mental health, I don't see an urgent need to improve it. If my mental health was in the shitter, I'd keep church in the back of my mind.

(That's assuming it is causal, which I think would be hard to demonstrate.)

"for that matter atheists don't become Catholic when you show them the data that prayer and church attendance does have a positive impact on your psychological health"

One of these things is not like the others: atheists don't necessarily disagree with the data. If you show a non-HBDer twin studies, presumably they'll try to disagree with them, because they accept that believing the worldview modelled by the studies would "compel" them to become an HBDer, which they don't want for fear of social censure. I.e. there are preferences attached to their beliefs. But if an atheist believes that Christians are more mentally healthy, this does not compel him to believe in God. Why would it? I mean, it's absolutely a value difference, but it's a value difference that isn't hooked to that part of the world model.

I guess you could interpret it as a compass whose definition of "north" is where the needle points, which is always in the same direction relative to its body. The mechanism adds no value, so "aligned with compass north" is equivalent to saying "pointed in a(ny) direction."

If I put on my Antivax hat, there is a very simple argument that covers both: "don't put chemicals in people's bodies that are bad for them."

I think you're reading it as "you will be forced to have power X", which was not my intent. I'm sure there will be subgroups like that. The difference is that their lack of ability will be entirely voluntary. (Which, in the long run, may even make things better?)

The one thing that the Singularity cannot provide is a feeling of overcoming scarcity in an absolute sense; of advancing the cause of humanity. Because to advance is to struggle to get from here to there, and "there" is the absence of scarcity. The journey may be the goal, but the goal of a journey is still to progress; this is inherent and unavoidable.

Good, complicated question. We are, I think, agreed that adults should (usually) be able to do whatever. We are probably also agreed that very young children should not have their life outcome dominated by whatever decision they hold at any given moment. I believe it is also uncontroversial that children in plainly abusive (violent/sexual) households should be removed. Between those points, I think this worry is overstated - parenting is also a skill whose scarcity will be reduced by the singularity.

Maybe if his children want to leave for a month, they can; it is then his problem to avoid this. I don't know where the actual degree shakes out; I suspect the numbers will be relative to circumstances. Presumably an AI will be able to analyze whether an intention to leave is temporary or stable; this should affect decision-making. (Imagine how uncontroversial transitioning would be if satisfaction and outcome could be perfectly forecast.) But in sum, I simply think we have a warped picture of the tradeoffs involved in liberty vs parenthood due to the fact that we live in a very broken world filled with people who are very bad at what they do.

I think it's a hostile phrasing but correct in structure. I guess it could be accused of being an extrapolation. At any rate, it's hard to see how one would avoid it.

One man's "let's preserve human society" is another's "let's preserve the status games that unceasingly victimize me."

I think you're viewing this as "A says they have rights to B's body", whereas parent is viewing it as "C is saying they have the right to prevent what A and B want to do with their bodies."

Great, suffer then. That doesn't give you the right to impose suffering on others.

I expect AI to reduce safetyism because safetyism is, optimistically, a result of uncertainty and miscommunication. If you have poor eyesight, you wear glasses; if you have poor hearing, you wear a hearing aid. My expectation is that many to most people will opt into prosthetics that give them improved social cognition: a feeling, in advance, for how something you're intending to say will be received. Alternatively, you can literally get the AI to translate vernacular, sentiment and idioms; this will be useful when leaving your peer group. Furthermore, it will be much easier to stay up to date on shibboleths or to judge cultural fit in advance.

Humanity suffers from a massive lack of competence on every axis imaginable. We cannot now imagine how nice the post-singularity will be, but for a floor consider a world where everyone is good at everything at will, including every social skill.

I wouldn't expect a paper where LLMs were trained on the performance of their own generated code, and maybe fed profiler results, to report that afterwards the LLMs still wrote trash Python. Part of the issue here is that the LLMs cannot seek out problems to resolve on their own, though we should maybe expect such breakthroughs to only happen shortly before the singularity.
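
For concreteness, here is a minimal sketch of the kind of loop I have in mind, assuming the obvious setup: sample code from the model, actually run it, time it, and keep the timing as a reward signal for a later fine-tuning step. The `sample_candidate` stub is a hypothetical placeholder, not any real model API, and the whole thing is an illustration rather than anyone's published method.

```python
# A rough sketch, not any paper's actual method: sample candidate code from a
# model, execute it, time it, and keep the timing as a reward signal that a
# fine-tuning step could later consume. `sample_candidate` is a hypothetical
# stand-in for whatever model call is actually used.
import os
import subprocess
import sys
import tempfile
import time


def sample_candidate(task_prompt: str) -> str:
    """Placeholder for an LLM call; returns a trivial hard-coded 'solution'."""
    return "print(sum(range(10_000)))"


def measure_runtime(source: str, timeout: float = 5.0) -> float | None:
    """Run the generated program in a subprocess and time it. None means failure."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    start = time.perf_counter()
    try:
        subprocess.run([sys.executable, path], check=True,
                       capture_output=True, timeout=timeout)
        return time.perf_counter() - start
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        return None
    finally:
        os.unlink(path)


def collect_rewards(task_prompt: str, n_samples: int = 4) -> list[tuple[str, float]]:
    """Score several samples; faster code gets a higher (less negative) reward."""
    scored = []
    for _ in range(n_samples):
        code = sample_candidate(task_prompt)
        runtime = measure_runtime(code)
        if runtime is not None:
            scored.append((code, -runtime))
    return scored


if __name__ == "__main__":
    for code, reward in collect_rewards("sum the first 10000 integers"):
        print(f"reward={reward:.4f}  code={code!r}")
```

The point of the sketch is just that the reward comes from executing the code rather than from imitating reference text; a profiler could slot in where the wall-clock timing is.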