
astrolabia

0 followers   follows 0 users   joined 2022 September 05 01:46:57 UTC

No bio...

User ID: 353

Public school teachers are a parasite class? This seems like painting with a broad brush.

Thanks for confirming. I thought I was taking crazy pills. I downgrade my initial assessment: this was not a great story.

Yeah this is my take too.

Also, maybe I'm dumb, but I still don't understand why it helped the AI that there was a manual override.

Sorry, I wasn't trying to be snarky. The original comment made it seem like truth about matters of fact was a minor issue that could be routed around, but in my experience that was the central problem with all religions.

Is it? As someone who took religion seriously enough on its own terms to be turned off by its falsity, I see it as a central problem that no religion has solved, by virtue of them all explicitly saying false things.

Great incomprehensible story. Some feedback:

  1. Did the sister end up mattering? What was the first guy supposed to do differently if his sister were in the straight?
  2. The American and Japanese AIs are just supposed to collaborate without a dispute resolution mechanism? That sounds dumb.
  3. I didn't follow how it mattered that this particular person was woken up. What was she supposed to do differently than the others? Why didn't she want to be honest?
  4. "We have no confirmed evidence" - I think a general would be able to smell BS at this phrasing. Why don't you just have her lie?

Probably Aella

I agree a whole pack at once might do it, but I don't expect that the whole pack would explode at once.

... unless it were encased in bone!

Yep. If everyone carried around a big red button that would kill them painlessly and instantly, then even if it was never pressed "accidentally", I think most people wouldn't make it very long.

What kinds of people have the power to end Trump? His staff?

I thought Dradis was just saying that Westerners could honestly and accurately speak and reason about problems with Eastern Europe without sounding racist, so they were able to effectively deal with reality and achieve their goals.

What a payoff, thanks for asking, @butts!

This guy reminds me a lot of a younger version of myself. I suppose stoicism is a skill - I certainly attempted to reach inner peace with a combination of weed and videogames at some points, but was driven to try at life by a desire for women. It sounds like this guy was "scared straight" from engaging with the world.

This guy honestly sounds accomplished to me in his own way. He really learned to tend his own garden! I wouldn't trade my life for his, but if I were a NEET, I can't think of a nicer setup.

I wonder if his parents could have done anything differently, or if this is just the way it goes sometimes.

the incentive to be truthful and honest is minimal.

Except to the extent we can avoid doom through correct perception and action.

You've almost exactly described elite STEM PhD programs!

This is amazing, it's like Borges' Library of Babel.

If I were Amazon, I think I'd have a hard time drawing a line between actual content and low-effort slop. Though honestly, that sounds like a great use for LLMs.

Relatedly, there was a poster here who once explained that the main rationale for the separation of church and state was not to improve governance of the state, but to protect the church from the corrupting influence of power. Blew my mind at the time but makes total sense now.

Let's say you overlooked telling it about some fairly critical detail... It's not going to be able to figure that out on its own.

Right, but neither would a human, unless they also had more direct access to the problem somehow. But that's what agentic scaffolding is for.

There's also more general issues with agentic systems specifically and how quickly they seem to fall victim to noise and hallucinations without human supervision

Even with tens of thousands of experts spending billions of dollars on R&D for a decade to solve these problems?

but I'm more on the fence as to whether this can be ameliorated.

Seems like you're retreating to "I'm not sure"?

I'm still confused what you're claiming. Who is claiming that cognition is entirely reducible to statistical inference? In any case, are the LLM companies somehow committed to never using anything but statistical inference?

It really is remarkable what strong claims otherwise smart people will make about the impossibility of AI doing something. As evidenced by IGI's reply, if someone has gotten this far without updating, you usually shouldn't expect a mere compilation of strong evidence to change their mind, only to prompt the smallest possible retreat.

I had an amazing conversation with an academic economist that went along similar lines. I asked why his profession generally wasn't willing to even entertain the idea that AI could act as a substitute for human labor, and he said "well it's not happening yet, and making predictions is beyond the scope of our profession". Just mind-boggling.

To empathize a little, I think that people intuitively understand that admitting that a machine will be able to do everything important better than them permanently weakens their bargaining position. As someone who hopes humanity will successfully form a cartel before it's too late, I fear we're in a double-bind where acknowledging the problem we face makes it worse.

Can you help me understand this claim more concretely? E.g. if an LLM had just successfully designed a bridge for me, but then I modified the design to make it not useful in some way, for some kinds of changes it wouldn't be able to tell if my change was good or not? But a human would?

I agree that alignment is easy in the sense of getting models to understand what we want, but it's far from clear that it's easy in the sense of making models want the same thing. RL models reward hack all the time.

What on earth makes you think instrumental convergence "doesn't actually happen"? It happens all the time, e.g. by reward hacking or sycophancy! It's almost the definition of agency!

Neuralese is a myth? What is that supposed to mean? RL on soft tokens is an active area of research and will almost certainly always work better (in the sense of getting higher rewards) than using hard tokens everywhere.
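
To make the hard/soft distinction concrete, here's a minimal toy sketch (illustrative names and sizes, not any lab's actual training code): a hard token commits to one discrete id, while a soft token feeds the model a probability-weighted mixture of embeddings, which stays differentiable.

```python
# Toy sketch of hard vs. soft tokens (illustrative, not real training code).
import torch
import torch.nn.functional as F

vocab_size, d_model = 8, 4
embedding = torch.nn.Embedding(vocab_size, d_model)
logits = torch.randn(vocab_size)  # stand-in for a model's next-token logits

# Hard token: commit to one discrete id, then look up its embedding.
# The argmax is non-differentiable, so gradients stop here.
hard_input = embedding(logits.argmax())

# Soft token: feed in the probability-weighted mixture of all embeddings.
# Every hard token is a special case of this (a one-hot mixture), so
# optimizing over soft tokens searches a strictly larger space and can
# only match or exceed the best hard-token reward.
soft_input = F.softmax(logits, dim=-1) @ embedding.weight
```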

none of it is going to happen in the way the AI safety movement predicts

Care to elaborate? What kinds of things do you think are going to happen differently than the AI safety people think?

Is this a bit? Yes, collecting a dataset is tons of work, but tokenizing it is trivial.
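
For a sense of scale, here's what "trivial" looks like in practice: a minimal sketch using tiktoken (the corpus path is a hypothetical placeholder).

```python
# Tokenize a text corpus in a few lines using tiktoken.
# "corpus.txt" is a hypothetical placeholder path.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era BPE encoding
with open("corpus.txt", encoding="utf-8") as f:
    token_ids = enc.encode(f.read())
print(f"{len(token_ids):,} tokens")
```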

I agree with everything you wrote in this reply. But your reply seems to have nothing to do with your message I originally replied to. Why were you mentioning the cost of tokenization?