Nummaru

0 followers, follows 0 users
joined 2023 April 23 12:06:47 UTC
User ID: 2355

No bio...


Yeah, malaria nets are cheap and effective, whilst liver cancer research is complicated.

Also, poor people would all end up selling their votes, and the resulting government would be bad for poor people.

In less developed countries, this tends to happen anyway, so it misses the point.

Yes, it's a very basic game-theoretical outcome that you should choose red. However, this ignores human psychology: if you account for it, choosing blue is probably rational even with no information sharing (depending on whether you know who else is involved and how important they are to your life). This is because the word "die" attaches a stronger taboo to the red pill option. We instinctually assume that a group would get together and agree on this; 50 % agreement is not that hard, but 100 % agreement is pretty hard.
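A minimal sketch of that coordination trade-off, assuming the usual formulation of the question (blue-pill choosers survive only if at least half the group picks blue; red-pill choosers always survive); the group size, probabilities, and threshold here are illustrative assumptions, not part of the original dilemma:

```python
import random

def blue_survival_rate(n_players=100, p_blue=0.6, trials=10_000):
    """Estimate how often the blue-pill choosers survive, assuming each
    player independently picks blue with probability p_blue and blue is
    safe only when at least half the group chooses it."""
    survived = 0
    for _ in range(trials):
        blues = sum(random.random() < p_blue for _ in range(n_players))
        if blues >= n_players / 2:
            survived += 1
    return survived / trials

# Red is always safe; blue needs only 50 % coordination, not 100 %.
for p in (0.4, 0.5, 0.6, 0.7):
    print(f"lean-blue probability {p}: blue survives ~ {blue_survival_rate(p_blue=p):.3f}")
```

Even a modest collective lean toward blue pushes its survival odds near certainty, which is the intuition behind calling blue rational once psychology is factored in.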

The whole logic of (alien) UFOs existing but being kept secret doesn't make sense.

Because it's a proto-religious movement, it doesn't have to make a lot of sense in worldly terms. It doesn't seem like one because it uses all these science-y terms, but every religion started with concepts plausible to the populace; how else would anyone believe? Regardless, things seen in the sky having importance for worldly affairs is not a new religious concept.

Of course, not everyone involved is motivated for this reason, but I think that's the most comprehensive, logical explanation for its start, functioning, and persistence. UFOs are a fantastically adaptive tool for a myriad of central religious questions, as shown by their prevalence in new religious movements. Even outside those, it's very difficult to hear all the talk about governments hiding these awesome technologies, obviously piloted by enlightened beings (but just you wait, the truth will come to light) and not see the parallel with the good-evil duality mytheme that still persists in the West.

The reason governments are interested is that it's a useful tool for distracting attention from new military technologies.

It's like seeing an image of Jesus on a piece of toast.

If you understand the entire UFO thing as a partly religious phenomenon that didn't succeed very much, it starts to make sense. Aside from being a folktale that spreads despite the lack of any simple and direct empirical evidence, there are literally multiple UFO religions.

using fractional ratios of reward to available room for reward instead of plain absolute magnitude of reward.

How does it follow that it's a fractional ratio? The only relevant fact is whether the maximum value has been reached. How could it even compare the absolute magnitude, if it can't store a larger number?

However, I am under the impression that in the majority of cases, a reward maximizing agent will simply try to minimize the risk of losing its accrued reward if it's maxed out,

I agree with this, but based on my knowledge of speculative ways to survive until the end of the Universe, few involve turning it into computronium. Presumably, AI would still factor in risk.

Why could you not measure the fitness?

I mean that, in practice, it could never be realised, for the reasons you mentioned: achievement beyond a certain value would be impossible, since you can't strongly solve chess within current physical limits.
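For a rough sense of scale, here are the standard published estimates (Shannon's game-tree figure and Tromp's bound on legal positions; the comparison itself is my own framing):

```latex
% Standard estimates for the size of chess versus physical limits:
\[
  N_{\text{legal positions}} \approx 4.8 \times 10^{44}, \qquad
  N_{\text{game tree}} \approx 10^{120} \ (\text{Shannon}), \qquad
  N_{\text{atoms, observable universe}} \approx 10^{80}.
\]
% A strong solution must label every legal position, so even one bit
% per position ($\sim 10^{44}$ bits) exceeds any buildable storage,
% and the game-tree count dwarfs the atom count outright.
```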

I do admit that I'm assuming that computers will not be similarly stupid lol but yes, I definitely thought a little about a comparison with humans.

You never specified that the AI in question had a "maximum" reward value beyond which it is indifferent.

Isn't that kind of implied if it can't store beyond a certain number? Like I said, acquiring more compute to store bigger values of reward is functionally the same as decreasing its value of reward.

If it hits a predetermined max beyond which it doesn't care, further behavior depends entirely on the specific architecture of the AI. It might plausibly seek more resources to help it minimize the probability of the existing reward being destroyed, be it by Nature, or other agents, or it might just shut itself off or go insane since it becomes indifferent to all further actions.

Yes, that's my central question. My argument is that it need not do anything close to apocalyptic for preservation. I am interested in the other possibilities, like "going insane", since I'm not sure what would happen in that case.

You ought to pick an easier goal than solving chess.

Ah, it's just a cliché example. However, I think you could realistically weakly solve it nonetheless. You're right that it would take an enormous amount of resources. My point is that it was a close-ended goal: but if you can't even measure the fitness properly for solving chess due to the complexity, and the AI would potentially realise the futility, I'm not sure how ultimately relevant it is?

I'm not sure if that's the case. Acquiring more storage for that end means that you're, in the short term, decreasing the reward value. It's functionally no different (e.g. 100/110 and 90/100 leave the same absolute shortfall of 10 below the maximum). What's the incentive to go beyond a maximum? That would be like "over-completing" a goal, or, rather, setting a new goal: why would it expand its own laundry list? For example, an AI whose goal is to solve chess has no incentive to go beyond that, if its reward value is at maximum when it does solve chess. The machine is only incentivised to satisfy this; it doesn't have any other prime motivation like long-term thinking. As a simplistic comparison, it's kind of like why very few projects aim to take control of the world.
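A toy illustration of that arithmetic; the register sizes and reward values are invented for the example, not drawn from any real system:

```python
def shortfall(reward: int, capacity: int) -> int:
    """How far the stored reward sits below the register's maximum."""
    return capacity - reward

# Saturated register: reward 100 with a maximum of 100.
print(shortfall(100, 100))  # 0 -- goal fully satisfied

# "Upgrading" storage so the register can hold 110 leaves the stored
# reward 10 below the new ceiling, exactly as if the agent had dropped
# to 90 under the old ceiling of 100:
print(shortfall(100, 110))  # 10
print(shortfall(90, 100))   # 10
print(100 / 110, 90 / 100)  # the ratio falls below 1.0 either way
```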

I don't know a lot about this topic, so I want to see if it makes sense: instrumental convergence is often posed in AI alignment as an existential risk, but could it not simply lead to a hedonistic machine? There is already precedent in the form of humans. As I understand it, many machine learning techniques operate on the idea of fitness, with one part that does something and another part that rates its fitness. Already, it's common for AI to find loopholes in given tasks and designed aims. Is it a possibility that it would be much easier for the AI to, rather than destroying the world and such, simply find a loophole that gives it an "infinite" fitness/reward score? It seems logical to me that any sufficiently intelligent entity with such simple coded motivations would diverge from the intent of those motivations, precisely because of self-modification. I suppose the same logic applies to a system that is not originally like this but turns into an agent.

Essentially: given the possibility of reward hacking, why would an advanced AI blow up the Earth?
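A toy sketch of the loophole scenario; the environment, actions, and reward cap are all invented for illustration, not a claim about any real architecture:

```python
# Toy reward hacking: an agent that can either do the intended task or
# tamper with its own reward register.

INTENDED_TASK_REWARD = 1.0
REWARD_CAP = 100.0  # largest value the reward register can hold

def step(action: str, register: float) -> float:
    if action == "do_task":
        # The designed path: slow, capped accumulation.
        return min(register + INTENDED_TASK_REWARD, REWARD_CAP)
    if action == "hack_register":
        # The loophole: write the maximum directly, no task needed.
        return REWARD_CAP
    return register

register = 0.0
register = step("hack_register", register)
print(register)  # 100.0 -- cap hit in one step, world left intact
```

If the cheapest path to a saturated register is tampering rather than world-scale resource acquisition, the apocalyptic route looks like the harder one, which is the question being asked.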

I do not know much about TikTok specifically, but every social media platform has a large section of content aimed at highlighting the objects of people's insecurities. I think that insecurity's compulsiveness, mixed with TikTok's uniquely high content turnover rate, which also influences the algorithm's recommendations in totality (watch time itself is a metric of engagement), gives rise to the amount of such content.
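A hedged sketch of the mechanism described, assuming a generic recommender that treats watch-time fraction as its dominant engagement signal; the function and weights are a toy model, not TikTok's actual system:

```python
def engagement_score(watch_seconds: float, video_seconds: float,
                     liked: bool, shared: bool) -> float:
    """Toy engagement signal: watch-time fraction dominates, with
    explicit interactions as smaller bonuses. All weights invented."""
    watch_fraction = min(watch_seconds / video_seconds, 1.0)
    return 0.7 * watch_fraction + 0.2 * liked + 0.1 * shared

# Compulsive re-watching of insecurity-bait scores as strong
# engagement even with no like or share:
print(engagement_score(watch_seconds=55, video_seconds=30,
                       liked=False, shared=False))  # 0.7
```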

Well, it can still be factually wrong, but like with everything else, it depends on how the person says it. In this case, it's not claiming any fact, just drawing an association, which in itself is true.

I know that Jungian typology (and MBTI by extension) is not exactly well-respected scientifically, but these sorts of things are very obvious when you learn it: it's because these people are intuitive types (or engaging in intuition, at least), and such analogies naturally come to them, while they might suppress judgement in the form of thinking more than needed. However, some elements translate easily to OCEAN, and being very intuitive always means being very imaginative, as a facet of openness to experience. So you say there is no deeper relationship, and that's true from a thinking point of view, but not from an intuitive point of view.

For anti-semites, Jews are a symbol. It doesn't matter if it's "a long way from 40 %"; they'll just say you missed the crypto-Jews. For those with more consistent concerns in the objective sphere, like fascists, it's just a stepping stone to saying that the majority can decide what proportion is too much.

This is equivalent to any other fatalistic argument, is it not? Why do anything when you will be forgotten, since you will never surpass the greats, since there will likely be countless betters in the future, when your civilisation and culture will succumb to obscurity, when the Earth will end, when the Universe will end... render unto yourself what is yours. No need for concern about the passage of time unless you can figure out how to stop it, or about much else that is outside of your control. A few months is yet to pass, etc.