
InsanityCheck

0 followers   follows 0 users
joined 2022 September 07 05:54:49 UTC

User ID: 932


If you punish a child, it often throws a tantrum. If said child is "stronger" or more capable than you, that becomes a problem: why should it listen to you? Do you accept punishment from other people?

The only reason humans are "aligned" with each other is that we are not that different, capability-wise. No matter how brilliant you are, if you break the law there is a chance of getting caught, which makes breaking it risky.

Regarding initialization: yes, differently initialized networks (mostly) converge to the same performance on the training data. How a network behaves on out-of-distribution data, however, can essentially be random, and should be expected to vary between initializations.
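A minimal sketch of this point (my own toy setup, not from any particular paper): two tiny MLPs with different random seeds are trained on the same 1D regression data. Both typically reach similar, low training error, yet their predictions far outside the training range diverge, because nothing in the training objective constrains them there.

```python
import numpy as np

def train_mlp(seed, X, y, hidden=32, lr=0.1, steps=8000):
    """Train a 1-hidden-layer tanh MLP with full-batch gradient descent."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 1.0, (1, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.1, (hidden, 1))  # small output init for stability
    b2 = np.zeros(1)
    n = len(X)
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)            # forward pass
        pred = h @ W2 + b2
        err = pred - y                       # dL/dpred for 0.5 * MSE
        # backpropagation
        gW2 = h.T @ err / n
        gb2 = err.mean(0)
        dh = (err @ W2.T) * (1.0 - h ** 2)
        gW1 = X.T @ dh / n
        gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    return lambda Xq: np.tanh(Xq @ W1 + b1) @ W2 + b2

X = np.linspace(-1, 1, 50).reshape(-1, 1)   # training inputs
y = np.sin(3 * X)                            # training targets

f_a = train_mlp(seed=0, X=X, y=y)
f_b = train_mlp(seed=1, X=X, y=y)

mse_a = float(np.mean((f_a(X) - y) ** 2))
mse_b = float(np.mean((f_b(X) - y) ** 2))
x_ood = np.array([[4.0]])                    # far outside the training range

print("train MSE:", mse_a, mse_b)            # both small
print("OOD preds:", float(f_a(x_ood)), float(f_b(x_ood)))  # typically disagree
```

The training losses agree because both networks are fit to the same data; the out-of-distribution predictions are essentially artifacts of the random initialization.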

Lastly, there actually are "optimization demons" in LLMs: a recent paper showed that LLMs contain learned subnetworks that simulate a few iterations of a gradient descent algorithm. I have not read it in depth, however, so it might be stupid (as much research is nowadays).
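The core construction behind that line of work (e.g. von Oswald et al., "Transformers learn in-context by gradient descent") can be checked numerically. A minimal sketch, with a toy setup of my own: for in-context linear regression pairs (x_i, y_i), an unnormalized *linear* attention layer with keys x_i, values y_i, and query x_q computes exactly the prediction of one gradient-descent step on least squares starting from W = 0.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 16
X = rng.normal(size=(n, d))        # in-context inputs (attention keys)
y = rng.normal(size=(n, 1))        # in-context targets (attention values)
x_q = rng.normal(size=(d, 1))      # test input (attention query)
eta = 0.1                          # GD learning rate

# One GD step on L(W) = 0.5 * sum_i ||W x_i - y_i||^2 from W = 0:
# grad = sum_i (W x_i - y_i) x_i^T = -sum_i y_i x_i^T, so W1 = eta * sum_i y_i x_i^T.
W1 = eta * (y.T @ X)               # shape (1, d)
pred_gd = W1 @ x_q                 # prediction after one GD step

# Unnormalized linear attention: sum_i value_i * <key_i, query>, scaled by eta.
scores = X @ x_q                   # (n, 1): inner products <x_i, x_q>
pred_attn = eta * (y.T @ scores)   # sum_i y_i <x_i, x_q>

print(float(pred_gd), float(pred_attn))  # identical up to floating point
```

Whether trained LLMs actually learn this circuit is the paper's empirical claim; the algebra above only shows that a linear attention layer *can* express the GD step.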

I did some CBT a few years back, and one of the things I most appreciated was being held accountable.

Learning to handle anxiety is not fun. I could have gotten most of the information from reading a few articles or books and then never acted upon it. It helps to have another human involved in the process. You are not afraid of being judged by GPT, but I think you need that fear to get your shit together.

Nowadays this would help me much less, I think, as I am able to hold myself accountable to my goals. And even though therapy helped a little, I am very skeptical of its general usefulness. Of all the people I know who are in therapy, fewer than 1 in 4 have actually "solved" their issue, and those who have are mostly people with low-level anxiety, while those who haven't are dealing with depression- or bipolar-level problems.