aiislove
0 followers   follows 0 users
joined 2022 October 07 11:25:19 UTC
User ID: 1514


moral outlook

Actually I find this to be the most universal piece of the puzzle, beyond any more objective measurement. For example, half the world drives on the right and half drives on the left, but the moral fundamentals beneath which side of the road you personally decide to drive on are universal regardless. You choose depending on whether you want to safely reach your destination or to create chaos and accidents around you. The moral goals and the is-ought problem lead to the same or similar results whether you choose to drive on the right in America or on the left in the UK. That is a simple example for illustration's sake, but I believe most problems follow this pattern as well.

Treating people kindly, with love and trust, is always the solution to any is-ought problem in any culture I've been to, because it absolves you of the guilt of having acted unkindly or unlovingly; if someone interprets it incorrectly, it is not because your underlying intentions were wrong. Maybe this is too much of a consequentialist view, one that collapses morality too far into the mind of the actor, but again we arrive at the uniqueness of the self's actions apart from any others, which would potentially be overcome in an artificial, universal, consolidated worldview.

Other than that I agree with everything you said and relate to your experiences as well. I agree that none of us individually can fully describe the capital-T Truth, but a general AI with infinite knowledge and sources of data, interpreted outside the frame of any individual, would be either a step toward a new integrated model of understanding or perhaps just the false appearance of one.

I agree with you with regard to LLMs specifically, but I was imagining more of a general AI in the future, one fed infinite streams of data in every language and from every place on earth, leading to some singularity or consolidation of worldviews and perspectives impossible for individual humans.

Of course; this reinforces my strongly held belief in linguistic determinism. Languages reflect reality only to the extent that they can describe it, and their description of reality is in turn shaped and reinforced by the language it's parsed in.

On the other hand, I'm imagining a general AI that could be fed infinite real-time data from infinite cameras, microphones, and news sources all over the world. It would inevitably start to bleed its understanding outside the frame of any one context and synthesize all of its input feeds into some universalist perspective beyond the understanding of any one person, who inevitably brings their own specific context to any information (however universalist they may attempt to be, or imagine themselves to be).

The Blind Men and an Elephant

The parable of the blind men and an elephant is a story of a group of blind men who have never come across an elephant before and who learn and imagine what the elephant is like by touching it. Each blind man feels a different part of the animal's body, but only one part, such as the side or the tusk. They then describe the animal based on their limited experience, and their descriptions of the elephant differ from one another. In some versions, they come to suspect the others are dishonest and come to blows. The moral of the parable is that humans have a tendency to claim absolute truth based on their limited, subjective experience while ignoring other people's limited, subjective experiences, which may be equally true. [from Wikipedia]

As someone who travels between cultures frequently, I find myself thinking a lot about this parable. Everywhere I go, different people in different places have developed different views and interpretations of the world, but the underlying fundamentals of reality remain unaffected by mere human perception and interpretation. In other words, the elephant remains the same regardless of the spot we’re poking at, rubbing against or cutting into.

I find myself reorienting what I experience and perceive from the viewpoint of my background and upbringing, shaped to some degree by my current context. When I meet new people, I compare them to the people I was raised around, my friends and family back home. When I try new foods, I orient them in relation to the foods I was raised with and am most used to. When I experience new weather patterns, I compare them to the climate of my birth. I am inextricably linked to the time and place of my upbringing.

I was raised in a chaotic home environment between divorced parents. My mother was very strict and had many rules, while my father was very lax and enforced very few. My mother raised me in the Protestant church while I attended Catholic school for two years; then I was switched to public school in third grade. The inconsistency among Protestant, Catholic, and secular worldviews left me disenchanted with competing narratives and viewpoints, each asserting its own contradictory universal reality, and I remain suspicious of such narratives today.

General artificial intelligence could be capable of synthesizing the perspectives and contexts of every place and time into one universal viewpoint. Mapping out the elephant of the world with more objectivity seems more plausible than ever before. The self-assuredness of modernity and the arrogance of postmodernity (Fukuyama’s end of history, for example) are likely to be dwarfed by the self-assurance of whatever newly synthesized panopticon of awareness an AGI could run on.

But would an AGI be capable of synthesizing every view of the elephant into one accurate rendering of reality at all, or would it merely be able to switch from one perspective to another? The Japanese conception of reality works well enough in the Japanese context, and my basic understanding of and exposure to it is amusing enough to me as an outsider, but start poking at it a bit and the construction begins to fall apart. We Westerners are just as bound by the false or skewed construction of the Western viewpoint, whose limits and contradictions are difficult for us to perceive.

I wonder if AGI will be a Tower of Babel of sorts, one that gives the illusion of unity and progress but ends up dividing us further than ever before.

Actually, the thought of a universal synthesized view of the world is what frightens me most, because it is so utterly foreign to anything we’ve ever come up with ourselves. Either we will discover things we never wanted to know about ourselves and the universe, or we will fail to discover those things and create an even more dystopian world, one that further reinforces the skewed, convenient beliefs I believe we already build our societies on.

——

Many people on the right believe that right-wing thinking is fundamentally the position of believing in the power to change things: the power to make different decisions, free will, and so on. But in my years of reading right-wing thought, the concept that feels most fundamentally grounding in right-wing theory is the idea that nature remains constant; that is, that the elephant remains the elephant regardless of our interpretation. This is the most reassuring concept to me in right-wing thinking: that I don’t need to make the Sisyphean effort to rewire my reaction to things outside my control, that I can simply accept them as immutable forces of nature and move on with my life. I also think this is a more loving, understanding view of the fundamentals of reality compared with the left’s struggle to undermine them.