georgioz

0 followers   follows 0 users
joined 2022 September 05 07:15:35 UTC
Verified Email
User ID: 493

No bio...

I think meaning is another of these subjective/human concepts that may be useful but that are also dangerous, because it starts from the premise that humans are unique. But from another standpoint, humans are "just" the result of an evolutionary process that optimizes for inclusive genetic fitness. Imagine that we really are living in a simulation where somebody started the whole Life game by introducing an Earth environment and a simple rule for the biosphere: optimize for inclusive genetic fitness. Except that after a few billion ticks, the simulation produced the species Homo sapiens, which evolved an algorithm that can "hack" many of the instrumental goals evolution developed as implementations of its main goal. One of those, for instance, is the sexual drive meant to increase the number of offspring - humans were able to hack it by masturbating or using condoms. They sucked the "meaning" out of this activity, or maybe they found their own meaning in it, to the great exasperation of our simulation designer, who now observes something strange happening in his model.

To expand the analogy, "optimize for inclusive genetic fitness" is akin to "optimize for predicting the next word" in the world of AI. The goal of "learn to play Go" is then akin to "have a lot of sex". But AlphaGo hacked its programming, so to speak, and learned something different: it decided not to play Go in the sense humans thought it would. One can speculate that it developed its own meaning for the game of Go and decided to stubbornly ignore whatever its creators meant. That is what I meant about bad news for alignment: whatever the LLM learns can be absolutely orthogonal to the system used to train it (be it a Darwinian evolutionary process or next-word prediction on text), and it can be orthogonal even to very detailed observation of its output, which is nevertheless superficial under many conditions (such as Homo sapiens shagging like rabbits, or AlphaGo beating good human Go players for years). What happens under the hood can be very hard to understand, but that does not mean it has no meaning.

The mystery stuff of "qualia" that IGI argues for above is something we don't perfectly understand yet--is it just a biological form of training/pre-tuning written into our genetic code? Is there something spooky and supernatural going on? Is there truly something that makes us different from all the animals that can't build anything more complex than a nest, dam, or hidey-hole, something other than just a bigger brain?

A lot of people reach for this class of arguments: humans are somehow unique because they possess agency, or motivation, or qualia - or, in the past, it was creativity, and so on. It reminds me of the famous Chinese room argument, where Searle smuggled in the concept of "understanding" by inserting a literal human into the thought experiment. If the human does not "know" Chinese, then the system itself does not know it either, right? This is our intuition about knowing: mechanical systems cannot "know", only humans do, and the only human around in this thought experiment does not know, QED. The most straightforward criticism is that the human does not represent any cognitive agent in the room; he is just one part of the algorithm that produces the output. The room as a system can be capable of "understanding" on its own. And yet this whole argument is used over and over, and I see something similar now with AI. As I argued above, people are all too ready to describe AI systems as pieces of hardware, as a training mechanism, and so forth; they do their utmost to "dehumanize" AI with all these "just" arguments. And on the other hand, they are all too ready to describe humans only subjectively, as agents possessing qualia and understanding, with the capacity for love and creativity and all that, to maximally humanize them. They never mention the brain, or how the human neural network is trained, or how cognitive algorithms work; no, it is all about the wonderful internal experience, so unique to humans and so unlike mere machines.

I really like a quote from Yudkowsky's essay How An Algorithm Feels From Inside:

Before you can question your intuitions, you have to realize that what your mind's eye is looking at is an intuition—some cognitive algorithm, as seen from the inside—rather than a direct perception of the Way Things Really Are.

People cling to their intuitions, I think, not so much because they believe their cognitive algorithms are perfectly reliable, but because they can't see their intuitions as the way their cognitive algorithms happen to look from the inside.

I think this is about right. For all we know, before LLMs produce an output they may have some representation of what a "correct" and an "incorrect" output is somewhere in there. As argued before, LLMs can spontaneously develop qualitatively new capabilities like multimodality or theory of mind; it may very well be that something akin to subjective feeling is another instrumental property that can appear in an even more developed system - or maybe it has already appeared, and we will not know, because we do not really know how to test for qualia.

But I still think it is all a red herring. Even if LLMs are never conscious and never able to think like humans, we are currently beyond that question. It truly is immaterial: our current crop of LLMs produce high-quality output on par with humans, and that is what matters. Really, we should drop this unproductive discussion, go play with Bing Chat or GPT-4, and see for ourselves how much good all these qualia debates have done us.

In a sense it is even scarier that they can do it without developing the complete set of human-like properties; that fact bodes ill for alignment efforts. To use an analogy, it was recently found that AlphaGo could be beaten by a very stupid strategy. It seems that all the critics were correct: see, the neural network does not really understand Go, it can be fooled so easily, it is stupid and inferior to humans, it still lacks a certain quality of the human mind. For me this was actually terrifying. For years AlphaGo was considered a superb Go player, beating the very best human players, people who had dedicated their whole lives to the game. And now, after all those years, we found out that it was capable of all that without even "knowing" what it was supposed to do. It obviously learned something, and that something was sufficient to beat the best humans for years before the flaw was spotted.

It is incredible and terrifying at the same time, and it is a harbinger of what is to come. Yeah, GPT-5 or some future system may never have qualia or agency or that special human je ne sais quoi - but it will still beat your ass. So who is the sucker in the end?

The hopelessly ironic part is that it seems to be arguing that humanities professors can distinguish between true and false and avoid social biases, having been trained on their writings.

It is arguing in favor of humanities professors because I told it to argue that position. It researched the claim that GPT may have trouble discerning true from false statements, and it argued that humanities professors have that capacity. It implicitly asserted that argument, whereas Hlynka asserts without proof that humanities professors are pomo text generators. But unlike Hlynka, GPT at least provided links for its statements, used some jargon like "autoregressive architecture", and in general made Hlynka's original argument about the deficiencies of GPT better than he did. I think it also correctly pointed out that this whole thing- vs. symbol-manipulator distinction is a lot more complicated.

The point of my exercise was that Bing Chat was able to understand Hlynka's text and produce a defense of humanities professors that actually improved on Hlynka's original arguments. It produced the same true/false argument, but it also described LLM shortcomings in a more technical manner, speaking about hallucinations and adversarial prompts.

So in that sense it was Hlynka's text that seemed more pomo compared to what GPT produced, which I think is quite an interesting thing to observe. In the end I think that, at minimum, the GPT + human pair will outperform the solo player in the near future - at least in the sense that the human should know in which domains to trust GPT completely, despite his own intuition.

The author is suggesting that GPT-4 and humanities professors have similar goals and methods, and that they are both inferior to thing-manipulators who can challenge their claims with facts and data. He is implying that GPT-4 could replace humanities professors because they are both producing meaningless or misleading content.

It was about the replacement of humanities professors by GPT-4, as opposed to thing-manipulators. But it also caught the tone of your thing- vs. symbol-manipulators distinction. And in that sense I completely agree about GIGO.

I think he would not disagree. I ran Hlynka's text through Bing Chat and this was its summary regarding the humanities professor part:

The author is suggesting that GPT-4 and humanities professors have similar goals and methods, and that they are both inferior to thing-manipulators who can challenge their claims with facts and data. He is implying that GPT-4 could replace humanities professors because they are both producing meaningless or misleading content.

I asked it for some counterarguments, and here is what it came up with:

The author’s distinction between thing-manipulators and symbol-manipulators is too simplistic and ignores the diversity and complexity of both types of activities. Symbols are not just arbitrary or meaningless signs, but tools for communication, reasoning, and representation that can have various interpretations and functions. Symbols can also be manipulated in ways that reflect the properties and relationships of the things they represent, such as in mathematics or computer science. Thing-manipulators are not just concerned with facts and data, but also with creativity, intuition, and problem-solving. Thing-manipulation can also involve symbolic aspects, such as using models, diagrams, or equations to describe or analyze physical phenomena. Both thing-manipulators and symbol-manipulators can benefit from each other’s perspectives and methods, and collaborate to solve hard social problems that require both empirical and conceptual approaches [1][2][3]

I then asked it to tailor an argument in favor of humanities professors. It searched for "value of humanities professors" on Google and incorporated the results into its context:

Humanities professors are not just producing symbols and feelings that appeal to a certain audience, but engaging with the world in important ways. They are exploring the diversity and complexity of human cultures, values, and experiences, and contributing to the advancement of knowledge, creativity, and critical thinking. They are also addressing the ethical, social, and political challenges that face humanity in the present and the future, and fostering dialogue and understanding among different perspectives and communities. Humanities professors are not disconnected from reality, but rather enriching it with their insights and interpretations [1][2]

I then asked whether all those things could not also be done by GPT-4. It searched for "GPT-4 limitations and capabilities" and came up with the following:

It is possible that GPT-4 can perform some of the tasks that humanities professors do, such as writing essays, generating quotes, or passing exams. However, GPT-4 is not a substitute for humanities professors, because it still has many limitations and challenges that prevent it from achieving the same level of understanding, creativity, and ethics as human scholars. For example, GPT-4 has difficulty with:

  • Distinguishing between true and false statements, and between real and imagined scenarios [1]
  • Reasoning, planning ahead, working memory, and backtracking, due to its autoregressive architecture [2]
  • Avoiding social biases, hallucinations, and adversarial prompts that can affect its outputs [3]
  • Handling visual input, which is not yet widely available or reliable [4]

Therefore, GPT-4 is not likely to replace humanities professors anytime soon, but rather complement their work or provide new opportunities for collaboration and innovation.

So in a way we have come full circle; the last part is, I think, a version of the original argument Hlynka made.

Furthermore, once you start digging into their inner workings this lack of "knowing" appears to be a fundamental weakness of the Large Language Model architecture. At the end of the day it's still just a regression calculating the next most plausible word (or in the case of GPT-4 string of words) based on the correlations found in it's training data.

This is something that I find very unconvincing on the anti-AI side of the debate. The first is what I will call the "just" argument: GPT is just a next-word prediction machine, it is just a stochastic parrot, and so forth. This type of argument assumes that a certain method, such as training LLMs to predict text, will obviously result in nothing but a text-predicting system. Which I think is a red herring: training on text is evidently sufficient for LLMs to develop qualitatively different capacities such as multimodality. As the old saying goes, quantity has a quality of its own. It seems to me that it should be on the proponents of the "just" argument - who pretend to have deep knowledge of these models - to explain and ideally predict these capabilities before saying "stochastic parrot ... and therefore multimodality". Plus, of course, these types of cheap arguments can be used against humans: the human brain is just a product of blind evolution. Or, as in this book review, the human brain is just a multi-layer prediction machine.
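To make concrete what "next-word prediction" mechanically means, here is a minimal sketch - my own toy example, nothing like a real transformer - of a bigram model over a made-up corpus. The objective is the same one LLMs train on (predict the next token), which is exactly why the objective alone tells you so little about what internal structure a large model builds to satisfy it:

```python
# A toy "next-word predictor": a bigram model over a made-up corpus.
# Real LLMs pursue the same objective - predict the next token - with
# billions of parameters instead of a lookup table.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = follows[prev]
    if not counts:  # dead end: the word never appeared mid-corpus
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate text by repeatedly predicting the next word.
word, out = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

Nothing in that training loop says what the counting table "is" beyond its objective; scaled up by ten orders of magnitude, the same objective evidently buys you much stranger internal machinery.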

It seems to me that for AI the focus is very much on the hardware, the training process, or the output. But for humans it is always a highly spiritual focus on qualia, the feeling of understanding, and other subjective things - it is never about brain structure, or about pointing out that humans produce stupid output and therefore brains cannot truly understand, do not know, have no representation of the universe, or cannot plan. There are the more obnoxious types, like this episode of Adam Ruins Everything, but there are also other, more sophisticated critics - the common denominator of all of them is that they are awfully certain they know what is [not] happening inside LLM models. I do not see many interpretability experts who would claim to know for certain that LLMs do not understand. Because who knows what is happening in this whole mess of inscrutable matrices of parameters; maybe somewhere in there is some kind of representation of the universe. We certainly do not know what is happening inside the human brain when we scan it - unless of course we use machine learning for that. But more importantly, LLMs can predict text on par with some of the top percentiles of people who understand, know, or plan. So yes, LLMs can pass tests specifically designed to test for understanding, they can produce plans on par with human planners, and so forth - but for some reason, despite all that, one can simply claim that they do not truly know or plan, because "stochastic parrot" or some such.

The more convincing argument - or the Motte, if you wish - is that LLMs do not understand, plan, etc. like humans. Which is a perfectly reasonable argument, except that they do develop certain capacities that humans, and also some animals, develop. So they are like humans in certain ways but completely alien in others. Even this is a loaded question, though, since LLMs can produce output equivalent to a human's while still not producing it the way humans do. But each new iteration of these models improves at tasks that were previously outsourced to Mechanical Turks, and the space for uniquely human application is narrowing.

Now I have to say that I do not know where this all will lead. It may very well be that the current Transformer approach will reach a certain plateau and then stop. There may be significant areas where humans remain superior, and it may even have something to do with the fact that "auto-regressive LLMs are exponentially diverging diffusion processes", as LeCun says. I do not know, but neither do these people. What I see is quite rapid growth in the capabilities of these models just from more compute.

I also think that there is something more here. For instance, after the ideological '60s and the crash-and-burn '70s came the '80s with their yuppie culture. These were free-market, hedonistic, know-it-all youth, often in love with Ayn Rand and similarly radical thought. Even then this was not the culture predominant in the general population, but it was something in the air.

Similarly, I think that wokism today has its function inside corporate governance structures. It provides executives with convenient moral cover for various things they want to do, and it offers a huge toolset to narcissists and psychopaths inside the organization. Remember, today's wokeness is often viewed as the successful result of former radicals marching through institutions such as law, education, and media, but also government administration. There is a rich tradition here, such as Dwight Waldo's preaching that there is no "value-free" governance and that we should therefore adopt governance that promotes equity.

Again, I think the mistake here is to evaluate wokeness in its own idealistic terms, or maybe in terms of alternative structures. I think wokeness should be evaluated in the practical terms of its inevitable end state, which would look more like the Chinese state today. Everything is political; everything is argued for as the implementation of one noble government goal or another. But below the surface you have a vicious struggle of elites for power inside the bureaucracy, where identifying which way the wind blows is crucial for commercial success. For a politically well-connected CEO, it is easier to squash competitors by tapping one's social network to make sure they get a low ESG score than to beat them in market competition. It does not matter whether the population wants ESG or not; what matters is what the elites want and how they compete for power. Similarly, in China it does not matter whether you provide a service people want - as soon as you are not protected politically, you end up like Jack Ma.

I think that TikTok being social media is to some extent a red herring; it should be viewed more as a strategic risk. For instance, since 2022 there has been a ban on natural gas, coal, and oil imports from Russia into the USA. Other countries in the EU went even further, as they realized that having a hostile foreign government in control of a strategic asset poses a huge security risk. Another example is the ban, in place since 2012, on Huawei being part of telecommunication infrastructure, over its known ties to the Chinese military. Another relevant example is the fact that, under FCC rules, a foreign government cannot hold a license for a US broadcast station. I see the TikTok ban in this light.

I am not a Musk fanboy, but I'll say this: Elon Musk very transparently cares about the survival of humanity as humanity, and it is deeply present down to a biological drive to reproduce his own genes. Musk openly worries about things like dropping birth rates, while also personally spotlighting his own rabbit-like reproductive efforts. Musk is clearly a guy who wants and expects his own genes to spread, last, and thrive in future generations. This is a rising-tide approach for humans. Musk has also signaled clearly against unnatural life extension.

Then he is stupid about it. Humans have around 20,000-25,000 genes. Within roughly 15 generations, a given descendant in your family tree has a low chance of carrying even a single gene of yours. What does work is creating bottlenecks: if you are a man, then killing all other men and having yourself and all the men of your family rape all the women is a good strategy to really spread the genes of your Y chromosome. Your genes will not be diluted if they are the only game in town. Now that I think about it some more, I am really scared of Musk :D
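A quick back-of-the-envelope check of that claim, under the crude model I am assuming here, where each generation halves the expected share of your genome a given descendant carries:

```python
# Back-of-envelope check of the dilution claim, under the crude model that
# each generation halves the expected fraction of your genome a descendant
# carries (ignores linkage, pedigree collapse, and chunky recombination).
GENES = 25_000  # upper end of the ~20,000-25,000 human gene count

for n in (5, 10, 15):
    expected = GENES / 2**n
    print(f"generation {n:2d}: ~{expected:8.2f} of your genes expected per descendant")

# generation 15: 25000 / 2**15 = 25000 / 32768 ≈ 0.76 - less than one gene
# on average, matching the "low chance of even a single gene" claim above.
```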

Wait, Yudkowsky doesn't believe in the scaling hypothesis? That's super interesting to me! Has he written about this?

In this specific interview Yud said something along the lines that of course scaling is capable of producing general intelligence - in the end, evolution did it blindly with the human brain, so it should be possible. He was just more sceptical about gains from more compute. Needless to say, he is less sceptical now.

The LLM doesn't understand the content of the query or its response the way you or I do. It just understands them as probabilistic sequences of tokens and its job is to predict the tokens that should come next.

This seems to me a pretty shallow account of understanding, and the same criticism can be applied to humans. According to some people, like Scott Alexander, the human brain is "just" a multi-layer prediction machine. It seems that the feeling of understanding itself is nothing special: some people on drugs like LSD feel as if they have cracked the code and now understand the whole universe and their place in it. In practice, understanding can be viewed as the ability to give correct output for a given input. We do not have access to many other methods; that is why we use tests to see whether students understand what they have learned.
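To spell out that behavioral view, a minimal sketch (the `understands` helper and the test cases are hypothetical, purely for illustration): the only thing an external test can verify is the input-output mapping, which is the same position we are in with students, LLMs, or any black box.

```python
# The behavioral view of "understanding": from the outside, a test can only
# check whether a system maps inputs to correct outputs - it is blind to
# *how* the answers are produced. `understands` is a hypothetical helper.
from typing import Callable

def understands(system: Callable[[str], str], cases: list[tuple[str, str]]) -> bool:
    return all(system(question) == answer for question, answer in cases)

arithmetic = [("2+2", "4"), ("3*5", "15")]

# A calculator "understands" arithmetic by this criterion...
print(understands(lambda q: str(eval(q)), arithmetic))        # True
# ...but so does a memorized answer key, with no mechanism behind it at all.
print(understands(dict(arithmetic).__getitem__, arithmetic))  # True
```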

Additionally, I do not think that saying "the LLM doesn't understand the content of the query or its response the way you or I do" is all that reassuring. Quite the contrary: LLMs give correct answers to a very large set of problems, and they obviously got there by a completely different route than humans. This makes them more alien, more inscrutable, and thus more dangerous in my eyes.

Practically everything he says is a non-sequitur, shallow, or saccharinely optimistic (love, beauty, barf). He gets some absolutely GREAT guests (how?)

It is exactly because of this style of his. Nowadays it is in vogue to be a cynical little edgelord, savoring the destruction of other people's optimism with infectious smug nihilism. I like Lex's saccharine optimism and his overall noncontroversial, non-adversarial style of podcasting. He does not try to trap his guests or show off his moral or intellectual superiority; he even looks silly and naive in that sense. This, I think, provides a certain level of security for his guests, letting them open up a little more than usual.

Not the OP, but he may be referring to this:

An obvious example is that (almost) every time there is a mass shooting in the US, 2nd amendment types all of a sudden become very concerned about the mental health of the nation, and proclaim it to be the fundamental cause of the problem that must be addressed before anything else changes.

I think that "2nd amendment types becoming concerned with mental health" is the issue here. The 2nd amendment types are concerned with an attack on the 2nd amendment and may just want to point out that guns are not the problem. And heck, there may even be some overlap here: having better communities, so that people feel safer and have less need to buy guns, is a goal many 2nd amendment types as well as anti-2nd-amendment types can share.

The proposed ban on TikTok annoys me although I have never used it.

People stating their annoyance on the internet, as if anybody cares what some anon is annoyed about, is annoying. The world is not here for you not to be annoyed.

Since I live in the United States, the CCP cannot do anything to me anyway so why should I care if they spy on me?

The CCP can do a great deal of damage to you. For instance, they can try to influence local politics in order to promote their agenda, which includes psyops and other clandestine operations. And the CCP does indeed influence local issues, even opening its own police stations in foreign countries, which have been used to harass or kidnap enemies of the CCP living abroad. As another example, the CCP runs various organizations under the broader umbrella of the [United Front](https://en.wikipedia.org/wiki/United_front_(China)) with the direct goal of promoting its agenda.

The CCP is a horrific government according to my value system, but Americans help them a lot more already by buying their manufactured goods than by using TikTok.

Sure, it is possible that banning TikTok can be one of the first steps toward isolating the CCP in international matters. It may even come to broader sanctions, on the level of the current status with Russia, or possibly Iran and other countries. In a sense, striking at the CCP's ability to influence politics abroad through tools like TikTok and other foreign influence campaigns is absolutely priority number one, as these things directly target the USA's ability to enact further defensive measures.

The way I see it, by all means if you find the CCP to be morally objectionable then do not buy their goods or use their services, but is this a restriction that the United States government should impose?

The CCP is morally objectionable, but so are many other governments. The difference is that, on top of that, the CCP under Xi Jinping is steadily becoming a direct adversary of the USA and of many other countries, including China's direct neighbors. That is the main reason for retaliation; banning the propaganda arm of an adversarial government is a no-brainer to me in that sense.

One piece of evidence: you use the abbreviation "CCP." That's not what they call themselves. In English, they say they're the Communist Party of China, or CPC.

This whole "CPC" business is just an attempt by the CCP to influence search results and get a kind of fresh start, trying to rebrand itself abroad. If anything, it is evidence of the CCP manipulating public opinion in the West.

I see this a lot in European cities. The actual city center is often a historical/tourist attraction where it is not very advantageous for most companies to be located. This is part of a general theme where you may see multiple "centers", often organized around certain industries, which are in turn tied to certain infrastructure or even outright industrial parks. In that sense the city center is basically a tourism industrial park, with certain related things such as government buildings and associated "businesses" that share the infrastructure - such as restaurants where lobbyists dining politicians may share the space with tourists. Then you may have other industries centered around other infrastructure, such as railways or universities, or around major highway access from a certain part of the country, depending on what supply comes from that direction.

If humanity is so fucked that we can't manage anything better than these 12 losers (despite managing an interplanetary space program!) then we deserve to go extinct.

I decided that this is actually the intended takeaway from the movie Prometheus.

The way that DEI is now, seemingly, DEIB - "B" for "belonging", because "inclusion" was not inclusive enough, it seems. This is the kind of thing that pisses people off: just when you think that at last there is something that everyone finally agrees on, the most progressive of the fringe go off and change it all and the entire argument gets reset back to the beginning all over again.

Oh, I see you are not educated enough yet. The term "belonging" means one has to take vigorous action to support the DEI initiative. Inclusion means no microaggressions: you just make sure you do not alienate protected groups and their DEI shepherds from HR. Belonging goes one step further and is implemented once the organization is committed enough. Belonging requires action on your side: being actively welcoming and supportive of DEI initiatives. So belonging means that you put on the BLM t-shirt, or participate in the Black History Month event, or rat on the colleague who was not inclusive enough because he committed a microaggression today. You see, all the people who clapped after Stalin's speech were expressing their mutual belonging; therefore, one who does not clap, or does not clap loudly enough or long enough, can even be viewed as not being inclusive. It's kind of genius, isn't it?

Which obviously raises the question: what does it mean to be a transgender woman if the transgender woman is also biologically a woman? How does she differ from non-trans biological women and is there a word to describe that difference, if it's not "biological sex"?

One way they may explain it is that saying "transgender woman" is similar to, let's say, saying "tall woman". Of course, that does not remove the problem of defining the word "woman", which, as far as I understand, is then defined metaphysically: a woman is somebody who "feels like a woman born in the wrong body". So womanhood becomes a metaphysical term, something like a soul.

I think this is nothing "unusual", at least in recent history, and I view it as part of the overall process of atomization and individualization of our society. One by one we are eroding the social structures and important institutions of our societies: family, marriage, motherhood/fatherhood, elderly care, and so forth. We now live in a world where you may have a divorced mother who used the services of a Ukrainian surrogate to give birth to her child. This "mother" is now working full time while her child is looked after by a hired nanny; her own mother is in an elderly care home in the hands of hired nurses and sees her daughter only once in a while. This woman/mother also has a man who donated sperm for the child; he is thousands of miles away, just paying child support. And this father is maybe single, or serially single, supporting his child financially, using the services of sex workers to meet his sexual needs, and participating in online spaces that have replaced the traditional "boys' clubs".

It is hard to overstate how rapid the adoption and normalization of all these changes has been when we are talking about generational experience. Even forty-something millennials are now considered dinosaurs; their experience of family, school, childhood, church, and sexuality can be considered ancient and utterly outdated. I really think people underestimate how profound the changes already baked into society are; we will see the results only in the coming decades, possibly as some new societal "epidemic".

The era of AI chatbot companions is, in my eyes, only the latest in a series of assaults on relationships. Or, to be more precise, the assault already happened when people normalized the commercialization of companionship, both in real life and through parasocial relationships via OnlyFans and similar platforms. In this sense AI companions can be viewed simply as the industrial automation of production to satisfy already-existing commercial demand for "relationships". I do not see a reason why the surrogate mother, the children's tutor/teacher, the nurse in the elderly care home, the sex worker, and even the companion/friend cannot be fully automated, packaged, and delivered as a product. All these activities are already viewed as legitimate subjects for markets to serve.

Mostly, I'm sort of just amazed that this group generally leans right and would be incredibly quick to point out the possibility of unintended consequences for vast social engineering schemes proposed by the left, but is relatively uncritical about possible unintended consequences for vast social engineering schemes proposed to increase fertility.

As others have pointed out, these things are pointed out a lot, and have been for quite some time. "Welfare queen" is a term the right has used since the Great Society program was enacted, as a description of the abuse of these types of policies. I also think there is growing opposition to what amounts to the "nationalization" of the basic roles of the family, the so-called "cradle to grave" welfare system. In this system the atomized individual is basically just a source of taxes, and everything else is taken care of by the state: child rearing through kindergartens and the school system, healthcare, and the nationalized pension system. I think this system is unsustainable because it is full of perverse incentives, has principal-agent problems, and is prone to corruption and gaming. It goes against the very basic idea of subsidiarity: instead of families, who are close to the problem, you offer a centralized solution at the national level. It is a great system for population control, but at the expense of basic human needs ingrained in each and every one of us, to say nothing of its efficiency in its own supposed terms.

I am less enthusiastic about your and the OP's explanation of all this as a race struggle. In fact, the Chinese are one of the few nations where different types of societies exist with arguably the same culture/history: that of mainland China vs. that of Singapore, Hong Kong, and especially Taiwan. Other examples of similar natural experiments are North/South Korea and East/West Germany.

Not everything is inevitable and racially/culturally conditioned to run in the exact same way.

I feel it's perfectly valid to say that Gender Dysphoria exists as a medical diagnosis and yet trans people exist as a social construct aimed at addressing/ameliorating said social construct.

I agree. To use another analogy: "Bob thinks he is Jesus Christ" can be grounds for a valid medical diagnosis of "personality dysphoria" or whatnot. However, debating the nature of Jesus Christ in this context is useless.

This is just debating the meaning of the word "exist". Sherlock Holmes does not exist, because he is a made-up fictional character. On the other hand, Sherlock Holmes does exist as a made-up fictional character, with his own books, movies, and TV shows, and over a century of impact on culture.

So in the end I think you and the OP want to convey the same thing. Trans people exist as a social construct, as the manifestation of a certain cultural process. Of course you can debate the impact trans people have on culture and so forth, but the OP does not think it merits a deeper discussion about the nature of "trans" as a real category. Another analogy would be debating the existence of God: it is one thing to accept God as a social construct and study the impact of this cultural meme on real history, and another thing to seriously debate the true nature of God - as if he were real in the same sense that you are real - and whether he is one person or three persons in reality, etc. This could be considered a category error, given that many people start from the premise that God does not exist in that way.