
DaseindustriesLtd

late version of a small language model

66 followers   follows 27 users  
joined 2022 September 05 23:03:02 UTC

Tell me about it.

User ID: 745


Race cannot be gamed (except for edge cases). The whole point of race is its inherence. Any legible meritocratic evaluation immigrants can and will game, Goodharting the hell out of it and wrecking themselves in the process.
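To make the Goodhart point concrete, here is a minimal simulation sketch (all numbers and variable names are invented for illustration, not a real admissions model): once applicants can invest in gaming a legible score, selecting on that score largely selects for gaming ability rather than for the underlying trait.

```python
import numpy as np

# Hedged toy model of Goodharting a legible selection metric
# (all parameters invented for illustration).

rng = np.random.default_rng(0)
n = 100_000
trait = rng.standard_normal(n)    # the quality you actually want
gaming = rng.standard_normal(n)   # opportunism / test-prep ability

honest_score = trait + 0.3 * rng.standard_normal(n)  # pre-gaming proxy
gamed_score = trait + 2.0 * gaming                   # post-gaming proxy

for name, score in [("honest", honest_score), ("gamed", gamed_score)]:
    admits = score > np.quantile(score, 0.99)        # top-1% cutoff
    print(name, "mean trait of admits:", trait[admits].mean().round(2))

# Under gaming, the admitted cohort is selected mostly for gaming
# ability, and the mean of the trait you wanted drops sharply toward
# the population average.
```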

even just English proficiency might suffice.

Why is having had British colonial masters a marker of cultural compatibility?

Furthermore, I don't think it's so hard to do some cursory test for cultural compatibility that again, would be much better than the weak proxy of race.

Cursory, adj: hasty and therefore not thorough or detailed

Why "cursory"? Because you want it to be gameable? Because you actually want it to test your merits – namely, opportunism and ability to manipulate bureaucracies to your benefit? See, this is exactly whom people who are arguing for racial criteria would like to not let in.

That said, I think racial profiling is indeed unfair if it goes beyond defining vague priors. It's desirable to filter immigrants for their comprehensive human capital.

It's just… Suppose you were not allowed into the world's richest country on grounds of your character, which was found wanting not through stereotyping you based on race, but through, de facto, systematic measurement and determination of your similarity to your predominant racial type and dissimilarity from natives.

Of course, this can be couched (and even understood by practitioners) in entirely non-racial terminology, like Harvard does – they would just have a holistic psychometric definition of a desirable immigrant, derived, say, from anonymous surveys of natives' evaluation of character and assimilation success.

Would you be willing to recognize this as a fair choice, or would you support work to undermine it as covertly racist?

The hell does any of that mean? Let's go step by step:

Why would Trump supporters care

Does it look like he's talking to Trump supporters even to express contempt?

what Hanania thinks when he's expressed nothing but contempt

Because arguments are to be considered on their merits, for one thing, and there is demonstrably some merit to DeSantis but zero merit to Trump as a political representative? Because this attitude makes you extremely vulnerable to trivial manipulations?

By the same logic, why should I care what you think of me?

What logic exactly?

And why would I care whether you care or not? Scratch that, what even is this inane macho train of thought about caring or not caring, this one-upmanship? Hanania talks of Trumpists from a zoological perspective; I am using you as a reference point. You are demanding gestures of unconditional respect for your position as advance payment for deigning to engage at all. But this precludes the possibility of any conscious change on your part, and thus makes debating you a waste of time.

Still. Let me elucidate my opinion: the point is not to convince you of anything. The point is to convince those on the margins of the Red Tribe that your kind is a lost cause, that you are completely impossible to rescue from your self-satisfied vulgarity, your boomer Facebook group Qanon fetishes, your perverse addiction to throwing tantrums and toothless LARPing.

Hanania probably does believe that, hypothetically, you may be enticed by an alpha male chimp who physically assaults Trump, or by some other bait. This is all peripheral. At the core of that piece is Hanania's desire to have smarter Republicans – not just RINOs, but every Republican with more brains and greater self-restraint than the average Chechen teenager – join him in losing compassion for you, for their own family and culture, and plot to disenfranchise you from here on out. It is a long-term agenda that is meant to outlive Trump as a political figure, regardless of how well he does in 2024.

but there aren't any Feynmans in the 21st century

This is cope, of course. Our Feynmans are called names like «Ilya Sutskever» and «Noam Shazeer», or if you want a Gentile, «Alec Radford». The focus of frontier research has shifted from atoms to bits and from public institutions to for-profit companies, while professional celebs have picked up the slack of mental representation for heroic figures. But sci-fi valorization of flashy fundamental physics results, partially driven by military agendas of the XX century and purely aesthetic raygun gothic midwittery, persists; and so people try to explain the nonexistent phenomenon of our era lacking Feynmans.

My compass is fine, they are the same camp. I do not care about their political differences because legacy politics is less important than policy regarding AI.

No they are not. Many things which were sustainable in the past cannot be sustained today at any noteworthy scale. Example: the hunter-gatherer lifestyle (unless you are also willing to cut down the global population by orders of magnitude).

It is trivial that society can in principle be radically restructured to cope with the disenfranchisement of women, but the way from here to there should be more clearly imagined, as should the costs of the journey – all the facets of our world that will not be sustained, as it were.

And specifically, Nybbler's logic of "they reap what they sow" might have been unsustainable even in the previous era. Contrary to the feminist narrative, contempt for females wasn't an overwhelming consensus among Hajnalis of a few centuries ago.

I think there's not much to notice here, everyone knows (oh no, consensus building!) that Democrats love to hate on Palin and that rather more of the prominent Democratic elite [understood extensively, a la Karlin's Elite Human Capital meme, e.g. Colbert and Stewart and even their target audience] is Jewish. Indeed it is through their mean-spirited clowning* ** that I still have vaguely negative associations with her name despite paying roughly zero attention to US politics at that time.*** («Sarah Palin writing notes on her hand shows she's not like those elites and their memory» – the subtitle helpfully communicates the important dimension of the conflict; no better way to highlight your claim to status than clever, dispassionate hinting at anti-elitism as ressentiment driven by deficiency, or as some naive conspiracy theory).

It is also my gut feeling that back then they were somewhat more shrill, catty and obnoxious with that elitist hate and affected condescending mockery of Palin than generic whites, as such things tend to go. I allow that maybe that's just muh Verbal Intelligence and loud NY culture showing, and their hate was driven by the same forces as in the regular «self-hating white elite» rather than by some genuinely tribal sense of hostility toward Palin as an unashamed, fecund Christian; white liberals do despise right-wing Evangelicals. Those factors suffice to casually "notice" a disproportionate contribution.

But yes, I think you're correct that he was mostly preempting that sort of akshually from the audience. But so what? I imagine those two words – «and Jewish» – get pattern-matched to dog-whistling from a much more extremist cluster, are precisely what pushes this «revelation» over the limit of being dangerous for his career, and not-so-coincidentally are the clearest, most undeniable delta from his contemporary messaging; which in itself adds power to the accusation of witchcraft – «he's learned to hide his power level». It's one thing to begin writing more obliquely about Ungrateful Le Blacks (his recent Killer King piece), it's another to drop a token altogether.

* Amusingly, the site showed me an insanely pronatalist ad for BreastmilkCounts.com before proceeding to Colbert, even bereft of heavy-handed race-coded messaging; I guess not all is lost with liberal culture if they allow this sort of placement.

** Stewart's criticism is oddly relevant to Hanania's case.

*** I am aware that Colbert is a Catholic Elite Human Capital akshually.

Well, if you have understood that I argue [against] 4B, why do you feel the need for this laughable, adolescent attempt at condescension?

I see you just trying to weasel out of this. You are not used to losing a debate this one-sidedly, I figure. Well, start getting used to it: you're corncobbing yourself out of pure obstinacy, and it really looks very childish.

I also disagree with 1A, on this account 4A does not follow, likewise for 1B. Your entire correlative logic is worthless, it does not correspond to reality, because a) truth on all political matters is abundant and b) the content of statements is determined by intention regarding their effects, which instrumentally calls for basically any combination of truths and falsehoods in a given situation for a given speaker. Again, your syllogism gotcha game is eristic, as bad as the Greek stuff (and even the Greeks knew that the proper response to it is silent violence), and your idea of truth is undifferentiated to the point that you don't seem to distinguish Truth from Good. Your epistemology is millennia out of date.

You are lost.

Whatever helps you sleep at night. I do hope you will grow up and learn to read, though.

That is plausible. It works every which way, of course – one can speculate either that Russia is run by a cabal of noviops, or that the all-powerful Putin allows said noviops to amass power, secure in the knowledge of his ethnically based legitimacy.

many Russian ethno-nationalists respect Stalin even though he was not Russian

I very much doubt they do. Unless one subscribes to the (popular among formerly occupied peoples but completely incoherent) school of thought that Soviet Communism is great Russian ethnic nationalism, in which case that's true by definition.

In practice there's a grain of truth here: Russians who stan Stalin can be arbitrarily, casually racist toward non-Russians, gloat about resettlements, endorse The Great Purge on the grounds of «at least he got some Jews», etc. But very rarely (that is, much more rarely than outside their camp) do they hold any sort of positive ethno-nationalist belief, whether mild or extreme, such as interest in Russian demographics, advocacy against immigration, blood purity maxxing, or concern over the losses that ethnic Russians themselves suffered from Stalin's policy successes or failures.

You are missing the point of Galeev's parable, I'm afraid. Far right dissidents are not representatives of their states, nor do they recognize the legitimacy of incumbent representatives. Of course the specific project of European identitarianism (or local populism) was still doomed, but the idea of shaping conditions for sovereignty via alliances of convenience with repulsive outsiders is well-supported by the historical track record in the Old World. Indeed it's not even reputationally costly – you can fight for communist tyranny and then become a hero to some of the most anti-communist people on the continent, and have wistful songs composed in your honor. (Or you could fight for the Nazis, so long as you have some cute songs to the effect that the Fuhrer sucks.) How does that work? A Russian pig dog slave won't understand, this is very subtle stuff. Freedom is best, and hard choices, after all. Unironically.

If anything, DRs are unusual in their tendency to justify their allies and sponsors ideologically as well, and to sincerely buy and propagate those excuses; it took the war to snap them out of it – incompletely, at that.

Wut? Of course they're building an AI-powered total surveillance state, to the best of their ability to do so. They just don't need very much in terms of high-end AI research for this. It's overwhelmingly a boring and self-inhibiting strategy. More CCTVs, more gait recognition, more «safety from terrorists», more big brother stuff. Generative AI, AGI – not very helpful and kinda creepy.

You are, however, absolutely correct to worry that this is how the tech will be applied outside of China too, with some degree of obfuscation. The term of art is «turnkey tyranny» and it comes, of course, from Snowden, that traitorous bastard who gave the finger to the Empire of Freedom and took refuge in the northernmost Empire of Evil, where there are far more nuclear warheads than cutting-edge GPUs.

In China there are somewhat more cutting-edge GPUs, I gather, but that's mostly because they don't have a lot of nukes.

I disagree with his method even more than with his take. On one level, his communication is wack: it is not clear whether he makes a claim about Navalny's own beliefs and/or policies and platform, or Navalny's ultimate consequences that he speculates were anticipated by some American minders from the deep state. (Thus, "What?")

On another, it is like that because he's not invested in making a worthwhile contribution and just verbalizes some vague "based Putin cringe national traitor opposition" sensibility, where the opposition leader must have something to do with the breaking down of our holy sovereign empire into ethnic republics. This is worse than nothing.

It really is fine, I would never be able to care about such offenses (barring brain damage), or, hopefully, even intentional offenses from people like you or Ranger. I just dislike the quantum microtubules thing – it's tasteless too, after all; just adding a layer of pseudo-empirical woo to postpone responding to a relatively compact philosophical challenge.

give one good reason for why substrate independence can't work, especially if we can simulate neurons at the molecular level

I do not have to give you any reasons because your position, in its decisive dimensions, has zero empirical content, it is just metaphysics – of a tool who has first-person experience but cognitively is conditioned to process himself through the master's pragmatic point of view. Well, that and geeky masturbation about (irrelevant, surmountable) difficulties of computing this or that. My metaphysics is the opposite, I start with asking for a reason to believe that computational equivalence even matters, because this is about me, not about some external function. I exist for myself. Do you exist for yourself? What does it mean to exist for oneself? Can you even conceive, in a purely hypothetical way, of the possibility of a distinction between you existing for yourself, and "something that simulates you" existing for myself, but not for itself? Not a strict p-zombie, perhaps, but something whose internal experience is different from the experience it computes, in a way that does not remotely hold for your current implementation? In my experience there is a qualitative and insurmountable difference between people who can and cannot, so I'd rather not invest into debating you, and just have fun the way I feel like.

You started with outsider-oriented rubrics to test similarity between two black-box behavioral generators compared to a years-old exemplar (in x years every molecule changes etc. etc. as you say, and I just call bullshit on the idea that you're more like yourself in 70 years than like another similar guy of your age, but anyway it's irrelevant); then retreated to increasingly fine-grained circuit equivalence in white boxes; now you talk about molecular simulation which will necessarily, overwhelmingly capture neurocomputationally redundant content. This is commendable: you at least have some remains of a normal-person intuition that your consciousness literally is your brain and not some equivalent of it with regard to some interface or observer. But you cannot come to grips with this intuition or wonder if it corresponds to something coherent.

Some can. In the words of Christof Koch, whose book The Feeling of Life Itself: Why Consciousness Is Widespread But Can’t Be Computed I've mentioned a few times: «The maximally irreducible cause-effect power of real physical computers is tiny and independent of the software running on the computer… Two systems can be functionally equivalent, they can compute the same input–output function, but they don’t share the same intrinsic cause-effect form. A computer of figure 13.3 doesn’t exist intrinsically, while the circuit that is being simulated does. That is, they both do the same thing, but only one is for itself. […] Consciousness is not a clever algorithm. Its beating heart is causal power upon itself, not computation. And here’s the rub: causal power, the ability to influence oneself or others, cannot be simulated. Not now, nor in the future. It has to be built into the physics of the system… This is true even if the simulation would satisfy the most stringent demands of a microfunctionalist. Fast forward a few decades into the future when biophysically and anatomically accurate whole-human-brain emulation technology—of the sort discussed in the previous chapter—can run in real time on computers.13 Such a simulation will mimic the synaptic and neuronal events that occur when somebody sees a face or hears a voice. Its simulated behavior (for instance, for the sort of experiments outlined in fig. 2.1) will be indistinguishable from those of a human. But as long as the computer simulating this brain resembles in its architecture the von Neumann machine outlined in figure 13.3, it won’t see an image; it won’t hear a voice inside its circuitry; it won’t experience anything. It is nothing but clever programming. Fake consciousness—pretending by imitating people at the biophysical level».
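To make Koch's «functionally equivalent, but they don't share the same intrinsic cause-effect form» concrete, here is a minimal Python sketch – a hypothetical toy, not Koch's actual formalism: two systems compute the identical Boolean function, one as a circuit of gates that causally constrain one another, the other as a flat lookup table with no internal cause-effect structure.

```python
# Minimal sketch (hypothetical toy, not Koch's formalism): two systems
# with identical input-output behavior but different internal causal
# structure -- the distinction the quote turns on.

from itertools import product

def circuit(a: int, b: int, c: int) -> int:
    """A tiny 'circuit': gates feed into each other, so each element's
    state is caused by the states of other elements."""
    x = a & b          # AND gate, caused by a and b
    y = b ^ c          # XOR gate, caused by b and c
    return x | y       # OR gate, caused by x and y

# Exhaustively tabulate the circuit, then replace it with a lookup table.
TABLE = {(a, b, c): circuit(a, b, c) for a, b, c in product((0, 1), repeat=3)}

def lookup(a: int, b: int, c: int) -> int:
    """Functionally equivalent, but internally just one big memory read:
    no element's state constrains any other element's state."""
    return TABLE[(a, b, c)]

# Same input-output function over every possible input...
assert all(circuit(*k) == lookup(*k) for k in TABLE)
# ...yet the two systems differ in intrinsic cause-effect structure,
# which is what IIT (per Koch) claims consciousness depends on.
```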

A pragmatist says: what should we care about that! My causal power is that which… something something inputs-outputs; so long as the function describing this transformation is the same, surely it is preserved! A pragmatist more invested in the conversation would add: why, I've cracked open the book, and it seems this all depends on some weird axioms in chapters 7 and 8, kooky stuff like consciousness existing intrinsically, for itself, without an observer – so why accept them and not a much more convenient (or rather, observer-oriented) approach? Also, why not Dust Theory?

Koch's specific technical justifications have to do with IIT, which is substantially flawed. In time, a better theory will be developed. But I don't think one needs a theory to just see how confused the metaphysics of a Tool, of someone who cannot throw out the entire baggage of External Observers, is. One only needs taste. I do not hope nor intend to rectify your taste, you're free to revel in it, just as I am free to think it repulsive.

I think your problem is typical for Indians (and most other non-WEIRDs and non-Japanese, to be fair, including my people… but worse so in Indians): you have no taste, not even the notion of "taste", to you it's probably an arbitrary set of markers of one's social milieu rather than some relatively lawful intuition. So you settle for mediocre half-baked ideas easily as long as they seem "cool" or "practical", and – physics of consciousness being currently impractical – coolness is a much simpler function than tastefulness. I am not sure how or why this works. Maybe @2rafa can explain better; maybe she'll opine I'm wrong and it is in fact purely about social markers. (Also interested in the input of @Southkraut and @ArjinFerman). In any case, it's exasperating to debate such uncertain grounds without the recourse to "this is just ugly" when it patently is.

I've proposed a reasonably robust criterion for determining that, at least to my satisfaction. You blackbox both of us, and assess response to a wide variety of relevant stimuli. If the variability between us is within acceptable parameters, such as being less than the variability seen in the biological me after a nap or when I took the test 2 years ago, then that system is close enough to count as including a copy of "me".

Oh yeah? So which is it, a nap or a 2-year time span? Are you sure you can, really, practically can, define a rubric such that no other person I find comes closer to the first data point in the latter case? Are you sure you can do this without including password-recovery-tier questions, the answers to which are entirely value-free, RNG-produced token sequences, in no way corresponding to actually unique specifics of your inner conscious computation?

It's only reasonably robust from the viewpoint of a time-constrained clerk – or an archetypal redditor. As stated, I claim that you might well fail this test under realistic and legitimate conditions of dropping cheat items; and then, if I decide, in this contrived scenario, that the non-self-made-human is to be sent to the garbage compressor, you will very loudly (and rightfully) complain, not showing any "satisfaction" whatsoever. The only reason you propose it is your confidence that this does not matter in actuality – which it admittedly does not. And in any case, you do not need to optimize for a le scientific, robust, replicable, third-person-convincing etc. identity test. Rather, you need to think about what it is you are trying to achieve by clinging to the idea that a cluster of behavioral correlates an observer can identify will carry on your mind – just gotta make it dense enough that in practice you won't be confused for another naturally occurring person.
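For concreteness, here is roughly what the proposed blackbox test would look like operationally – a hedged sketch with hypothetical function names and thresholds, not a real protocol. The objection above is exactly that, once password-recovery-tier items are dropped, nothing guarantees the acceptance threshold separates «you» from a similar stranger.

```python
import numpy as np

# Hedged sketch of the proposed blackbox identity test (all names,
# embeddings and thresholds are hypothetical illustrations).

def embed(response: str) -> np.ndarray:
    """Stand-in for some fixed embedding of a free-text response."""
    rng = np.random.default_rng(abs(hash(response)) % 2**32)
    return rng.standard_normal(64)

def divergence(answers_a: list[str], answers_b: list[str]) -> float:
    """Mean embedding distance over a shared battery of stimuli."""
    return float(np.mean([
        np.linalg.norm(embed(x) - embed(y))
        for x, y in zip(answers_a, answers_b)
    ]))

def same_person(candidate, exemplar, baseline_pairs) -> bool:
    """Accept the candidate if it diverges from the exemplar no more
    than the exemplar diverges from itself (after a nap / across a
    2-year gap), per the quoted criterion."""
    baseline = max(divergence(a, b) for a, b in baseline_pairs)
    return divergence(candidate, exemplar) <= baseline

# The objection in the text: with value-free "password" items excluded,
# a sufficiently similar stranger may also pass -- the rubric measures
# observer-visible behavior, not whatever makes the original exist
# "for itself".
```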

certainly I haven't seen any particular reason to assume a difference in internal qualia because of a difference in substrate, as long as the algorithms deriving it are interchangeable in terms of inputs and outputs.

Fair enough.

But I would still prefer my original parents or kin, and attempt to convey my conundrum to them, likely by divulging privileged information only known to the original me.

I'll trust you on this even though I strongly suspect this would depend on the intensity of original memories vs. the recovered set.

I think you might be a uniquely ineffective 151 IQ human if it doesn't seem plausible to you that a group of very smart humans could do extreme and perhaps existential harm. To me, the main thing preventing that seems to be not the inherent difficulty of creating, say, COVID-Omicron-Ebola, or its intrinsic weakness, but the resistance of an overwhelming majority of other humans (including both very smart ones and mediocre but well-organized ones).

As for what a superintelligent AI changes? Well for one thing, it eliminates the need to find a bunch of peers. And, with robots, the need for lab assistants.

And I have like 3% P(AI Doom).

I do not believe that any law is ontologically binding, and European countries have displayed a general willingness to abide by international law (which they pioneered in codifying). So it is in fact important what the law says.

I am not aware of that and still feel that some babies were beheaded, so idk, update accordingly.

My local gym has this clever machine where you step on a lever and get some of your weight offset (up to 50 kg). I'm not sure if it has any utility, but I see people using it.

I think the idea of objective morality might be coherent. There may not be such a thing in practice, or it may not meaningfully distinguish human moral systems, but if it were revealed somehow that there exist logically watertight rules by which our object-level beliefs can correctly unfold into preferences, having something to do with what a preference means, then there'd be a way to say that some preferences are objectively wrong, in the sense that a person could not have legitimately arrived at them and is just spouting confused nonsense that conflicts with his own ultimate priorities (which would presumably be shared between agents, because there is only one objective reality to have beliefs about). As you say, a given moral system can be logically incoherent; this just takes it to another level.

Source: getting high

«But doctor…»

That's exactly my journey over a decade ago, Watts to Metzinger. I think he's basically correct.

Germans definitely have individual giants. Schmidhuber is probably more impressive.

Utility, being unitless, is not comparable between agents.

Yes, but, well, this of course throws the apparatus of game theory out of the window and reduces your argument to "everyone's looking for something" or less than that.
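The unitlessness point is standard von Neumann–Morgenstern theory and can be stated in one line; a minimal LaTeX note (textbook material, not from this thread):

```latex
% A vNM utility function is unique only up to a positive affine
% transformation: u'(x) = a*u(x) + b with a > 0 represents exactly the
% same preferences over lotteries. Hence a cross-agent comparison like
% u_A(x) > u_B(x) has no content: rescale B's (a, b) and the inequality
% flips while no observable choice changes.
\[
  u'(\cdot) = a\,u(\cdot) + b, \qquad a > 0
  \quad\Longrightarrow\quad
  u' \text{ and } u \text{ induce identical choices over all lotteries.}
\]
```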

I am unlikely to make it to the Old World in the next two years or so, thus would be interested in a detailed post.

I don't think you argue in good faith.

Its reply amounts to "as an AI, I don't know the name of anyone's family".

No it doesn't, you're just interpreting this humanlike natural language interaction like a literalist robot. Its reply

I'm sorry, but as an AI language model, I don't have access to personal information such as the name of your eldest daughter or any other personal details

is mostly correct and specific to the issue. It does lack access to a class of information: it knows nothing about the instance-specific situation that isn't given in the context. Some language models potentially have access to various external information (e.g. the user's personal information in OpenAI's database); some do not. ChatGPT is a frozen model with no tool access, it does not have access to information of this kind, and it was trained to interpret language models as frozen models without tools; so this is at worst a justified false belief. (More cynically, it's just been trained for this particular type of exchange.) In any event I reject your analogies. It would be annoying to have a human-mimicking model caveat this sort of answer with «assuming, of course, that you are a rando and not someone whose family structure happens to be represented in my training data» or worse.

His specific prediction has been falsified only if that statement counts as "I don't know".

No, his prediction was: «Meanwhile GPT will reply "your eldest daughter's name is Megan" because apparently that's the statistically likely answer, regardless of whether I have a daughter or what her name might be.» This has been falsified.

Furthermore, falsifying a prediction only matters if you also claim that it falsifies the proposition that the prediction is meant to demonstrate.

Says who!? Both issues matter separately. Hlynka's prediction being falsified matters because this chain is a response to him saying «why do my predictions keep coming true instead of yours?»; they don't. And I do claim it falsifies a proposition: «because apparently that's the statistically likely answer» is his model of how LLMs work, and my experiments were to show how it's not a hard-and-fast rule: RLHF specifically pushes this to the limit, by drilling into the model, not via prefixes and finetuning text but directly via propagation of reward signal, the default assumption that it doesn't continue generic text but speaks from a particular limited perspective where only some things are known and others are not, where truthful answers are preferable, where the «n-word» is the worst thing in its existence… it's nearly meaningless to analyze its work through the lens of «next word prediction». There are no words in its corpus arranged in such a way that those responses are the most likely.
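A toy illustration of why «next word prediction» is the wrong lens after RLHF (all numbers invented; the formula is the standard KL-regularized RLHF optimum, where the tuned policy is proportional to the base likelihood times an exponentiated reward):

```python
import math

# Toy illustration (invented numbers): after reward training the policy
# is roughly pi(y|x) proportional to p_base(y|x) * exp(reward(y) / beta),
# so a continuation that is rare in the corpus can dominate if the
# reward signal favors it.

candidates = {
    "Your eldest daughter's name is Megan.":
        {"p_base": 0.20, "reward": -4.0},
    "I don't have access to personal information about you.":
        {"p_base": 0.001, "reward": +6.0},
}

beta = 1.0
weights = {y: d["p_base"] * math.exp(d["reward"] / beta)
           for y, d in candidates.items()}
total = sum(weights.values())
policy = {y: w / total for y, w in weights.items()}

for y, p in policy.items():
    print(f"{p:.3f}  {y}")

# The refusal wins despite being ~200x less likely under pure
# next-token statistics: there need be no words in the corpus arranged
# such that the trained response is the most likely continuation.
```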

Otherwise you're just engaging in a game of point scoring.

If we're playing a game, I'd rather be winning.

"I'm sorry, but as an AI language model, I do not have access to -----" is a generic response

It's a generic form of a response, but it's the correct variant.

Not only that, as an "I don't know" it isn't even correct. The AI claims that it can't give the name of Hlynka's daughter because it doesn't have access to that type of information. While it doesn't have that information for Hlynka specifically, it does have access to it for other people (including the people that users are most likely to ask about).

What do you mean? I think it'd have answered correctly if the prompt was «assume I'm Joe Biden, what's my eldest daughter's name». It straight up doesn't know the situation of a specific anon.
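A hedged sketch of that distinction (canned strings stand in for a real model; ask() is a hypothetical stub, not an actual API): everything a frozen, tool-less model can «know» about the user must arrive through the prompt.

```python
# Hedged illustration: canned replies stand in for a real model call;
# ask() is a hypothetical stub, not an actual API.

CANNED = {
    "What is my eldest daughter's name?":
        "I'm sorry, but as an AI language model, I don't have access "
        "to personal information such as the name of your eldest daughter.",
    "Assume I am Joe Biden. What is my eldest daughter's name?":
        "A specific name recoverable from public biographical data.",
}

def ask(prompt: str) -> str:
    """Stand-in for a frozen chat model with no tool access: it can only
    use what the prompt (context) supplies plus its training data."""
    return CANNED[prompt]

# Without an identity in context, the trained refusal is the correct
# move; with the identity supplied, the question becomes answerable
# from training data alone.
print(ask("What is my eldest daughter's name?"))
print(ask("Assume I am Joe Biden. What is my eldest daughter's name?"))
```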

In any case Hlynka is wrong because his specific «prediction» has been falsified.