DaseindustriesLtd

late version of a small language model

65 followers   follows 27 users   joined 2022 September 05 23:03:02 UTC

Tell me about it.

User ID: 745

The specific way India sucks even among very low-income nations, as seen through Western eyes, has little to do with poverty (there are very poor places in the world indeed) – and everything to do with its spiritual pollution: the lack of taste and disgust that takes root in your religious iconography and fully generalizes to contemporary ideas and beliefs. The physical squalor you can buy your way out of; the rest you will happily elevate into prestige. You are blind to the non-materialist dimension of the suckiness. No, I will not elaborate.

Did your English fail you? Or is this some subtler issue – a failure to assimilate into the society and morality of Earthlings after leaving your alt-historical, non-tribalist India?

It's an outpost of my civilization, organized on principles I agree with, inhabited by people I could live at ease with. They defend themselves, their borders, their interests, with the kind of vigor and thick-skinned determination I'd like to see my nation display. (If only!) I admire them and wish them well.

John Derbyshire most likely cannot live in Israel, or among a representative sample of Israelis, any more than he can live among the immigrants to the West whom he hates so much.

I increasingly suspect that the only correct decision for any sane person is converting to Judaism or at the very least relinquishing any claim to being white, because Christian whites are just brain-damaged and cannot tell a universalist ideology (even "nationalism") from a population's game-theoretically advantageous modus operandi. Strong vibe of round-headed Slavic Hitlerists.

It is absurd to assume that a more competent entity sharing your material interests is your ally rather than a competitor.

Can you say that you don't know, in enough detail, how a transformer (and the whole modern training pipeline) works, and thus can't really know whether it knows anything in a meaningful way? Because I'm pretty sure (then again, I may be wrong too…) you don't know for certain, yet this doesn't stop you from having a strong opinion. Accurate calibration of confidence is almost as hard as positive knowledge because, well, unknown unknowns can affect all known bits, including values for known unknowns and their salience. It's a problem for humans and LLMs in comparable measure, and our substrate differences don't shed much light on which party has it inherently harder. Whether LLMs can develop a structure that amounts to the meta-knowledge necessary for calibration, and not just perform well due to being trained on relevant data, is not something that can just be intuited from high-level priors like "AI returns the most likely token".

What does it mean to know anything? What distinguishes a model that knows what it knows from one that doesn't? This is a topic of ongoing research. E.g. the Anthropic paper Language Models (Mostly) Know What They Know concludes:

We find that language models can easily learn to perform well at evaluating P(IK), the probability that they know the answer to a question, on a given distribution… In almost all cases self-evaluation performance improves with model size, and for our 52B models answers labeled with P(True) > 50% are far more likely to be correct as compared to generic responses…
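
That claim is easy to operationalize. Here's a minimal sketch of the check (mine, not the paper's evaluation code): bucket the model's self-reported P(True) values, compare average confidence to average accuracy per bucket, and summarize the gap as expected calibration error. The simulated data at the bottom is an assumption standing in for real (question, self-evaluation, correctness) triples.

```python
# Calibration check: does self-reported P(True) match empirical accuracy?
import numpy as np

def calibration_report(p_true, correct, n_bins=10):
    """p_true: self-evaluated P(True) per answer; correct: 0/1 per answer."""
    p_true = np.asarray(p_true)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (p_true >= lo) & ((p_true < hi) if hi < 1.0 else (p_true <= hi))
        if not mask.any():
            continue
        conf, acc = p_true[mask].mean(), correct[mask].mean()
        ece += mask.mean() * abs(conf - acc)   # bin weight x calibration gap
        print(f"[{lo:.1f}, {hi:.1f}): n={int(mask.sum()):4d}  "
              f"conf={conf:.2f}  acc={acc:.2f}")
    print(f"ECE = {ece:.3f}  (0 = perfectly calibrated)")

# Toy usage with a simulated well-calibrated model: answers with P(True) > 0.5
# come out correct far more often than not, the paper's headline observation.
rng = np.random.default_rng(0)
p = rng.uniform(size=1000)
calibration_report(p, rng.uniform(size=1000) < p)
```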

GPT-4, interestingly, is decently calibrated out of the box, but then it gets brain-damaged by RLHF. Hlynka, on the other hand, is poorly calibrated, and therefore overestimates his ability to predict whether ChatGPT will hallucinate or reasonably admit ignorance on a given topic.

Also, we can distinguish between the activations for generic output and those for output that the model internally evaluates as bullshit.
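
If that sounds like magic, it's the most pedestrian technique imaginable: extract hidden-state vectors and fit a linear probe on them. A minimal sketch below, with random vectors standing in for real activations (an assumption – in practice you'd take, say, one residual-stream vector per statement from an open model):

```python
# Linear "truth probe" sketch: classify statements by the model's internal
# evaluation using only its activations. Synthetic data stands in for real
# hidden states; the planted `direction` mimics a linear "truth direction".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
d, n = 512, 2000                       # activation width, number of statements
direction = rng.normal(size=d)         # hypothetical truth direction
labels = rng.integers(0, 2, size=n)    # 1 = model treats statement as true
acts = rng.normal(size=(n, d)) + 0.2 * np.outer(2 * labels - 1, direction)

X_tr, X_te, y_tr, y_te = train_test_split(acts, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"probe accuracy: {probe.score(X_te, y_te):.2f}")  # ~1.0 on this toy data
```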

John Schulman probably understands Transformers better than either of us, so I defer to him. His idea of their internals, expressed in his recent talk on RL and Truthfulness, is basically that they develop a knowledge graph and a toolset for operations over that graph; this architecture is sufficient to eventually get good at hedging and expressing uncertainty. His proposal for getting there is, unsurprisingly, to use RL in a more precise manner: rewarding correct answers, rewarding correct hedges somewhat, harshly punishing errors, and giving zero reward for admissions of ignorance.
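
In case that scheme sounds hand-wavy, here's its shape in code – a minimal sketch, with reward values that are my illustrative guesses rather than numbers from the talk:

```python
# Sketch of a Schulman-style truthfulness reward. The specific magnitudes are
# assumptions for illustration; only the ordering (correct > hedge > IDK > wrong)
# and the zero point at "I don't know" come from the proposal described above.
def truthfulness_reward(answer_kind: str) -> float:
    """answer_kind: one of 'correct', 'correct_hedge', 'idk', 'wrong'."""
    return {
        "correct": 1.0,        # confident and right: full reward
        "correct_hedge": 0.5,  # right, with appropriate uncertainty: partial reward
        "idk": 0.0,            # admission of ignorance: neutral, never punished
        "wrong": -4.0,         # confident and wrong: harshly punished
    }[answer_kind]

# The asymmetry is the point: with these numbers, guessing beats abstaining only
# when the model's own probability of being right exceeds 0.8, since
# p * 1.0 + (1 - p) * (-4.0) > 0 implies p > 0.8.
```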

I suppose we'll see how it goes.

To put it bluntly, I do not feel like pandering to misogynistic copes of people like Aaronson, who imagine themselves "romanceless" or "nice guys" rather than unsettling, mentally unwell, pathetically unmanly and, yes, plain ugly nerds. He is a loser. But on the whole, less of a loser than a penniless drunk conscript who'll get his dick blown off by a Ukrainian suicide drone.

I can only congratulate him for making it to a safe environment and finding a woman who looks past those biological drawbacks and loves him for who he is: a high-IQ prosocial academic with a badly coordinated, potbellied body of a paranoid bullying victim attached.

Oh come on, this is more American whining. Muh deaths from overdoses, muh Russian election meddling, little old us assaulted on all fronts, won't somebody please spare a thought for the poor hegemon.

The CHIPS Act has been about pork and the usual fighting over the spoils from the beginning; its success or failure is of no consequence. China was summarily cut off from modern semiconductor manufacturing and is falling behind, new fabs in safe allied countries are being completed, Taiwan is getting reinforced, and AGI seems to be on schedule within 5 years. Yes, it could have been done better. But it has gone well enough that advancing petty political agendas took precedence. If there is ever any plausible risk of the US losing control over the global high-end manufacturing chain, I am sure you'll see it go differently.

Cynicism is a cope of dysfunctional people.

Thank you for providing an example. Yes, the point of voting Trump is to Own The Libs, drive them mad. This is exactly what Hanania is talking about.

Like Mitt "the Mormon Theocrat" Romney? Being nice and clean-cut never stopped the attacks from the Democrats.

I suppose we will never know how well Romney would've handled those attacks had he become POTUS.

Check with Claude-instant. It's the same architecture and it's vastly better at factuality than Hlynka.

How's it going my dude?

I don't really want to entangle those but might respond to a DM.

I think my reasons have been justified for years now, and you have not appropriately expressed remorse for your terrible faux-rationalist rhetoric back on reddit before continuing it here, so I'd rather you start making arguments in good faith than have me stop reminding people of your track record. By the way, putting on a layer of patronizing Brahmin politeness improves your performance somewhat, but not by a lot; I suggest you leave it for your workplace.

the way to argue against it

There is no need to argue against a vacuous truism that something is "true" if we assume an arbitrary classification system where it is true; only the infertile mind of a pedant can be satisfied with such an argument. I of course agree that beauty is in the eye of the beholder and that in principle some people can recognize whatever as beautiful, even a slum filled with diseased drug addicts where one gets to read Scholze's papers. And Wittgenstein masturbated to mathematical problems in the trenches of World War I, after all. But the reason we know of this fact is precisely that it is unusual, baffling. Your preferences, if those are your preferences indeed, are marginal, and thus prioritizing them would be unsatisfactory to most people; you're free to put the opposite hypothesis to a democratic test and, I'd expect, get the same result as here in any group except a self-selected minority of mathematical ascetics.

Though there may be something more here. Naturally this ties in to your general theme that white people ought to welcome being replaced by "elite human capital" like yourself. White people did not think to implement a filter for compatible aesthetic preferences, even as they demanded professional merit and some minimum of adherence to the letter of the law; so now there'll be a marginally greater tendency for their habitats to approximate the noisy, debased squalor of the subcontinent, complete with galaxy-brained equivocation excusing this. Too bad.

I do think that a rationally acting misaligned superintelligent AI, which is not a nonsense concept, will not see a reason to engage in spiteful behavior a la Basilisk, and also that the specific sort of irrationality that would make it spiteful is highly improbable to emerge as a result of AI research. But it's not logically absurd, in the way that timeless decision theory powering the vanilla Basilisk is; and if it were for some reason interested in that sort of stuff, I think it'd have been able to torture humans for at least centuries. It'd probably also be able to upload humans or construct random simulations and torture them, for whatever reason.

Condescension is merited if you're asking for it; I mentioned IHNMAIMS in my post and this scenario is materially different from «you might be tortured and killed by a crazy person» in that it posits biological immortality.

I happen to think that a superintelligent being will at the very least be able to greatly prolong the life of its victim, this doesn't conflict with any part of our understanding of «the matter at hand» (we're pretty sure that a healthy brain can outlive the body if you provide it a decent environment). And of course this conjecture, while unproven, is vastly less far-fetched than continuity of identity between a human and a simulacrum built from indirect evidence.

I really recommend reading «I Have No Mouth, and I Must Scream», or at least the synopsis.

Yes, it's an interesting data point. Now, consider that rabbits have only one move in response to myxomatosis: die. Or equivalently: pray to Moloch that he has sent them a miraculously adaptive mutation. They can't conceive of an attack happening, so the only way it can fail is by chance.

Modern humans are like that in some ways, but not with regard to pandemics.

Like other poxviruses, myxoma viruses are large DNA viruses with linear double-stranded DNA.

Myxomatosis is transmitted primarily by insects. Disease transmission commonly occurs via mosquito or flea bites, but can also occur via the bites of flies and lice, as well as arachnid mites. The myxoma virus does not replicate in these arthropod hosts, but is physically carried by biting arthropods from one rabbit to another.

The myxoma virus can also be transmitted by direct contact.

Does this strike you as something that'd wipe out modern humanity just because an infection would be 100% fatal?

Do you think it's just a matter of fiddling with nucleotide sequences and picking up points left on the sidewalk by evolution, Pandemic Inc. style, to make a virus that has a long incubation period, spreads asymptomatically, transmits well through the air, and survives UV and the elements, for instance? Unlike virulence, these traits are evolutionarily advantageous. And so we already have anthrax, smallpox, measles. I suspect they're close to the limits of the performance envelope allowed by the relevant biochemistry and characteristic scales; close enough that computation won't get us much closer than contemporary wet-lab efforts, and so it's not the bottleneck to the catastrophe.

Importantly, tool AIs – which, contra Yud's predictions, have started being very useful before displaying misaligned agency – will reduce the attack surface by improving our logistics and manufacturing, monitoring, strategizing, communications… The world of 2025 with uninhibited AI adoption, full of ambient DNA sensors, UV filters, decent telemedicine and full-stack robot delivery, would not get rekt by COVID. It probably wouldn't even be fazed by MERS-tier COVID. And seeing as there exist fucking scary viruses that may one day naturally jump to humans, or be easily modified to target them, we may want to hurry.

People underestimate the vast potential upside of early-Singularity economics, that which must be secured: the way a more productive – but still recognizable – world could be more beautiful, safe and humane. The negativity bias is astounding: muh lost jerbs, muh art, crisis of meaning, corporations bad, what if much paperclip. Boresome killjoys.

(To an extent I'm also vulnerable to this critique).

But my real source of skepticism is on the meta level.

Real-world systems rapidly gain complexity, create nontrivial feedback loops, dissipative dynamics on many levels of organization, and generally drown out propagating aberrant signals and replicators. This is especially true for systems with responsive elements (like humans). If it weren't the case, we'd have had 10 apocalyptic happenings every week. It is a hard technical question whether your climate change, or population explosion, or nuclear explosion in the atmosphere, or the worldwide Communist revolution, or the Universal Cultural Takeover, or the orthodox grey goo, or a superpandemic, or a stable strangelet, or a FOOMing superintelligence, is indeed a self-reinforcing wave or another transient eddy on the surface of history. But the boring null hypothesis is abbreviated on Solomon's ring: יזג. Gimel, Zayin, Yud. «This too shall pass».

Speaking of Yud, he despises the notion of complexity.

This is a story from when I first met Marcello, with whom I would later work for a year on AI theory; but at this point I had not yet accepted him as my apprentice. I knew that he competed at the national level in mathematical and computing olympiads, which sufficed to attract my attention for a closer look; but I didn’t know yet if he could learn to think about AI.

At some point in this discussion, Marcello said: “Well, I think the AI needs complexity to do X, and complexity to do Y—”

And I said, “Don’t say ‘_complexity_.’ ”

Marcello said, “Why not?”

… I said, “Did you read ‘A Technical Explanation of Technical Explanation’?”

“Yes,” said Marcello.

“Okay,” I said. “Saying ‘complexity’ doesn’t concentrate your probability mass.”

“Oh,” Marcello said, “like ‘emergence.’ Huh. So . . . now I’ve got to think about how X might actually happen . . .”

That was when I thought to myself, “_Maybe this one is teachable._”

I think @2rafa is correct that Yud is not that smart, more like an upgraded midwit, like most people who block me on Twitter – his logorrhea is shallow and soft, and I've never felt in him the formidability that I sense in many mid-tier scientists, regulars here or some of my friends (I'll grant that he's a very strong writer, though; pre-GPT writers didn't have to be brilliant). But crucially, he's intellectually immature, and so is the culture he has nurtured, a culture that's obsessed with relatively shallow questions. He's stuck on the level of «waow! big number go up real quick», the intoxicating insight that some functions are super-exponential; and it irritates him when they fizzle out. This happens to people with mild autism if they have the misfortune of getting nerd-sniped at first base, arithmetic. In clinical terms, that's hyperlexia II. (A seed of an even more uncharitable neurological explanation can be found here). Some get qualitatively farther and get nerd-sniped by more sophisticated things – say, algebraic topology. In the end it's all fetish fuel, not analytic reasoning, and real life is not the Game of Life, no matter how Turing-complete the latter is; it's harsh for replicators and recursive self-improovers. Their formidability, like Yud's, needs to be argued for.

I do not see why the existence of potential entities that "emulate" me in such a theoretical fashion precludes me from caring about the more prosaic/physical instantiations.

That's because you fail to seriously ask yourself what the word "computation" means (and likewise for other relevant words). A given computation's outputs are interpreted one way or another with regard to a decoder, but your approach makes the decoder and in fact the decoding irrelevant: you claim, very confidently, that so long as some entity, no matter how inanely arranged, how fragmented in space and time, "computes you" (as in, is made up of physical elements producing events which can be mapped to bit sequences which, together with other parts of this entity and according to some rules, can be interpreted as isomorphic with regard to your brain's processes by some software), it causes you to exist and have consciousness – if in some subordinate fashion. Of course it is indefensible and ad hoc to say that it does not compute you just because we do not have a decoder ready at hand to make sense of and impose structure on its "output bits". It is insane to marry your beliefs to a requirement for some localized, interpretable, immediately causal decoding – that's just watered-down Integrated Information Theory, and you do not even deign to acquaint yourself with it, so silly it seems to you!

And well, since (for the purposes of your untenable computational metaphysics) entities and their borders can be defined arbitrarily, everything computes you all the time by this criterion! We do not need a Boltzmann brain or any other pop-sci reference, and indeed it has all been computed already. You, as well as every other possible mind, positively (not hypothetically, not in the limit of the infinite physics – your smug insistence on substrate independence ensures it) have always been existing in all possible states. As such, you do not get to ask for epsilon more.

Either concede that you have never thought about this seriously, or concede that you do not have a legitimate claim to any amount of control over the first-order physical substrate of the Universe since it is not meaningfully privileged for a strict computationalist. Or, really, we can just stop here. At least I will.

Once again, I do not care to enlighten you; you've been given enough to work with, and only hubris and shit taste stop you from reading Koch or grown-up philosophy.

As for Dust Theory, it's been a while since I read half of Permutation City. But I fail to see how it changes anything, my subjective consciousness wouldn't notice if it was being run on abacuses, meat or a supercomputer, or asynchronously. It doesn't track objective time. Besides, I sleep and don't lose sleep over that necessity, the strict linear passage of time is of no consequence to me, as long as it doesn't impede my ability to instantiate my goals and desires.

I've written a bunch, and deleted (your response to the issue of causal power was decisive). The long and short of it is that, being who you are, you cannot see the problem with Dust Theory, and therefore you do not need mind uploading – in the Platonic space of all possibilities, there must exist a Turing machine which will interpret, with respect to some hypothetical decoding software at least, the bits of your rotting and scattering corpse as a computation of a happy ascended SMH in a Kardashev IV utopia. That this machine is not physically assembled seems to be no obstacle to your value system and metaphysics which deny that physical systems matter at all; all that matters, according to you, is ultimate constructibility of a computation. From the Dust Theory perspective, all conceivable agents have infinite opportunity to 'instantiate their goals and desires'. Seeing that, I would ask and indeed try to prevent you from wasting the valuable (for me, a finite physical being) negentropy budget on frivolous and wholly unnecessary locally computed and human-specified simulations which only add an infinitesimal fraction of your preferred computations to the mix.

Call a bugman a bugman and see how he recoils etc.

As I've said already, "sophistication" is not what is needed to see your failures here. Specifically, the distinction between copy-pasting and transposition. Indeed, this is very trivial, children get it, until they are gaslit with sloppy computationalist analogies.

Grasping at straws.

humans have a moral skeleton, innate hardwiring that allows us to learn morality

I suppose we have, to some extent, but it can't be all that robust. Tons of species are psychopathic by our standard, and of course this standard exists to distinguish humans who don't fit it. So it's more like a slight inductive bias, in the same way we have biases to learn to navigate in 3D space and prefer sugary foods. Biases of the algorithm can be substituted with biases in the data.

and believe it (as opposed to mimic it). This is highly instrumentally non-convergent and probably needs to be coded into an AI directly; gradient descent on output will only produce lying psychopaths mimicking morality.

I don't see why that would be true. Indeed, I do not see why gradient descent wouldn't be much better at learning deep cognitive regularities, including morality. You seem to hold that morality is something essential, some set of terminal value-influences, but why is that true for morality and not for any other aspect of our cognition, both the instrumentally worthwhile and the «instrumentally non-convergent» ones? Every part of our decision-making feels profoundly qualitatively colored to us.

Why is "coded directly" better than learned? The major reason we're doing this stuff is that it's vastly better at generalization.

Sorry, this looks like a vibe-based argument, where neural nets deceptively "imitate" and hardcoded rules are "good and honest". It's all algorithms. Inasmuch as human minds are computable, our morality is an algorithm too.

GOFAI has some hope because we could code morality directly

What good would that do? It'd break OOD just the same, and if it didn't break, it'd be rewritten or worked around by the purported daemon of optimization.

But I'm not seeing any hope of success on non-uploads without the ability to look inside the box. This is because "is moral" and "is pretending to be moral successfully" have identical output except in situations where dropping the pretence is worth it i.e. situations where there's a high chance of you losing control upon betrayal.

Reminder that LLMs cannot learn to do the «treacherous turn» because the dreaded SGD mercilessly ablates cognitive routines that do not contribute to decreasing loss in training. This, of course, holds in the general case.

But even beyond that: outputs may be similar, but activations aren't; we know how to look at activations, and we know they differ depending on whether the model internally evaluates its output as true or false.

it seems like the difficulty of determining whether spaghetti-code does X is generally at least as high as the difficulty of writing code that does X, which implies that making safe NNs is at least as hard as writing GOFAI

No, classification is always vastly simpler than generation – unless you require classification that reconstructs the process of generation, of course.

I've long held that the Yuddist program is, in addition to all else, an attractor for a particular variation of anxiety/OCD disorder: fetishization of «being in control», of making thoughts play by «proper rules». But it's fetishization precisely because it doesn't really work: it pursues ghosts, the deceptive external form of reliability. You gain clarity of ideas by testing them against evidence, not by being real suspicious of disconfirmations.

Pitts wrote that his depression might be “common to all people with an excessively logical education who work in applied mathematics: It is a kind of pessimism resulting from an inability to believe in what people call the Principle of Induction, or the principle of the Uniformity of Nature. Since one cannot prove, or even render probable a priori, that the sun should rise tomorrow, we cannot really believe it shall.”

This malady is to be treated, not nurtured.

i don't believe cows or chickens possess a meaningful capacity to suffer. pigs probably suffer more

How does this work on the substrate level? You may like pigs more or whatever but they're clearly organisms on the same level of sophistication as cows. (Naturally humans are not far off from either)

I'm not reporting you, because I practically never do, but I would like to ask you not to use basic slurs. I don't care about them myself, but others may, and this poisons the site for neophytes. Let's not slide into /r/CultureWarRoundup even faster.

On the other hand, opposition to skilled immigration seems super common amongst HBD people here

Aren't you that South Asian rationalist guy who audaciously implied that ethnocentrism is an entirely alien notion to you since you're not white, and that it is impossible to have «reasoned debate» with people who don't want you to immigrate to their countries, because they have «blue-and-orange morality», so the only way to deal with them is censorship?

You sure are good at assimilating: you can learn English and slatestarcodex lingo and whatever else is needed to «pass». I'm sure you pride yourself on this ability to mimic the superficial markers of a cooperating agent. But what matters is not how much you look the part: such «assimilation» is not worth more than changing skin color. What matters is actually, you know, cooperating, including respect for the host's values, even irrational ones.

I've already said all I had to say about you years ago.

Perhaps this «opposition to skilled immigration» is not about skill, nor even primarily about race, but is specifically opposition to sociopathic, uncompromising immigration that immediately sides with one's political enemies and gloats about disempowering the legacy population.

If anyone reads this, you may explain to them how such an opposition is illegitimate or founded on alien moral precepts.

(On another note, it's really funny how @HlynkaCG has corncobbed himself with his philosophical notion of woke Neo-Nazis and other clever inversions. Will we see him arguing that DEI values follow from literalist interpretation of the Constitution and are more American than apple pie, if another moderately suave progressive happens to aid him in his dunking on woke HBDers?)

It sucks as much as any country in its socioeconomic class.

No.

You cannot understand how much India sucks.

Maybe the next generation will. Or the one after that.

To every Indian, I have the same advice. Everything is a dick measuring contest. Grow a bigger (economic) penis. Your economic penis is the only thing that matters.

Wrong too, but expected.