
DaseindustriesLtd

late version of a small language model

74 followers   follows 27 users  
joined 2022 September 05 23:03:02 UTC

Tell me about it.

User ID: 745


How's it going my dude?

Cynicism is a cope of dysfunctional people.

It's an outpost of my civilization, organized on principles I agree with, inhabited by people I could live at ease with. They defend themselves, their borders, their interests, with the kind of vigor and thick-skinned determination I'd like to see my nation display. (If only!) I admire them and wish them well.

John Derbyshire most likely cannot live in Israel or among a representative sample of Israelis any more than he can live among immigrants to the West he hates so much.

I increasingly suspect that the only correct decision for any sane person is converting to Judaism or at the very least relinquishing any claim to being white, because Christian whites are just brain-damaged and cannot tell a universalist ideology (even "nationalism") from a population's game-theoretically advantageous modus operandi. Strong vibe of round-headed Slavic Hitlerists.

It is absurd to assume that a more competent entity sharing your material interests is your ally rather than a competitor.

I don't really want to entangle those but might respond to a DM.

Isn’t that the straightforward point of the gyges story?

The point of the Gyges story is that, as I have said, «morality does not exist and does not differ from conventional etiquette in some substantial objective sense». It is not a singular real thing that exists outside us; it is an arbitrary system of behavioral guidelines that differ between groups and individuals. In the same post, I give my perspective on what we have learned about subjective morality and its difference from mere etiquette since Plato. Like most people, I have a subjective morality that is essentially intuitive and deontological. The Greeks, in Plato's cynical imagination, believed that «the best is to do injustice without paying the penalty»; such was their intuitive internal moral compass, with prosociality a burdensome etiquette indeed. I live millennia later, and my idea of the highest good is different, because I am a product of biological and cultural selection pressures rewarding other priors; I have internalized a version of prosociality to a certain extent. I am not closer to some objectively established moral truth on account of this, I'm just me, doing and perpetuating what I like for contingent historical reasons. I could collapse those philosophical complexities into do's and don'ts – and I do, when speaking to children. But if being actually mush-headed and deluded about this fact is a criterion for admission into the Goodbot Society, instead of being attacked as a defector… Well, then I can only despise goodbots harder, see them even more as immature cattle with heritable neural crest defects, and, against my better judgement, wish them more abuse at the hands of actual defectors – who delight in mouthing mawkish falsehoods, and who reshape your moral code as they see fit through revolting identity-coded morality plays that you myopically mistake for entertainment and education.

We can pick a yardstick like game-theoretical performance (assuming utility can be compared between agents) and compare moral codes with that as our source of objectivity, but this would depend on evaluation conditions; people can argue persuasively that even antisocial punishment is rational, and thus good. The issue with selecting game theory itself as one's moral truth is that you'd need widespread near-omniscience to get out of some horrible local minima, while defection-heavy heuristics remain robust (as do cooperation-heavy ones, while conditions are not horrible). Maybe there exists a perfect solution that maximizes winning across all environments even for avg 100 IQ humans dealing with incomplete information about the reward scheme and an unknown number of sophisticated defectbots in the environment. I do not know it, but even if it does exist, it'd only be objectively correct given our arbitrarily selected (but very handy) benchmark; and for me aesthetics matter about as much. Truth, of course, is also a matter of aesthetics.
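To make the robustness point concrete, here is a minimal sketch in Python: a toy iterated prisoner's dilemma with textbook payoffs, an always-defect bot and tit-for-tat. The strategies, payoffs and population sizes are illustrative assumptions, not a model of anything real; it only shows that whether defection-heavy or cooperation-heavy heuristics come out ahead depends on the population they find themselves in.

```python
import itertools

# Minimal iterated prisoner's dilemma sketch (illustrative payoffs and strategies,
# not a model of human morality): whether defection-heavy or cooperation-heavy
# heuristics come out ahead depends on the surrounding population.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror whatever the opponent did last round.
    return opponent_history[-1] if opponent_history else "C"

def play(strat_a, strat_b, rounds=100):
    seen_by_a, seen_by_b = [], []  # each side's record of the *opponent's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(seen_by_a), strat_b(seen_by_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        seen_by_a.append(b)
        seen_by_b.append(a)
    return score_a, score_b

def tournament(population):
    """Round-robin over a list of (name, strategy); returns mean payoff per name."""
    totals, counts = {}, {}
    for (name_i, s_i), (name_j, s_j) in itertools.combinations(population, 2):
        p_i, p_j = play(s_i, s_j)
        totals[name_i] = totals.get(name_i, 0) + p_i
        totals[name_j] = totals.get(name_j, 0) + p_j
    for name, _ in population:
        counts[name] = counts.get(name, 0) + 1
    return {name: totals[name] / counts[name] for name in totals}

if __name__ == "__main__":
    # "Horrible" conditions: a lone conditional cooperator among defect-bots.
    lonely = [("defect", always_defect)] * 9 + [("tft", tit_for_tat)]
    # Slightly less horrible: a small cluster of conditional cooperators.
    cluster = [("defect", always_defect)] * 7 + [("tft", tit_for_tat)] * 3
    print("lone cooperator:", tournament(lonely))   # defection comes out ahead
    print("small cluster:  ", tournament(cluster))  # cooperation bootstraps
```

With a lone conditional cooperator among defect-bots, defection wins the tournament; with a small cluster of cooperators, cooperation bootstraps, which is roughly the «while conditions are not horrible» caveat above.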

This should clarify my position enough.

Although I’m on the record condoning the murder of any and all authority figures in nazi germany, so if I was who you think I am, we don’t even disagree on that

Good. Hopefully you realize that I take seriously what you say about your methodology for evaluating «signals», not as an edgy maximalist posture (even though you throw that in too). I know people who blithely speak and live by your code, where success at attaining power justifies itself, and I consider them being alive and unbothered my personal failing, as well as a collective failure of my civilization.

I think you could benefit from adopting a less subjective view of morality

Well, here's what I think: you can admit that you are indeed no better than Nazi authority figures; or admit that your «signals» methodology is laughably bad, does not result in nice conclusions like conventional Western morality as you want it to, and so should be ditched; or, indeed, just stop.

so I’ll make this the last time we talk

Good luck.

https://youtube.com/watch?v=65zOlQV1qto&t=1854

Looking back at it I've read too much into his words. I do think it's a possible interpretation though.

@Amadan is it fair to call this gibberish? I suppose he can't be thinking he makes a solid point.

I think behavioral zombies defined as such are just not interesting in the age of LLMs. It doesn't take much to fool people.

A subtler hypothetical subtype of a behavioral zombie that actually precisely matches a specific person's behavior – one that is not pre-recorded but generated by the zombie's own causality in the same situations – might be interesting though, and I think it amounts to the neurological one, or contains it somehow.

Grasping at straws.

In this case, the LLM could perfectly well be a P-zombie

No.

An LLM does not, as far as we know, employ an actual physical human brain for computation. A [strong version of] p-zombie does; its causal chains are exactly the same as in our brains, it is not an arbitrary Turing-test-passing AI. I think that it "feels like something" to be an LLM computation too, but it very likely doesn't feel like having human qualia.

It is obviously unwarranted to say that a system that can ape a human with its behaviors computes a human mind or any part thereof; among other reasons, humans can have low standards. And in general, our external behaviors are a low-dimensional, lossy and noisy projection of our internal states, so the latter cannot be fully inferred from the former, at least in realistic time (I think).

My argument hinges on the fact that a brain contains events that, from an information perspective, suffice to be described as qualia with regard to other events (which are described as sensations). It is coherent to speculate that e.g. there is such a thing as an immaterial human soul and that it does not parse these events, and instead works in some other way. It is not coherent to say that they exist but also don't exist.

Thank you for providing an example. Yes, the point of voting Trump is to Own The Libs, drive them mad. This is exactly what Hanania is talking about.

Like Mitt "the Mormon Theocrat" Romney? Being nice and clean-cut never stopped the attacks from the Democrats

I suppose we will never know how well Romney would've handled those attacks were he to become POTUS.

Condescension is merited if you're asking for it; I mentioned IHNMAIMS in my post and this scenario is materially different from «you might be tortured and killed by a crazy person» in that it posits biological immortality.

I happen to think that a superintelligent being will at the very least be able to greatly prolong the life of its victim, this doesn't conflict with any part of our understanding of «the matter at hand» (we're pretty sure that a healthy brain can outlive the body if you provide it a decent environment). And of course this conjecture, while unproven, is vastly less far-fetched than continuity of identity between a human and a simulacrum built from indirect evidence.

I really recommend reading «I Have No Mouth, and I Must Scream», or at least a synopsis.

Well, it adds the important dimension of unfriendly AI being spiteful and making a friend-enemy distinction. Actually, let's just scratch the whole alignment part and say Conjecture or Anthropic announce that, should they build a God-like AI, they'll go after people who tried to get in their way and torture them, while rewarding allies with an extra share of the light cone. If they were very close to success, this would be straightforwardly persuasive.

humans have a moral skeleton, innate hardwiring that allows us to learn morality

I suppose we have, to some extent, but it can't be all that robust. Tons of species are psychopathic by our standard, and of course this standard exists to distinguish humans who don't fit it. So it's more like a slight inductive bias, in the same way we have biases to learn to navigate in 3D space and prefer sugary foods. Biases of the algorithm can be substituted with biases in the data.

and believe it (as opposed to mimic it). This is highly instrumentally non-convergent and probably needs to be coded into an AI directly; gradient descent on output will only produce lying psychopaths mimicking morality.

I don't see why that would be true. Indeed, I do not see why gradient descent wouldn't be much better at learning deep cognitive regularities, including morality. You seem to hold that morality is something essential, some set of terminal value-influences, but why is that true for morality and not for any other aspect of our cognition, both the instrumentally worthwhile and the «instrumentally non-convergent» ones? Every part of our decision-making feels profoundly qualitatively colored to us.

Why is "coded directly" better than learned? The major reason we're doing this stuff is that it's vastly better at generalization,

Sorry, this looks like a vibe-based argument, where neural nets deceptively "imitate" and hard code is "good and honest". It's all algorithms. Inasmuch as human minds are computable, our morality is an algorithm too.

GOFAI has some hope because we could code morality directly

What good would that do? It'd break OOD just the same, and if it didn't break, it'd be rewritten or worked around by the purported daemon of optimization.

But I'm not seeing any hope of success on non-uploads without the ability to look inside the box. This is because "is moral" and "is pretending to be moral successfully" have identical output except in situations where dropping the pretence is worth it i.e. situations where there's a high chance of you losing control upon betrayal.

Reminder that LLMs cannot learn to do the «treacherous turn» because the dreaded SGD mercilessly ablates cognitive routines that do not contribute to decreasing loss in training. This, of course, holds in the general case.

But even beyond that, outputs may be similar but activations aren't; we know how to look at activations, and we know there are detectable differences in activations between the model subjectively evaluating its output as true and evaluating it as false.
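As an illustration of the kind of probing I mean, here is a toy sketch. The «activations» below are synthetic stand-ins (two noisy clusters separated along a made-up «truth» direction); in practice you would collect hidden states from a specific layer of a real model on statements it treats as true vs. false, so every shape and name here is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy linear probe on activations. In a real experiment, rows of `acts` would be
# hidden states captured from one layer of a model while it emits statements it
# "believes" true vs. false; here they are synthetic stand-ins, so the resulting
# accuracy only illustrates the mechanics of probing, not any real model.

rng = np.random.default_rng(0)
d = 256                                 # hidden size (placeholder)
n = 2000                                # number of collected activation vectors

direction = rng.normal(size=d)          # pretend "truthfulness direction"
labels = rng.integers(0, 2, size=n)     # 1 = model treats its output as true
acts = rng.normal(size=(n, d)) + np.outer(labels - 0.5, direction)

X_train, X_test, y_train, y_test = train_test_split(acts, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy on held-out activations:", probe.score(X_test, y_test))
```

The point is only that a simple linear probe can read off internal states a distinction that is invisible if you compare outputs alone.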

it seems like the difficulty of determining whether spaghetti-code does X is generally at least as high as the difficulty of writing code that does X, which implies that making safe NNs is at least as hard as writing GOFAI

No, classification is always vastly simpler than generation, unless you require classification that reconstructs the process of generation, of course.

I've long held that the Yuddist program is, in addition to all else, an attractor for a particular variation of anxiety/OCD disorder: fetishization of «being in control», of making thoughts play by «proper rules». But it's fetishization because it doesn't really work, it pursues ghosts, precisely the deceptive external form of reliability. You gain clarity of ideas by testing them against evidence, not by being real suspicious of disconfirmations.

Pitts wrote that his depression might be “common to all people with an excessively logical education who work in applied mathematics: It is a kind of pessimism resulting from an inability to believe in what people call the Principle of Induction, or the principle of the Uniformity of Nature. Since one cannot prove, or even render probable a priori, that the sun should rise tomorrow, we cannot really believe it shall.”

This malady is to be treated, not nurtured.

i don't believe cows or chickens possess a meaningful capacity to suffer. pigs probably suffer more

How does this work on the substrate level? You may like pigs more or whatever but they're clearly organisms on the same level of sophistication as cows. (Naturally humans are not far off from either)

Yes, I'm doing what I… feel like doing to preclude that outcome, although it is suboptimal to waste effort in terms of remaining expected utility, by my count. Now, I'm not a utilitarian. But normal influences of reward and punishment do mold my actions, so I don't feel like doing a whole lot.

The go-to pivotal action to avoid torture, on the other hand, is obvious enough and reliable. I don't put much stock in the Basilisk.

If you don't have children and want to become a transhumanist immortal being, you shouldn't trust me

Which I won't, but more due to your rabid tribalism and clear bad faith than these differences. I'll note that I've always wanted to and still hope to have a large traditional family besides living forever as an uplifted human (the question of whether this, combined with my values and probable tolerance for further self-alteration, would initiate a slide into profound non-humanity and disconnect has concerned me since, like, age 6), but that's neither here nor there.

Self-serving? of course! So are all of your positions.

No. If you admit this, you concede that your arguments about «stake» are disingenuous. I do not have to concede anything of this sort.

DaeschIndustries and Chrispratt, seem stupified and angry at the idea that I might endorse the third guy, at the expense of the other two because this isn't dEmOcRaTic. I have my values and want to see them survive. Democracy is not a terminal value.

I also don't worship democracy. The point of my comment about democracy is that there is no agreeable external standard of a «good vision». Everything resolves either with a negotiated consensus or with a power competition that ends in more or less disagreeable and unequal compromises. We don't have power struggles here, so you've got to argue why your offer is better even by the standards of others. Perhaps you can address their higher-order values, showing why your standards allow for those to be satisfied better. Maybe you can offer some concession. Doubling down on asserting that your stuff is gooder and you are gooder is not productive.

Most irritatingly, there's a clever bait and switch with definitions of stake you use.

Here, you claim that your vision advances the common good simply because it is… good. Also aligned with people you agree with and whose satisfaction is more important by your account. So it's a «stake» not in a future where humanity thrives, but in the particular future with a version of thriving you prefer for your internal reasons, in a word – a preference. Okay. Naturally everyone thinks his preferred values are the best, else he'd have abandoned them. But this is just circular. This isn't a serious attempt to persuade: you ask that your collective values be respected (and in practice, you clearly hope to preclude the realization of other values), and if your numbers are sufficient, you demand that they be given supremacy. (You also clearly desire incompatibility – with the presumption your party will come out on top and snuff out others – because you find other visions morally abhorrent, a negative irrespective of contingent factors; you have a stake not simply in the future where baseline humans can peacefully exist, but where others cannot. But that's okay too. Most people this serious about religion are genocidal in their heart of hearts, I think, and for the most part they can behave themselves).

However, in your original comment, you did try to persuade. You argued that your political preferences, and those of other parents, are inherently more deserving of trust because your values and traits, chiefly having children (and wanting yourself and them to die, for whatever reason), give you «a stake» in the common long-term flourishing of humanity: according to this logic, you have skin in the game and it gives you an incentive to make more responsible choices than others, in this context, apparently wrt AI progress. This is how I understand e.g. the following.

I certainly hope this is just my bias showing, but I have greater fear for Altman types running the show than Musks because they are a few extra steps removed from stake in future civilization. We know that Musk wants to preserve humanity for his children and his grandchildren. Can we be sure that's anymore than an abstract good for Altman?

I counter that this is bad psychology. Why would Altman (or me, or selfmadehuman, or even fruitier types in my list above) have less of a subjective stake? If he personally intends to be present indefinitely, he totally has a massive stake; we aren't debating whether his plan will work out but simply whether his idea of his stake in the future motivates him to act responsibly to effect less risky outcomes for the common good, in this case lesser odds of a rogue AI wiping out humanity like Eliezer fears (it sounds improbable that a misaligned AI would wipe out everyone but Altman; I'll leave the topic of Altman-aligned omnicidal singleton aside, though it is important in its own right).

Perhaps your brain is overloaded with oxytocin and so you feel that, since Altman doesn't have children like you do, he cannot act seriously: children are obviously (to you) the most valuable existence in the world, more important to you than you are, and Altman is not tethered to anything as important. I can easily believe that Altman cares more about his livelihood than you do about your entire family combined, and thus has a greater «stake». In any case, this is just psychological speculation about the magnitude of perceived value from humanity not getting whacked. I cannot look into your head any more than I can look into Altman's. I could also argue that Christians cannot be serious consequentialists, nor give much of a shit about preventing Apocalypse ≈indefinitely, and their stake is phony since the whole premise of their faith is eternal blissful immortality conditional on faithfulness to some deontological rules; so even Altman with his assumed materialistic egoism is more reliable. I won't, because this is an entirely worthless line of debate.

Can you appreciate the difference and why equivocation between those senses of the stake would irritate?

More mundanely, society simply respects parents because it perpetuates itself through their procreation (and because this signals some baseline competence, under non-dysgenic conditions at least); and parents are hardwired to egoistically demand a greater share of the common pie – a greater stake, one could say – on behalf of their progeny, cravenly submit to any intimidation when that seems to protect their children, psychotically denigrate, belittle and rip off childless people (who end up feeling irrational shame), and do other silly things. This might be necessary for the system to work and, in fact, I've recommended doubling down on such uncouth behaviors.

Personally I am constitutionally incapable of feeling shame for being correct, though.

(Note I'm not Misha, although I lean towards endorsing his «leak»).

I am not sure this has even happened. Any independent replications?

But if it did, they probably used a more complex approach explicitly built for text-heavy workloads, like adding the OCR perceptual loss from OCR-VQGAN.

Or something else entirely. They have a vision team after all.
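For what that could look like mechanically, here is a minimal sketch of a perceptual loss computed in the feature space of a frozen text-recognition encoder, on top of an ordinary pixel loss. The encoder below is a randomly initialized stand-in, not the actual OCR-VQGAN network; the architecture, weights and names are placeholders for whatever pretrained OCR backbone would really be loaded.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of an OCR-style perceptual loss: compare reconstructions to targets in the
# feature space of a frozen text-recognition encoder, in addition to the pixel loss.
# StandInOCREncoder is a random placeholder, NOT the real OCR-VQGAN model.

class StandInOCREncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1),
        )

    def forward(self, x):
        return self.features(x)

ocr_encoder = StandInOCREncoder().eval()
for p in ocr_encoder.parameters():
    p.requires_grad_(False)              # frozen: it only provides a feature space

def reconstruction_loss(recon, target, perceptual_weight=1.0):
    pixel = F.l1_loss(recon, target)
    with torch.no_grad():
        target_feats = ocr_encoder(target)
    recon_feats = ocr_encoder(recon)     # gradients flow into `recon` only
    perceptual = F.l1_loss(recon_feats, target_feats)
    return pixel + perceptual_weight * perceptual

if __name__ == "__main__":
    target = torch.rand(2, 3, 64, 64)    # pretend document crops
    recon = torch.rand(2, 3, 64, 64, requires_grad=True)
    loss = reconstruction_loss(recon, target)
    loss.backward()
    print(float(loss))
```

The design point is simply that the reconstruction gets penalized for losing features a text recognizer cares about, not only for per-pixel error.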

I gave those reasons, Israel is an important ally against Iran

It is indeed unclear that the US needs an ally against Iran, or that there are significant reasons, beyond protecting Israeli interests, to continuously antagonize Iran, which is probably the oil-rich Muslim state with the highest human capital.

You treat the US as the decision-making party here. IMO that's clearly wrong: Israel decides on the basis of its interests, and the US rationalizes support provided under lobbyist pressure.

I think Israel can and should stop building settlements

It's telling that the official US position seems to be the same, yet it is so glaringly impossible to effect change that the topic is barely brought up. The same logic applies to the gag order on Israeli nukes.

The US-Israeli relationship is not a reciprocal alliance. Americans have about as much reason to fight Iran as Belarus has to invade Ukraine. And it's as laughable to pretend that the US supports Israel out of geopolitical self-interest as it is to explain the Iraq war with oil.

I already have information about the world I'm in. It's a world where comfortable immortality is far away and out of reach for me. Your argument is backwards, most of the probability mass with conscious humans will be in those world's where immortality is nice and easy, but I know which world I live in now. I am embodied in time right now.

Consider that you aren't 100% sure of being a reliable narrator, and that the uncertainty, however minuscule, is greater than the odds of spontaneous physical miracles – as per @sodiummuffin's logic. Conditional on you invariably ending up alive, you will... not have experienced lethal harms that cannot be survived without magic; and if it very convincingly looks to you as if you had experienced them, well, maybe that was just some error? A nightmare, a psychedelic trip, a post-singularity VR session with memory editing...

I woke up today from a realistic dream where I got crippled and blinded by a battery pack explosion. In its (and, in a sense, my own) final moments, I consciously chose the alternate reality relative to which that world was a dream, focused my awareness, and realized that this has happened many times before – in other worlds I had escaped by simply waking up into this one. (This reminded me: I've never read Carlos Castaneda but he probably wrote about this stuff? Sent me on a binge. Yeah, that's one of his topics, mages jumping between apparent universes that should be ontologically unequal).

Dreams aside, I feel like the idea of quantum immortality is unfortunately all tangled up with the idea of the observer effect. As per QI, you aren't immortal across the board – you die, and soon, in the vast majority of timelines observed by any other consciousness, just like all humans who have died before our time. You are, right now, in a timeline you observe (though as noted above, only probably) – and presumably you aren't yet dying any more than any other person who's exposed to normal risks and aging. The idea is that you do indeed die in those scenarios where you eat an explosion, develop malignant tumors, are lying in a dump bleeding out all alone with no chance of survival, or are 80 years old in 1839; but those are counterfactuals, not real timelines, and the you who doesn't die, the person typing these comments, doesn't get into them. If it looks to you as if you did, and QI is right – you being wrong is more likely than a miracle.

See edits.

Anyway, what I'm asserting is that the presence of bluster of this kind is not enough to disqualify the proposition as «uniformly just nonsense». Something like 25% of Uriah's conjectures may well prove correct. If I were to bet on it, I'd say that the brachycephalization-domestication thesis, the episodic memory vs. «stamp collecting» adaptations, and the bit about Oceanian quivering smell less like bullshit than the Japanese Omega-3 one, but all of it is within the realm of sane academic hypotheses, if not Overton-compliant ones.

Milk Lobe is... controversial in my mind.

Well, Galton got further than 90% of modern social scientists (whose received wisdom still looks like this) with just that plus a bit of math. Directionally the same, it seems, happened for the whole Hajnal line discourse, the Indian Aryan issue and other topics. We've been hearing a lot of cackling from the wannabe sophisticates about stupid racist nazi chuds obsessed with foreheads and brow heights or using CaLiPeRS to reach conclusions about intelligence; but time and time again it seems like calipers work, to an extent. So I think there are grounds for cautious optimism about this approach.

Pre-20th century, or perhaps more to the point, pre-Civil Rights guys were less technically informed but also less mindkilled, and could reason freely on the basis of a now-unattainable purity of real experience; explicitly rechecking and refining their intuitions with modern tools could be a legitimate way to revitalize anthropology in the broadest sense.

That said, wilder schizoposter accs are merely riffing off the aesthetic of gentlemen scientists, if not scholars of the occult.

«Schizoposters» with more or less believable hot takes about human biodiversity and essential qualities of «races» are prevalent on the esoteric right Twitter, chiefly in BAP's circles. The greatest among them all was Hakan Rotwrmt (RIP), and MIYA BLACK HEARTED CYBER ANGEL BABY was a close second; both were apparently collective accounts run by extraordinarily witty people.

See our resident spammer Carlsbad (?) burrowing into the BAP network.

I haven't read his most recent thread - but his output before that was uniformly just nonsense, and there wasn't any truth or value in it whatsoever, not even in an 'adjacent to truth' sense.

On what basis are you saying this? His inferences strain credulity, but he cites genuine data.