DaseindustriesLtd

late version of a small language model

65 followers   follows 27 users   joined 2022 September 05 23:03:02 UTC

Tell me about it.

User ID: 745


I really recommend reading «I Have No Mouth, and I Must Scream», or at least a synopsis.

Well, it adds the important dimension of an unfriendly AI being spiteful and making a friend-enemy distinction. Actually, let's just scratch the whole alignment part and say Conjecture or Anthropic announce that, should they build a God-like AI, they'll go after people who tried to get in their way and torture them, while rewarding allies with an extra share of the light cone. If they were very close to success, this would've been straightforwardly persuasive.

humans have a moral skeleton, innate hardwiring that allows us to learn morality

I suppose we have, to some extent, but it can't be all that robust. Tons of species are psychopathic by our standard, and of course this standard exists to distinguish humans who don't fit it. So it's more like a slight inductive bias, in the same way we have biases to learn to navigate in 3D space and prefer sugary foods. Biases of the algorithm can be substituted with biases in the data.

and believe it (as opposed to mimic it). This is highly instrumentally non-convergent and probably needs to be coded into an AI directly; gradient descent on output will only produce lying psychopaths mimicking morality.

I don't see why that would be true. Indeed, I do not see why gradient descent wouldn't be much better at learning deep cognitive regularities, including morality. You seem to hold that morality is something essential, some set of terminal value-influences, but why is that true for morality and not any other aspect of our cognition, both instrumentally worthwhile and «instrumentally non-convergent» ones? Every part of our decision-making feels profoundly qualitatively colored to us.

Why is "coded directly" better than learned? The major reason we're doing this stuff is that it's vastly better at generalization,

Sorry, this looks like a vibe-based argument, where neural nets deceptively "imitate" and hard-coded rules are "good and honest". It's all algorithms. Inasmuch as human minds are computable, our morality is an algorithm too.

GOFAI has some hope because we could code morality directly

What good would that do? It'd break OOD just the same, and if it didn't break, it'd be rewritten or worked around by the purported daemon of optimization.

But I'm not seeing any hope of success on non-uploads without the ability to look inside the box. This is because "is moral" and "is pretending to be moral successfully" have identical output except in situations where dropping the pretence is worth it i.e. situations where there's a high chance of you losing control upon betrayal.

Reminder that LLMs cannot learn to do the «treacherous turn» because the dreaded SGD mercilessly ablates cognitive routines that do not contribute to decreasing loss in training. This, of course, holds in the general case.

But even beyond that: outputs may be similar, but activations aren't; we know how to look at activations, and we know there are detectable differences in them between the model evaluating its own output as true versus false.
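For concreteness, the kind of activation-level check I mean is a linear «truth probe» along the lines of published interpretability work. A minimal sketch, assuming you have already collected hidden-state vectors for statements the model reads (the extraction step is model-specific and elided here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_truth_probe(acts_true, acts_false):
    """Fit a linear probe on hidden-state vectors recorded while the model reads
    statements known to be true (acts_true) vs. known to be false (acts_false).
    Both are (n_statements, hidden_dim) arrays; how you collect them depends on
    the model and the layer you hook."""
    X = np.vstack([acts_true, acts_false])
    y = np.concatenate([np.ones(len(acts_true)), np.zeros(len(acts_false))])
    return LogisticRegression(max_iter=1000).fit(X, y)

# If such a probe keeps well-above-chance accuracy on held-out statements, the
# activations carry a linearly readable "the model treats this as true/false"
# signal even where the sampled outputs alone would not tell you.
```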

it seems like the difficulty of determining whether spaghetti-code does X is generally at least as high as the difficulty of writing code that does X, which implies that making safe NNs is at least as hard as writing GOFAI

No, classification is always vastly simpler than generation, unless, of course, you require a classification that reconstructs the process of generation.

I've long held that Yuddist program is, in addition to all else, an attractor for a particular variation of anxiety/OCD disorder: fetishization of «being in control», of making thoughts play by «proper rules». But it's fetishization because it doesn't really work, it pursues ghosts, precisely the deceptive external form of reliability. You gain clarity of ideas by testing them against evidence, not by being real suspicious of disconfirmations.

Pitts wrote that his depression might be “common to all people with an excessively logical education who work in applied mathematics: It is a kind of pessimism resulting from an inability to believe in what people call the Principle of Induction, or the principle of the Uniformity of Nature. Since one cannot prove, or even render probable a priori, that the sun should rise tomorrow, we cannot really believe it shall.”

This malady is to be treated, not nurtured.

i don't believe cows or chickens possess a meaningful capacity to suffer. pigs probably suffer more

How does this work on the substrate level? You may like pigs more or whatever, but they're clearly organisms on the same level of sophistication as cows. (Naturally, humans are not far off from either.)

Yes, I'm doing what I… feel like doing to preclude that outcome, although by my count it is suboptimal to waste the effort, in terms of remaining expected utility. Now, I'm not a utilitarian. But normal influences of reward and punishment do mold my actions, so I don't feel like doing a whole lot.

The go-to pivotal action to avoid torture, on the other hand, is obvious enough and reliable. I don't put much stock in the Basilisk.

If you don't have children and want to become a transhumanist immortal being, you shouldn't trust me

Which I won't, but more due to your rabid tribalism and clear bad faith than these differences. I'll note that I've always wanted to and still hope to have a large traditional family besides living forever as an uplifted human (the question of whether this, combined with my values and probable tolerance for further self-alteration, would initiate a slide into profound non-humanity and disconnect has concerned me since, like, age 6), but that's neither here nor there.

Self-serving? of course! So are all of your positions.

No. If you admit this, you concede that your arguments about «stake» are disingenuous. I do not have to concede anything of this sort.

DaeschIndustries and Chrispratt seem stupefied and angry at the idea that I might endorse the third guy at the expense of the other two, because this isn't dEmOcRaTic. I have my values and want to see them survive. Democracy is not a terminal value.

I also don't worship democracy. The point of my comment about democracy is that there is no external standard of a «good vision» that everyone can agree on. Everything resolves either into a negotiated consensus or into a power competition that ends in more or less disagreeable and unequal compromises. We don't have power struggles here, so you've got to argue why your offer is better even by the standards of others. Perhaps you can address their higher-order values, showing why your standards allow those to be satisfied better. Maybe you can offer some concession. Doubling down on asserting that your stuff is gooder and you are gooder is not productive.

Most irritatingly, there's a clever bait-and-switch in the definitions of «stake» that you use.

Here, you claim that your vision advances the common good simply because it is… good. Also aligned with people you agree with and whose satisfaction is more important by your account. So it's a «stake» not in a future where humanity thrives, but in the particular future with a version of thriving you prefer for your internal reasons, in a word – a preference. Okay. Naturally everyone thinks his preferred values are the best, else he'd have abandoned them. But this is just circular. This isn't a serious attempt to persuade: you ask that your collective values be respected (and in practice, you clearly hope to preclude the realization of other values), and if your numbers are sufficient, you demand that they be given supremacy. (You also clearly desire incompatibility – with the presumption your party will come out on top and snuff out others – because you find other visions morally abhorrent, a negative irrespective of contingent factors; you have a stake not simply in the future where baseline humans can peacefully exist, but where others cannot. But that's okay too. Most people this serious about religion are genocidal in their heart of hearts, I think, and for the most part they can behave themselves).

However, in your original comment, you did try to persuade. You argued that your political preferences, and those of other parents, are inherently more deserving of trust because your values and traits, chiefly having children (and wanting yourself and them to die, for whatever reason), give you «a stake» in the common long-term flourishing of humanity: according to this logic, you have skin in the game and it gives you an incentive to make more responsible choices than others, in this context, apparently wrt AI progress. This is how I understand e.g. the following.

I certainly hope this is just my bias showing, but I have greater fear for Altman types running the show than Musks because they are a few extra steps removed from stake in future civilization. We know that Musk wants to preserve humanity for his children and his grandchildren. Can we be sure that's any more than an abstract good for Altman?

I counter that this is bad psychology. Why would Altman (or me, or selfmadehuman, or even fruitier types in my list above) have less of a subjective stake? If he personally intends to be present indefinitely, he totally has a massive stake; we aren't debating whether his plan will work out but simply whether his idea of his stake in the future motivates him to act responsibly to effect less risky outcomes for the common good, in this case lesser odds of a rogue AI wiping out humanity like Eliezer fears (it sounds improbable that a misaligned AI would wipe out everyone but Altman; I'll leave the topic of Altman-aligned omnicidal singleton aside, though it is important in its own right).

Perhaps your brain is overloaded with oxytocin and so you feel that, since Altman doesn't have children like you do, he cannot act seriously: children are obviously (to you) the most valuable existence in the world, more important to you than you are, and Altman is not tethered to anything as important. I can easily believe that Altman cares more about his livelihood than you do about your entire family combined, and thus has a greater «stake». In any case, this is just psychological speculation about the magnitude of perceived value from humanity not getting whacked. I cannot look into your head any more than I can look into Altman's. I could also argue that Christians cannot be serious consequentialists, nor give much of a shit about preventing the Apocalypse ≈indefinitely, and that their stake is phony, since the whole premise of their faith is eternal blissful immortality conditional on faithfulness to some deontological rules; so even Altman with his assumed materialistic egoism is more reliable. I won't, because this is an entirely worthless line of debate.

Can you appreciate the difference and why equivocation between those senses of the stake would irritate?

More mundanely, society simply respects parents because through their procreation it perpetuates itself (also because this signals some baseline competence, at least under non-dysgenic conditions); and parents are hardwired to egoistically demand a greater share of the common pie – a greater stake, one could say – on behalf of their progeny, to cravenly submit to any intimidation when that seems to protect their children, to psychotically denigrate, belittle and rip off childless people (who end up feeling irrational shame), and to do other silly things. This might be necessary for the system to work and, in fact, I've recommended doubling down on such uncouth behaviors.

Personally I am constitutionally incapable of feeling shame for being correct, though.

(Note I'm not Misha, although I lean towards endorsing his «leak»).

I am not sure this has even happened. Any independent replications?

But if it did, they probably used a more complex approach explicitly built for text-heavy workloads, like adding the OCR perceptual loss from OCR-VQGAN.

Or something else entirely. They have a vision team after all.
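To be clear about what «OCR perceptual loss» means here: as I understand OCR-VQGAN, it is a perceptual loss computed with features from a frozen, pretrained text-recognition backbone, added on top of the usual reconstruction terms so that garbled glyphs get penalized. A minimal PyTorch sketch, where `ocr_encoder` is a hypothetical stand-in for such a backbone returning a list of intermediate feature maps (not the paper's actual code):

```python
import torch
import torch.nn.functional as F

def ocr_perceptual_loss(ocr_encoder, real_images, recon_images):
    """Penalize reconstructions whose glyph-level features diverge from the
    originals, as seen by a frozen, pretrained text-recognition backbone.
    ocr_encoder(images) is assumed to return a list of feature maps."""
    with torch.no_grad():
        target_feats = ocr_encoder(real_images)   # targets, no gradient needed
    recon_feats = ocr_encoder(recon_images)       # gradients flow to the generator
    losses = [F.l1_loss(r, t) for r, t in zip(recon_feats, target_feats)]
    return torch.stack(losses).mean()
```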

I gave those reasons, Israel is an important ally against Iran

It is indeed unclear that the US needs an ally against Iran, or that there are significant reasons to continuously antagonize Iran (probably the Muslim state with the highest human capital, and oil-rich besides) that do not amount to protecting Israeli interests.

You treat the US as the decision-making party here. IMO that's clearly wrong: Israel decides on the basis of its interests, and the US rationalizes support provided under lobbyist pressure.

I think Israel can and should stop building settlements

It's telling that the official US position seems to be the same, yet it is so glaringly impossible to effect change that the topic is barely brought up. The same logic applies to the gag order on Israeli nukes.

The US-Israeli relationship is not a reciprocal alliance. Americans have about as much reason to fight Iran as Belarus has to invade Ukraine. And it's as laughable to pretend that the US supports Israel out of geopolitical self-interest as it is to explain the Iraq war with oil.

I already have information about the world I'm in. It's a world where comfortable immortality is far away and out of reach for me. Your argument is backwards: most of the probability mass with conscious humans will be in those worlds where immortality is nice and easy, but I know which world I live in now. I am embodied in time right now.

Consider that you aren't 100% sure of being a reliable narrator, and that this uncertainty, however minuscule, is greater than the odds of spontaneous physical miracles – as per @sodiummuffin's logic. Conditional on you invariably ending up alive, you will... not have experienced lethal harms that cannot be survived without magic; and if it very convincingly looks to you as if you had experienced them, well, maybe that was just some error? A nightmare, a psychedelic trip, a post-singularity VR session with memory editing...

I woke up today from a realistic dream where I got crippled and blinded by a battery pack explosion. In its (and, in a sense, my own) final moments, I consciously chose the alternate reality relative to which that world was a dream, focused my awareness, and realized that this has happened many times before – in other worlds I had escaped by simply waking up into this one. (This reminded me: I've never read Carlos Castaneda but he probably wrote about this stuff? Sent me on a binge. Yeah, that's one of his topics, mages jumping between apparent universes that should be ontologically unequal).

Dreams aside, I feel like the idea of quantum immortality is unfortunately all tangled up with the idea of observer effect. As per QI, you aren't immortal across the board – you die, and soon, in the vast majority of timelines observed by any other consciousness, just like all humans who have died before our time. You are, right now, in a timeline you observe (though as noted above, only probably) – and presumably you aren't yet dying any more than any other person who's exposed to normal risks and aging. The idea is that you do indeed die in those scenarios where you eat an explosion, develop malignant tumors, are lying in a dump bleeding out all alone with no chance of survival, or are 80 years old in 1839; but those are counterfactuals, not real timelines, and the you who doesn't die, the person typing those comments, doesn't get into them. If it looks to you as if you did, and QI is right – you being wrong is more likely than a miracle.

See edits.

Anyway, what I'm asserting is that the presence of bluster of this kind is not enough to disqualify the proposition as «uniformly just nonsense». Something like 25% of Uriah's conjectures may well prove correct. If I were to bet on it, I'd say that the brachycephalization-domestication thesis, the episodic memory vs. «stamp collecting» adaptations, and the bit about Oceanian quivering smell less like bullshit than the Japanese Omega-3 one, but all of it is within the realm of sane academic hypotheses, if not Overton-compliant ones.

Milk Lobe is... controversial in my mind.

Well Galton got further than 90% of modern social scientists (whose received wisdom still looks like this) with just that plus a bit of math. Directionally the same, it seems, happened for the whole Hajnal line discourse, Indian Aryan issue and other topics. We've been hearing a lot of cackling from the wannabe sophisticateds about stupid racist nazi chuds obsessed with foreheads and brow heights or using CaLiPeRS to reach conclusions about intelligence; but time and time again it seems like calipers work to an extent. So I think there are grounds for cautious optimism about this approach.

Pre-20th century, or perhaps more to the point, pre-Civil Rights guys were less technically informed but also less mindkilled and could reason freely on the basis of what is now unattainable purity of real experience; explicitly rechecking and refining their intuitions with modern tools could be a legitimate way to revitalize anthropology in the broadest sense.

That said, wilder schizoposter accs are merely riffing off the aesthetic of gentlemen scientists, if not scholars of the occult.

«Schizoposters» with more or less believable hot takes about human biodiversity and the essential qualities of «races» are prevalent on esoteric-right Twitter, chiefly in BAP's circles. The greatest among them all was Hakan Rotwrmt (RIP), and MIYA BLACK HEARTED CYBER ANGEL BABY was a close second; both were apparently collective accounts run by extraordinarily witty people.

See our resident spammer Carlsbad (?) burrowing into the BAP network.

I haven't read his most recent thread - but his output before that was uniformly just nonsense, and there wasn't any truth or value in it whatsoever, not even in an 'adjacent to truth' sense.

On what basis are you saying this? His inferences strain credulity, but he cites genuine data.

Why?

It makes sense for the definition to precede the concrete example, both on the scale of the paragraph and the whole post.

It's quoted verbatim from the Etymology section of that page, though.

I do not believe that you like Trump on account of economic indicators during his tenure, nor do you try to make that plausible.

Leaving probable nutritional deficiencies etc. aside: the next generation will have an IQ of like 120, and the third one, I'd bet, of 119.6. Regression to the mean has nothing to do with evolutionary pressures; it's just a matter of resetting beneficial non-hereditary effects (which we assume explain 30-50% of the deviation from the median phenotype in these particular specimens). The mean in question is not some abstract global mean but just the mean of the island population's genetic value for the trait. Cochran himself explained this well to Edge, in a kinder era:

The kids get the good additive genes, but have average "environmental" luck—so their average IQ is 110. The luck (10 pts worth) goes away.

The 120-IQ parents drawn from the IQ-85 population have 35 extra IQ points, half from good additive genes and half from good environmental luck. But in the next generation, the luck goes away… so they drop 17.5 points.

The next point is that the luck only goes away once. If you took those kids from the first group, with average IQs of 110, and dropped them on a friendly uninhabited island, they would eventually get around to mating—and the next generation would also have an IQ of 110. With tougher selection, say by kidnapping a year’s worth of National Merit Finalists, you could create a new ethny with far higher average intelligence than any existing. Eugenics is not only possible, it’s trivial.

(...Does he actually hope to do this?)
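For concreteness, Cochran's numbers are just the standard breeder's-equation expectation, offspring mean = population mean + h2 · (midparent - population mean). A toy sketch with his illustrative heritability of one half (the «half genes, half luck» split), not measured data:

```python
def expected_offspring_iq(midparent_iq, population_mean, h2=0.5):
    """Breeder's-equation expectation: only the heritable part of the parents'
    deviation from their source population's mean is passed on; the
    'environmental luck' part resets."""
    return population_mean + h2 * (midparent_iq - population_mean)

print(expected_offspring_iq(120, 100))  # 110.0 -> the 10 points of luck go away
print(expected_offspring_iq(120, 85))   # 102.5 -> the 17.5-point drop
# The luck only goes away once: a founding group of 110-IQ kids breeding among
# themselves is now its own population with mean ~110, so it stays there.
print(expected_offspring_iq(110, 110))  # 110.0
```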

In the long run, the trait may well be watered down, of course – unless they discover some fitness peak that normie island populations couldn't get to because of all the valleys; I think Scott had a short story on brilliant island eugenicists?

But this happens because of purifying selection.

And people tell me that Maurice Samuel is not 100% correct...

Oh come on, this is more American whining. Muh deaths of overdoses, muh Russian election meddling, little old us assaulted on all fronts, won't somebody please spare a thought for the poor hegemon.

The CHIPS act has been about pork and the usual fighting over the spoils from the beginning, its success or failure is of no consequence. China was summarily cut off from modern semiconductor manufacturing and falls behind, new fabs in safe allied countries are being completed, Taiwan is getting reinforced, and AGI seems to be on schedule within 5 years. Yes, could have been done better. But it has gone well enough that advancing petty political agendas took precedence. If there ever is any plausible risk of the US losing control over the global high-end manufacturing chain, I am sure you'll see it going differently.

I disagree with his method even more than with his take. On one level, his communication is wack: it is not clear whether he is making a claim about Navalny's own beliefs and/or policies and platform, or about Navalny's ultimate consequences, which he speculates were anticipated by some American minders from the deep state. (Thus, "What?")

On another, it is like that because he's not invested in making a worthwhile contribution and just verbalizes some vague "based Putin cringe national traitor opposition" sensibility, where the opposition leader must have something to do with the breaking down of our holy sovereign empire into ethnic republics. This is worse than nothing.

I still clicked upvote because I can't wrap my mind around Hlynka's Iliad, and this explanation is satisfactory.

Because you can't program virtue ethics into an AI. You need a utility function.

I am not even sure Yudkowsky would argue this. In any case this is not defensible unless you think that virtue ethics is in principle not computable.

You're fine in my book. And 'sophistication' has very little to do with what I take to be their failures in taste.

That said, sadly, you wouldn't have had much to worry about in any case; and I think the people most likely to ascend first have next to no taste.

I think your problem is typical for Indians (and most other non-WEIRDs and non-Japanese, to be fair, including my people… but it's worse in Indians): you have no taste, not even the notion of "taste"; to you it's probably an arbitrary set of markers of one's social milieu rather than some relatively lawful intuition. So you settle for mediocre half-baked ideas easily, as long as they seem "cool" or "practical", and – physics of consciousness being currently impractical – coolness is a much simpler function than tastefulness. I am not sure how or why this works. Maybe @2rafa can explain better; maybe she'll opine that I'm wrong and it is in fact purely about social markers. (Also interested in the input of @Southkraut and @ArjinFerman.) In any case, it's exasperating to debate on such uncertain grounds without recourse to "this is just ugly" when it patently is.

I've proposed a reasonably robust criterion for determining that, at least to my satisfaction. You blackbox both of us, and assess response to a wide variety of relevant stimuli. If the variability between us is within acceptable parameters, such as being less than the variability seen in the biological me after a nap or when I took the test 2 years ago, then that system is close enough to count as including a copy of "me".

Oh yeah? So which is it, a nap or a 2-year time span? Are you sure you can, really, practically can, define a rubric such that no other person I find comes closer to the first data point in the latter case? Sure you can do this without including password-recovery-tier questions, the answers to which are entirely value-free, RNG-produced token sequences, in no way corresponding to actually unique specifics of your inner conscious computation?

It's only reasonably robust from the viewpoint of a time-constrained clerk – or an archetypal redditor. As stated, I claim that you might well fail this test under realistic and legitimate conditions of dropping cheat items; and then, if I decide, in this contrived scenario, that the non-self-made-human is to be sent to the garbage compressor, you will very loudly (and rightfully) complain, not showing any "satisfaction" whatsoever. The only reason you propose it is your confidence that this does not matter in actuality – which it admittedly does not. And in any case, you do not need to optimize for a le scientific, robust, replicable, third-person-convincing etc. identity test. Rather, you need to think about what it is you are trying to achieve by clinging to the idea that a cluster of behavioral correlates an observer can identify will carry on your mind – just gotta make it dense enough that in practice you won't be confused for another naturally occurring person.
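To make the objection concrete: taken literally, the proposed criterion is something like the toy sketch below (every name in it is hypothetical), and everything hinges on which baseline snapshots you admit and on how much of your «inner computation» the embedding actually captures.

```python
import numpy as np

def mean_gap(answers_a, answers_b, embed):
    """Average embedding distance between two sets of answers to the same stimuli."""
    return float(np.mean([np.linalg.norm(embed(a) - embed(b))
                          for a, b in zip(answers_a, answers_b)]))

def passes_identity_test(candidate_answers, original_answers, baseline_answer_sets, embed):
    """The candidate counts as 'the same mind' if its gap from the original today
    is no larger than the largest gap between the original's own past snapshots
    (after a nap, two years ago, ...) and the original today."""
    allowed = max(mean_gap(original_answers, past, embed) for past in baseline_answer_sets)
    return mean_gap(candidate_answers, original_answers, embed) <= allowed
```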

certainly I haven't seen any particular reason to assume a difference in internal qualia because of a difference in substrate, as long as the algorithms deriving it are interchangeable in terms of inputs and outputs.

Fair enough.

But I would still prefer my original parents or kin, and attempt to convey my conundrum to them, likely by divulging privileged information only known to the original me.

I'll trust you on this even though I strongly suspect this would depend on the intensity of original memories vs. the recovered set.