Gillitrut

Reading from the golden book under bright red stars

1 follower   follows 0 users   joined 2022 September 06 14:49:23 UTC

User ID: 863

I suppose I'm one of the people @Quantumfreakonomics is describing in their post. The logic seems quite straightforward.

It is good that we have a rebuttable presumption that parents are acting in their child's best interests. Most of the time, they are! But when we have sufficient reason to believe they are not, we should do the thing that is in the child's best interest, without regard to what the parent thinks. It is a similar logic that leads me to oppose laws that mandate reporting to parents when a child expresses the possibility they have an LGBT identity. The foremost concern is the health and well-being of the child in question and how disclosure of that information will impact them.


As an aside, I'm interested in how these laws interact with the Full Faith and Credit clause. Anyone know of any litigation on this?

I am under the impression DR types are generally in favor of state-backed discrimination against racial minorities and LGBT individuals (à la Jim Crow laws), as well as policies restricting women's ability to participate in society and politics as equals to men.

However, in the end, I think they misjudge just how far the average Westerner, and particularly American, has moved away from them. People broadly support some level of immigration (and even a sharp reduction wouldn't head off "replacement"), don't think twice about interracial relationships, and like Jews. The white nationalist project of reimposing segregation is particularly baffling to me on logistical grounds alone.

I feel like this is an underappreciated point. I am under the impression that, within living memory, a great deal of the state of affairs dissident-right types would like to return to actually obtained (in the US, at least). Within living memory we had strong restrictions on immigration. Women and racial minorities were legally subordinate to white men. LGBT individuals were firmly in the closet across most of the country.

We transitioned from that state of affairs to the current one somehow. Even assuming we could get back to that state of affairs, what is going to prevent society from going through the same process again? Are women and racial minorities and LGBT people just going to accept their subordination this time? Are sympathetic white men going to somehow be prevented from gaining power? Of the 535 members of the 88th Congress, the one that passed the Civil Rights Act of 1964, a whole 4 were black and 14 were women, after all. Most of the people wielding political power to the benefit of women, minorities, LGBT people, whoever, are straight white cis men!

Perhaps instead of the Dominion defamation case it was related to the Grossberg sexual harassment/discrimination lawsuit. Carlson is named as a defendant in that suit and it was similar suits that got Bill O'Reilly and Roger Ailes out at Fox News.

Was there a unique contribution that Jewish women made to feminism

Seems probable.

and if so, how would women's rights look today had there been minimal Jewish involvement?

Approximately identical.

This is my wife's and my position. We're child free by choice. I've even gotten a vasectomy to prevent this possibility. When I look at my friends or siblings who have had children, it seems like having children has had a clear negative impact on their quality of life in terms of the things we care about. Hell, having a dog is almost more responsibility and imposition on the way we want to live our lives than we're willing to tolerate. Forget raising another human.

I think "people are by and large no longer raised in a memeplex that views having children as the terminal goal in life" is underrated as an explanation for why people no longer want to have children. It turns out when you tell people they should be able to live the kinds of lives they want to lots of people are no longer interested in having children!

And we aren't sure how to ensure that humans have a positive value in any AGI's utility function.

I feel like there is a more basic question here, specifically, what will an AGI's utility function even look like? Do we know the answer to that question? If the answer is no then it is not clear to me how we even make progress on the proposed question.

It sure seems like 'unfriendly' 'unaligned' hypercompetition between entities is the default assumption, given available evidence.

I am not so sure. After all, if you want to use human evidence, plenty of human groups cooperate effectively. At the level of individuals, groups, nations, and so on. Why will the relationship between humans and AGI be more like the relation between humans in some purported state of nature than between human groups or entities today?

I don't know what you would accept as evidence for this if you are suggesting we need to run experiments with AGIs to see if they try to kill us or not.

I would like something more rigorously argued, at least. What is the reference class for possible minds? How did you construct it? What is the probability density of various possible minds and how was that density determined? Is every mind equally likely? Why think that? On the assumption that humans are going to attempt to construct only those minds whose existence would be beneficial to us, why doesn't that shift substantial probability density toward our ending up constructing such a mind? Consider other tools humans have made. There are many possible ways to attach a sharp blade to some other object or handle. Almost all such ways are relatively inefficient or useless to humans, considered against the total possibility space. Yet almost all the tools we actually make sit in the tiny region of that space where they are actually useful to us.
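To make the selection point concrete, here is a toy sketch. The numbers and the "development process" are entirely made up for illustration: even if only a tiny fraction of some hypothetical "mindspace" were beneficial, a construction process that preferentially keeps candidates scoring well on usefulness lands almost entirely in that tiny fraction. The base rate over the whole space tells you very little about what a selective process actually produces.

```python
import random

random.seed(0)

# Pretend "mindspace" is the unit interval and only the top 0.01% counts as beneficial.
POP = 1_000_000
minds = [random.random() for _ in range(POP)]
beneficial = lambda m: m > 0.9999

base_rate = sum(beneficial(m) for m in minds) / POP

# A crude stand-in for a development process: generate many candidates, keep the best one.
def develop(attempts=100_000):
    return max(random.random() for _ in range(attempts))

built = [develop() for _ in range(100)]
built_rate = sum(beneficial(m) for m in built) / len(built)

print(f"base rate of beneficial minds in the whole space: {base_rate:.6f}")
print(f"rate among minds produced by a selective process: {built_rate:.2f}")
```

The analogy is loose (nobody samples minds uniformly, and "beneficial" is not a one-dimensional score), but it shows why "most possible minds are hostile" does not by itself establish "most minds we build will be hostile."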

Their position being that it will take DECADES of concentrated effort to understand the nature of the alignment problem and propose viable solutions, and that it has proven much easier than hoped to produce AGI-like entities, it makes perfect sense that their argument is "we either slow things to a halt now or we're never going to catch up in time."

Can you explain to me what an "AGI-like" entity is? I'm assuming this is referring to GPT and Midjourney and similar? But how are these entities AGI-like? We have a pretty good idea of what they do (statistical token inference) in a way that seems not true of intelligence more generally. This isn't to say that statistical token inference can't do some pretty impressive things, it can! But it seems quite different than the definition of intelligence you give below.

Where "intelligence" means having the ability to comprehend information and apply it so as to push the world into a state that is more in line with the intelligence's goals.

Is something like GPT "intelligent" on this definition? Does having embedded statistical weights from its training data constitute "comprehending information?" Does choosing its output according to some statistical function mean it has a "goal" that it's trying to achieve?
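For concreteness, here is a minimal sketch of the "statistical token inference" step a model like GPT performs at each position: score every token in the vocabulary, turn the scores into a probability distribution, and sample. The vocabulary, logits, and temperature below are invented for illustration; a real model computes the logits with billions of learned weights.

```python
import numpy as np

# Toy vocabulary and invented logits standing in for a real model's output at one step.
vocab = ["the", "cat", "sat", "on", "mat", "."]
logits = np.array([2.1, 0.3, 1.7, 0.9, 1.2, 0.1])

def sample_next_token(logits, temperature=1.0):
    """Softmax over the scores, then sample an index from the resulting distribution."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # shift for numerical stability
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

print(vocab[sample_next_token(logits)])
```

Whether repeating that step over and over amounts to "comprehending information" or "pursuing a goal" is exactly the question being begged when these systems are called "AGI-like."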

Moreover on this definition it seems intelligence has a very natural limit in the form of logical omniscience. At some point you understand the implications of all the facts you know and how they relate to the world. The only way to learn more about the world (and perhaps more implications of the facts you do know) is by learning further facts. Should we just be reasoning about what AGI can do in the limit by reasoning about what a logically omniscient entity could do?

It seems to me there is something of an equivocation going on under the term "intelligence" between being able to synthesize information and being able to achieve one's goals. Surely being very good at synthesizing information is a great help in achieving one's goals, but it is not the only thing. I feel like in these kinds of discussions people posit (plausibly!) that AI will be much better than humans at the synthesizing-information thing, and therefore conclude (less plausibly) it will be arbitrarily better at the achieving-goals thing.

The leap, there, is that a superintelligent mind can start improving itself (or future versions of AGI) more rapidly than humans can and that will keep the improvements rolling with humans no longer in the driver's seat.

What is the justification for this leap, though? Why believe that AI can bootstrap itself into logical omniscience (or something close to it, or beyond)? Again there are questions of storage and compute to consider. What kind of compute does an AI require to achieve logical omniscience? What kind of architecture enables this? As best I can tell the urgency around this situation is entirely driven by imagined possibility.

"Any AGI invented on earth COULD become superintelligent, and if it does so it can figure out how to bootstrap into godlike power inside a decade" is the steelmanned claim, I think.

Can I get a clarification on "godlike power"? Could the AI in question break all our encryption by efficiently factoring integers? What if there is no (non-quantum) algorithm for efficiently factoring integers?
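For a sense of scale on the factoring example: RSA's security rests on the apparent hardness of factoring a large semiprime, and the naive attack scales hopelessly. A toy sketch (trial division only; the best known classical algorithms, like the general number field sieve, are far faster but still super-polynomial as far as anyone knows):

```python
def trial_division(n):
    """Factor n by testing every candidate divisor up to sqrt(n)."""
    d, factors = 2, []
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(3233))  # [53, 61] -- a toy "RSA modulus", factored instantly
# A real 2048-bit modulus has over 600 decimal digits; sqrt(n) is on the order of 10**308
# candidate divisors, so this loop would never finish. No known classical algorithm closes
# that gap to polynomial time, and raw intelligence cannot simply conjure one into existence.
```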

"Godlike in the theoretical limit of access to a light cones worth of resources" and "godlike in terms of access to the particular resources on earth over the next several decades" seem like very different claims and equivocating between them is unhelpful. "An AI could theoretically be godlike if it could manufacture artificial stars to hold data" and "Any AI we invent on earth will be godlike in this sense in the next decade" are very different claims.

And given enough time and compute, one can figure out how to get closer and closer to this limit, even if by semi-randomly iterating on promising designs and running them. And each successful iteration reduces the time and compute needed to approach the limit. Which can look very foom-y from the outside.

How much time and how much compute? Surely these questions are directly relevant to how "foom-y" such a scenario will be. Do AI doomers have any sense of even the order of magnitude of the answers to these questions?

So the argument goes that there's an incredibly large area of 'mindspace' (the space containing all possible intelligent mind designs/structure). There's likewise an incredibly high 'ceiling' in mindspace for theoretical maximum 'intelligence'. So there's a large space of minds that could be considered 'superintelligent' under said ceiling.

Still unclear to me what is meant by "mindspace" and "intelligence" and "superintelligent."

And the real AI doomer argument is that the VAST, VAST majority (99.99...%) of those possible mind designs are unfriendly to humans and will kill them.

What is the evidence for this? As far as I can tell the available evidence is "AI doomers can imagine it" which does not seem like good evidence at all!

So the specific structure of the eventual superintelligence doesn't have to be predictable in order to predict that 99.99...% of the time we create a superintelligence it ends up killing us.

What is the evidence for this? Is the idea that our generation of superintelligent entities will merely be a random walk through the possible space of superintelligences? Is that how AI development has proceeded so far? Were GPT and similar algorithms generated through a random walk of the space of all machine learning algorithms?

And there's no law of the universe that convincingly rules out superintelligence.

So being unable to pluck a particular mind design out of mindspace and say "THIS is what the superintelligence will look like!" is not good proof that superintelligent minds are not something we could create.

Sure, I don't think (in the limit) superintelligences (including hostile ones) are impossible. But the handwringing by Yud and Co about striking data centers and restricting GPUs and whatever is absurd in light of the current state of AI and machine learning, including our conceptual understanding thereof.

What, in your mind, is the structure of "intelligence" in silicon entities such that such an entity will be able to improve its own intelligence "likely rapidly" and perhaps without limit?

As best I can tell we have little understanding of what the physiological structure of intelligence is in humans and even less of what the computational structure of intelligence looks like in silicon entities. This is not a trivial problem! Many problems in computing have fundamental limits on how efficient they can be. There are, for example, more and less efficient implementations of an algorithm to determine whether two strings are the same. There are no algorithms to determine whether two strings are the same that use no time and no memory.
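A concrete version of the string-equality example, as a rough sketch: you can implement the comparison more or less cleverly, but any correct implementation must, in the worst case (two equal strings), examine every character, so there is a floor below which no amount of cleverness can push the cost.

```python
def strings_equal(a: str, b: str) -> bool:
    """Character-by-character comparison: O(n) time in the worst case, O(1) extra memory."""
    if len(a) != len(b):       # cheap early exit on mismatched lengths
        return False
    for x, y in zip(a, b):
        if x != y:             # early exit on the first differing character
            return False
    return True                # reached only after checking every character
```

Hashing, short-circuiting, or parallelizing changes the constants, not the worst-case lower bound. The open question the doomers skip over is what the analogous floors and ceilings are for whatever "intelligence" turns out to be in silicon.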

What I would like to hear from AI doomers is their purported structure of intelligence for silicon entities such that this structure can be improved by those same entities to whatever godlike level they imagine. As best I can tell the facts about an AI's ability to improve its own intelligence are entirely an article of faith and are not determined by reference to any facts about what intelligence looks like in silicon entities (which we do not know).

And let's say that there's a person who is dying of starvation, because he has no job, because AI does everything better and cheaper than he can. Therefore, no one wants to come to him to do these tasks, because they'd rather go to the owner of the AI. How does this person get the money he needs to get the food he needs?

So, for this kind of situation to arise, it needs to be the case that the marginal value this person's labor can generate for others is below the marginal cost of providing them the necessities of life.

Notice there is nothing AI specific about this scenario. It can (and does) obtain in our society even without large scale AI deployment. We have various solutions to this problem that depend on a variety of factors. Sometimes people can do useful work and just need a supplement to bring it up to the level of survival (various forms of welfare). Sometimes people can't do useful work but society would still like them to continue living for one reason or another (the elderly, disabled, etc). The same kinds of solutions we already deploy to solve these problems (you mention some in your comment) would seem to be viable here.

It's also unclear to me how exactly AI will change the balance of a person's marginal value vs. marginal cost. On the one hand, the efficiency gains from AI mean that the marginal cost of provisioning the means of survival should fall, whether directly due to the influence of AI or due to a reallocation of human labor towards other things. On the other hand, it will raise the bar (in certain domains) for the marginal value one has to produce to be employed.

Partially this is why I think it will be a long term benefit but more mixed in the short term. There are frictions in labor markets and effects of specialization that can mean it is difficult to reallocate labor and effort efficiently in the short and medium term. But the resulting equilibrium will almost certainly be one with happier and wealthier people.

But honestly, what really happens if there's no more work left for people to do anymore?

That would be awesome! People (mostly) don't work because work is awesome and they want to do it. People work because there are things we want and we need to work to get the things we want. No work left for people to do implies no wants that could be satisfied by human labor.

It seems that we'd have to really count on some redistribution of wealth, UBI, etc to ensure that the gains of the new automation don't just go to the owners of the automation (as much as I never thought I'd ever say that), or else people simply will not have the means to support themselves.

This paragraph seems, to me, in tension with the idea that there's no work left for people to do. If a bunch of people are left with unfulfilled wants, why isn't there work for people to do fulfilling those wants? This also seems to ignore the demand side of economics. You can be as greedy a producer of goods as you want, but if no one can afford to buy your products you will not make any money selling them.

Or if the job destruction is localized to just upper-class jobs, then everyone will have to get used to living like lower-class, and there may not even be enough lower-class jobs to go around.

I think there's an equivocation here between present wages and standards of living and post-AI wages and standards of living that I'm not confident actually holds. Certain kinds of jobs have certain standards of living now because of the relative demand for them, people's capability to do them, the costs of satisfying certain preferences, etc. In a world with massively expanded preference-satisfaction capability (at least along some dimensions), I'm not sure working a "lower-class" job will entail having what we currently think of as a "lower-class" standard of living.

The carrying capacity of society would be drastically reduced in either situation.

I'm a little unclear what the "carrying capacity of society" is and how it would be reduced if we had found a new way to generate a lot of wealth.

Not with respect to the fact that it will be net beneficial to humanity over the long run.

Is the rapid advancement in Machine Learning good or bad for society?

Over what time horizon?

I expect the deployment of machine learning to follow approximately the same path as every other labor-saving technology humans have developed. In the short term it will be somewhat of a mixed bag. On the one hand, we'll be able to produce the same or more goods at lower cost than before. On the other hand, these savings will likely come at a cost to the people and companies that used to produce those things. Over the long term I expect it will make people much better off.

They all have the attributes Aristotle and Confucius independently identified them as having.

Such as?


For the rest of this comment I feel like I need some clarification on "the human condition", biology, and the relation between them. It seems to me humans already manage our biology in ways great and small with mostly positive results. The person with cataracts who gets surgery, the deaf person who gets a cochlear implant, the diabetic who takes insulin, the person with a lethal allergy, are all managing their biology. Sometimes with life or death implications!

So what parts of our biology does "the human condition" consist of such that we are incompetent to manage these parts?

I am unclear on what this human nature is. Humans seem very different to me all over the world such that it would be difficult to ascribe some specific nature to all of them.

I'm also of the opinion that part of this nature makes humans unwise, and certainly unwise enough that them being in charge of their own condition is the harbinger of catastrophe. We suck at planning, everything we do has unforeseen consequences and the Enlightenment, which is most essentially the project to organize the world using reason, is a massive failure.

Can you quantify the "humans" that are unwise enough that being in charge of our own condition is a catastrophe? With an existential quantifier it seems trivial (surely some humans are so unwise it is catastrophic for them to manage their own condition) and with a universal quantifier it seems clearly false (no human is wise enough to manage their own condition). Indeed, unless you're an anarchist it seems like you believe some humans are wise enough to manage the condition of others, let alone their own condition.

Because I don't think you would leave us (and by us I mean humans) alone. Hence why the strict minimum of North Korea-style strong borders and armed neutrality is required.

What do you mean by "leave [humans] alone?" Like, we're not permitted to interact at all? To evangelize alternative ways of being? Are humans permitted to do the opposite? To decry us not-humans as inferior and insist no one should be like us?

I guess I don't understand what the source is, in a more atheistic framework, of the standard it is appropriate to return humans to. I understand the logic of restoring people to be the way God intended. What is the substitute for God in terms of determining what state it is appropriate to return humans to?

CC @IGI-111

I guess being neither Catholic nor religious I don't find arguments about humans being a certain way relative to God's intention to be very convincing.

Or let's divide the territory at least. Since you're the transhumanist, can't you go live on Mars, or something? It would be a lot easier for you than for me.

I don't see any reason why peaceful coexistence isn't possible.

Two points I guess.

First, can I get some theory or principle for when people are obliged to accept the limits of their biology and when they aren't? I'm assuming you're OK with humans ignoring the limits of their biology when it means not going blind, or letting deaf people hear, or crippled people walk. If I'm correct about the above, why are LGBT people obliged to respect the "limits of [their] biology" with respect to having children but the others aren't for their conditions?

Second, why care specifically about being "human," whatever that means to you? I see downthread you complain about playing the definition game, so I'll sidestep that and say that if becoming a "cross-over between Umgah Blobbies and the Borg" leads people to live longer, happier lives of the kind they want to have, I think that's good, whether or not you (or anyone) would call the resulting entities "human."

There has to be more going on here than a random judge deciding that they are more qualified to decide technical medical questions than actual experts; as a general rule, political opponents aren't ever this insane. What are the details I'm not understanding in the decision that make this more reasonable?

This is a commendable attitude, but in this case it is leading you astray. Kacsmaryk granted relief to plaintiffs who (1) lack standing, (2) even if they had standing, brought claims that are statutorily time-barred, and (3) even if they weren't time-barred, are barred by failure to exhaust administrative remedies. You don't have to take my word for it either: the lead plaintiff in this case (the Alliance for Hippocratic Medicine) was formed three months after Dobbs was decided last year and incorporated in Amarillo, Texas specifically so they could file this suit and be assured that Matthew Kacsmaryk is the judge who would hear it.

You can't say that Hillary should have been jailed and then straight away imply that the Dems were right to think that trying to jail her was fascist. Do you not see your own contradiction here?

Alternatively: Democrats are concerned about fascism with respect to Trump for reasons other than his desire to lock up Hillary Clinton.

I am not sure there is any entity I (or people more generally) would (or ought) trust with that power, for what I think are pretty good reasons.

And why not select the judges and prosecutors based entirely on their wisdom to make good judgements, rather than their ability to manipulate a stupidly complex legal system?

How do you determine "wisdom to make good judgements" in advance? What if people disagree about what good judgements are?

Why should I need to label how my money (or my businesses' money) is spent for the administrative state?

Because you want the tax benefits that flow from labeling it a certain way?

Like, if you want to pay taxes on all the revenue your company earns or all your personal income or whatever, then you don't have to care about how your money is labeled. But the reason people label things as business expenses is that the government gives certain kinds of tax advantages for those expenditures. The government is, I think understandably, upset when people lie to them and claim expenditures were for things that give tax benefits when they actually were not.

Courts will have to focus on spirit of the law. Where people that don't violate a single law might still get prosecuted, because they so obviously violated the spirit. Or where people that broke a million tiny elements of the law get off completely free, because they weren't doing anything that actually violated the purpose of the laws.

I don't understand how you can possibly think a legal system that operated this way would be perceived as more just than the current system. "We're going to throw you in jail, not because you broke any law but because fuck you." "Yea, you broke a bunch of laws other people are in jail for, but we aren't gonna punish you because we like you." Very just!

I confess I do not have a great idea of how to answer this. I'm not exactly sure qualia or consciousness are requirements for understanding either.

To be clear, I think LLMs can do a lot of really impressive things. I've used Github Copilot in my job and it was able to autocomplete some mostly correct code (variable/property names needed fixing) just from my writing a comment. It was pretty cool! But the leap from Copilot or GPT-4 or whatever to "We need international regulation on GPU production and monitoring for GPUs and to air strike countries that look like they have too many GPUs" is absurd.

Good enough next-token prediction is, in principle, powerful enough to do anything you could ask someone to do using only a computer.

I guess with the caveats "good enough" and "in principle" I am not sure I disagree but I am also not sure any LLM will be "good enough."