@Glassnoser's banner

Glassnoser

0 followers   follows 0 users
joined 2022 October 30 03:04:38 UTC

User ID: 1765

No bio...

I've never seen that.

Right. The debate isn't really about our survival. Like you said, we'll all die unless AI saves us. The debate is really about our descendants: do we want human descendants or computer descendants? If the fear is that AI is going to kill us this century, then I get why people prefer the human descendants, but this preference makes less and less sense the farther out this showdown is likely to occur. (I think it's likely very far out.) Our biological descendants will only get more different from us, and our computer descendants could take a number of different forms, so it gets harder and harder to see why we should care what happens in a faraway, basically unpredictable future where nothing recognizably human exists anyway. And any argument that the AIs have to win based on selection has to recognize that the same selective forces act on humans too. They should converge on the same thing in the long run.

A lot of the effective altruist types seem to be saying we should all stay home instead of enjoying the party because there is a small chance the punch is poisoned. I'm willing to take that risk. Staying home sucks and the party looks way more fun.

Compared to an amoeba, I'm a god and so are my adversaries. Actually, I don't really have adversaries. I live in a pretty functional world with eight billion gods (relatively speaking) and I'm still here. They haven't killed me. What is qualitatively different about a world where our powers are scaled up by the amount AI will allow?

The pro-regulation argument depends on the highly unlikely belief that AI will soon reach a point where we cannot control it. Alignment, I strongly believe, is a complete non-issue; the problem is entirely about control. I think our experience with LLMs shows that alignment is actually pretty easy. The problem will not be AI that we can't get to understand exactly what we mean when we ask it to achieve some goal. The problem will be people deliberately designing AI to do bad things. Whether AI destroys us in the short to medium term will depend only on whether we can stop it. Only if AI makes destruction vastly easier than protection will it pose an existential risk.

In the long run, the risk is greater because destructive AI may gradually outcompete us. Natural selection might select for AI that does not value humans. However, this is likely to be extremely slow, because its speed will be a function not of how good the AI is but of how much selection there is at the civilizational level, and I think that is currently about zero and slowing down. Without war, it doesn't really exist.

The biggest risk is probably that we give the AI the vote and then it votes to exterminate us, but that still requires a long period of likely slow selection and a whole series of other unlikely things that need to go wrong.

I won't say the very long run risk is negligible; it may even be high. But really, the problem is we just can't predict the future that far out. We'll have lots of time to figure this out. There will be a long period where we have extremely advanced AI but are still in control. That will be the time to figure out what to do about it, and if we can stop AI from killing us now with smart regulation, we'll certainly be able to do so in the future.

The other thing those arguing for regulation don't understand is that regulation almost never works. The only thing it does reliably is grind innovation and progress to a halt. AI is one of the few areas of technology that is progressing, and that is in large part because of the lack of regulation. What regulation has been rushed out so far has only proven this more concretely, by banning many important uses of the technology and raising unnecessary barriers to entry. Very little of it is likely to reduce existential risk beyond generally stifling the technology.

I don't just say this because the real risk of AI almost certainly comes from it taking over another country which then invades us, but because even the scenario commonly envisioned by decelerationists is one where we cannot align it, and therefore requiring training runs to be approved by the government and standardized safety protocols to be followed has basically no chance of ensuring alignment.

The most likely medium-term existential risk I can see is that some kind of symbiosis occurs, resulting in an AI-industrial complex that takes over the government. Regulation is itself our greatest existential risk. The problem of government alignment is our greatest civilizational threat, not AI.

The actual focus of regulators has all along been, and will remain, fighting minor perceived social problems that they think AI will exacerbate: racism, involuntary nudity, defamation, misinformation, job loss, and every form of discrimination, justified or not. The purpose is to resist change, not to avoid catastrophe. But stopping the few good kinds of change in a sclerotic, degenerating civilization is setting up a catastrophe of its own. Putting the final nail in the coffin of technological progress means that the problems of stagnation, low fertility, dysgenics, environmental destruction, regulatory burden, and organizational rot will continue.

Why do people prefer real estate agents in their network? Is this rational? Why don't real estate agents compete on price instead?

It seems that most of the work that real estate agents do is finding clients. How does that work? If I want to sell my house, it's not hard to find a real estate agent. I can contact one in two minutes. For real estate agents to be spending so much effort finding clients, there has to be a large pool of people who want to sell their houses but for some reason don't have real estate agents yet. How can that be? What are real estate agents actually doing?

I've visited factories and they don't seem like places I'd want to work. They do the same tasks over and over. It seems like hell.

I don't buy the idea that there was a life script. At least in my own family, my impression is people used to move around and change careers a lot more. They got married earlier, but not necessarily right out of high school.

My maternal grandparents got married in the 50s. My grandmother had a few different jobs and lived in a few different places before getting married in her mid 20s. My grandfather joined the Navy after university. After the war, he tried a few different businesses which didn't work out. He married the secretary of someone he did business with and ultimately had to move from his small town to a big city in order to work as a surveyor.

My paternal grandfather became a reporter after graduating from law school and then moved with his brother from their small city to a bigger city. The brother had first worked as a farmhand on the other side of the country and later went to graduate school. My grandfather moved to another city, where he met his wife while covering an event she was attending. They then moved to another city, bought a newspaper, which he and his wife ran for a while, then started running a new insurance company, and then moved yet again, back to the big city.

The only family member I have who worked in anything like a factory was my great-grandfather, who was trained as a bookkeeper but worked in a meat-packing factory during the Depression. But this was a horrible job that was wrecking his body, so he became a farmer.

When I hear about their life stories, I see people trying out several different career paths, trying out several different places to live, trying to figure out who to marry, and, for many women, deciding whether to sacrifice their careers by getting married.

I don't think that's what the legislation refers to. It's broader than that.

There is nothing in the list above that I think shouldn't be allowed. Banning social scoring, in particular, is especially problematic.

I'm saying it would discourage transparency, because that would make it easier to do things which this law makes illegal. And no, if developers are made liable for things for which they should not be liable, it is better if they are able to hide what they're doing.

That would amount to a ban on general purpose AI.

The European Union has passed the Artificial Intelligence Act

The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.

...

Clear obligations are also foreseen for other high-risk AI systems (due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law). Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections). Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.

This is an extremely restrictive law that will really hold the EU back economically if AI becomes an important technology. It imposes huge burdens on all uses, for both the users and developers, and outright bans many very useful applications.

The law tries to mandate transparency while at the same time discouraging it by restricting or banning certain uses. An AI specifically made for social scoring, for example, would be illegal, while a general-purpose AI would almost certainly do something like social scoring internally as part of a more general ability. For example, if you have an AI run a company in its head, so to speak, how would anyone know what it is doing? How would you know how it is selecting job applicants? It would be a black box, and current attempts to figure out how large language models actually work would be the only way to find out what it is doing. But continuing that line of research would expose the developers and users of these systems to liability.

The fines are also enormous.

Fines for non-compliance can be up to 35 million Euros or 7% of worldwide annual turnover.

This would be devastating for a small or low-margin business. Many are just not going to use this extremely valuable technology. Lots of online services are just not going to be available in the EU. In fact, this is already the case with Gemini and Claude, probably because of privacy laws.
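
To put numbers on that, here is a minimal sketch of the worst-case exposure, assuming (as in the Act's penalty clause for the most serious violations) that the applicable cap is whichever of the two figures is higher; the function name is just illustrative:

    # Minimal sketch: worst-case AI Act fine, assuming the cap is the
    # higher of EUR 35 million or 7% of worldwide annual turnover.
    def max_fine_eur(annual_turnover_eur: float) -> float:
        return max(35_000_000.0, 0.07 * annual_turnover_eur)

    # A firm with EUR 10M turnover faces a cap of EUR 35M (3.5x its
    # entire revenue), while a giant with EUR 1B turnover faces EUR 70M.
    print(max_fine_eur(10_000_000))     # 35000000.0
    print(max_fine_eur(1_000_000_000))  # 70000000.0

Under that assumption, the flat floor is what makes this existential for a small firm: the 7% term only starts to bind above EUR 500 million in turnover, so everyone below that faces a cap that can exceed their entire annual revenue.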

I've long argued that the AI safety movement is unlikely to do anything for existential risk and will, if anything, increase it, while assuredly greatly limiting the benefits, and this is strong evidence that I'm right. The regulatory state does not have the capacity to deal with existential risk from AI, whereas it has a long history of stifling technological development.

Of course it is. They need the support of either the NDP or the Conservatives to pass any legislation. There is also legislation only they oppose, which they're powerless to stop.

This is already starting to happen on other issues. There was the famous link tax, where the Liberals passed a law requiring websites that link to news articles to pay for the privilege of doing so. Facebook just decided to block Canadian news articles from being posted.

Trudeau is not known for being very bright though. I mean this seriously. I know people who know him and they've said this. It's not just based on the many dumb things he's said and done publicly.

In addition, it empowers Human Rights Tribunals to investigate complaints by individuals against other individuals and levy fines of up to $20,000.

It should be emphasized that Human Rights Tribunals are not normal courts. For example, they don't have to follow any particular set of rules; it's up to the judges. The standard of proof is a balance of probabilities, not beyond a reasonable doubt. Defendants don't have the right to know who their accuser is. Anyone (not just the alleged victim) can file a complaint, get their legal fees paid for, and get a portion of the award if the defendant is found guilty.

See this article. Note that this article is about section 13 of the Human Rights Act, which was repealed in 2013, but which this bill would bring back. The best part of the short article is the sample of cases at the end.

One example:

In 1999, a Christian printer was fined $5,000 for refusing to print a series of pro-pedophilia essays. He spent $40,000 in legal fees trying to defend himself.

The Liberals don't have a majority.

What is the best evidence on prostate cancer screening? Should it be done? Is it worth getting a biopsy or treating prostate cancer?