FeepingCreature

0 followers   follows 0 users   joined 2022 September 05 00:42:25 UTC

No bio...

User ID: 311

Verified Email

Out of interest, do you think that a Mars base is sci-fi? It's been discussed in science fiction for a long time.

I think any predictions about the future that assume new technology are "science fiction" pretty much by definition of the genre, and will resemble it for the same reason: it's the same occupation. Sci-fi that isn't just space opera, i.e. "fantasy in space", is inherently just prognostication with plot. Note stuff like Star Trek predicting mobile phones, or Snow Crash predicting Google Earth: "if you could do it, you would; we just can't yet."

"At this rate of growth, the entire lake will be this algae in a few days". "Ludicrous silliness!"

The point is we don't have a clue where the sigmoid will level off, and there doesn't seem to be a strong reason to think it'll level at the human norm, considering how different AI as a technology is from brains. To be clear, I can see reasons why it might level below the human norm: lithography is a very different technology from brains, and it sure does look like the easily Moore-reachable performance for a desktop or even datacenter deployment will sigmoid out well below human brain scale. But note that this explanation has nothing to do with human brains as a reference point. If things go a bit differently, and Moore's law keeps grinding for a few more turns, or we find some way to sidestep the limits of lithography, like a much cheaper fabrication process leading to very different kinds of deployment, or OpenAI goes all in on a dedicated megatraining run with a new continuous-learning approach that happens to work on the first, second or third try (their deployed capacity is already around a human brain), then there's nothing stopping it from capping out well above human level.
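To make the "we can't see the cap from here" point concrete, here's a minimal sketch (assuming Python with NumPy; every parameter is an illustrative assumption, not data): two logistic curves whose ceilings are a thousandfold apart can be nearly indistinguishable over their early, exponential-looking phase, so early growth data alone doesn't pin down where the sigmoid levels.

```python
import numpy as np

def logistic(t, cap, rate=1.0, midpoint=10.0):
    """Logistic (sigmoid) growth curve that levels off at `cap`."""
    return cap / (1.0 + np.exp(-rate * (t - midpoint)))

t_early = np.linspace(0.0, 3.0, 50)  # we only get to observe the early phase

# Same early trajectory, ceilings 1000x apart (midpoint shifted to compensate):
low = logistic(t_early, cap=1.0)
high = logistic(t_early, cap=1000.0, midpoint=10.0 + np.log(1000.0))

max_rel_diff = np.max(np.abs(high - low) / low)
print(f"max relative difference over the observed window: {max_rel_diff:.2%}")
# -> well under 0.1%: the early data can't tell a cap of 1 from a cap of 1000.
```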

I genuinely don't understand how you can say it's plausible for it to happen at all, but sci-fi nonsense to think it likely. By and large, probability is in the mind, and "sci-fi" is usually a claim about what reality is like, not about how confident someone is. It'd be like saying "It's possible that it happens soon, but it's raving sci-fi nonsense for you to be worried about it."

There was never a need for a "casus belli"; Trump can just do things. He can just ignore the Supreme Court. What's supposed to keep the country working is not "presidents don't assume dictatorial powers" but "the other organs stop him". This will simply be an opportunity to discover whether that works.

That's a good metric!

But it does still provide a non-tariff trade barrier (i.e. is a protectionist policy) against potentially more competitive imports.

I mean, I just feel like... every regulation, including fraud and food safety rules, is a protectionist policy against potentially more competitive imports, isn't it? If the imported orange juice or whatever has too much arsenic, and the customer wouldn't usually notice this immediately and correct course, then the restriction on selling the orange juice is, from a pure market perspective, a trade barrier against competition. I think there's a line somewhere, and arguably "champagne from Australia" is across it, where you have to say "no, fraud is not legitimate competition actually."

I think the real answer here is "do not under any circumstances allow a party to win all three organs of government."

Rationalist here. Timeless decision theory was never explicitly designed for humans to use; it was always about "if we want to have AIs work properly, we'll need to somehow make them understand how to make decisions - which means we need to understand what's the mathematically correct way to make decisions. Hm, all the existing theories have rather glaring flaws and counterexamples that nobody seems to talk about."

That's why all the associated research stuff is about things like tiling, where AIs create successor AIs.

Of course, nowadays we teach AIs how to make decisions by plain reinforcement learning and prosaic reasoning, so this has all become rather pointless.

As I understand it, there may be genuine self-induced brain damage in play, so probably no.

Yeah, I gave him a lot of credit, but the evidence that he got brainfried by the online right is rapidly mounting.

Personally speaking I don't think about it because I believe AI kills us first.

To me it's not a matter of category but scale. And micro-secessionism does not affect the rest of the country, whereas fucking with the election does.

It had been previously established that it was entirely acceptable for mobs to declare themselves sovereign from local, state and federal law enforcement, and to enforce this claim by burning police stations and courthouses, denying access to the actual police, arming themselves with rifles and shooting people in the street.

I think there's still a big difference in kind between what is effectively micro-secessionism and fucking with the election. One is an attack on one area of one city; the other is an attack on the entire country.

The normal way you beat a network effect is by providing an alternative that is sufficiently compelling to a subset of high-value users that they jump first and bring the rest of the network with them later.

You know what? Make a good fandom wiki! The current sites are a confused, ad-riddled mess, getting worse all the time, and Wikipedia has explicitly kicked them off. They're ripe for the picking.

Giving him 22 years for seditious conspiracy would make sense were he, say, a National Guard colonel whose troops arrested the entire Senate and occupied the building for days.

Okay, I honestly agree with the rest of the comment, but if there's anything that a state should have a death sentence for, surely it's that. Like, at that point it's not even a question of law and order but of naked self-preservation.

I guess the argument would be that stopping at life imprisonment preserves an incentive against actually killing the Senate? Hard to say how that stand-off factor plays out. This is not exactly a common occurrence, so maybe going all the way to maximal deterrence is fine actually.

Nonetheless, even when you think somebody's position implies something, you shouldn't say that they believe that thing. I think that's in the rules? And humans don't really work that way to begin with.

It's not a tell. It's so common in, say, German, that even German Wikipedia does it: "The Ukraine is a state..."

Personally, I mostly read forum quests and fanfic on my phone. I don't think there's anything special about books in particular.

It's so weird to me, because it's like a minimum coup. Not even a minimum viable coup, because it clearly isn't viable. It's not doing your enemy a small injury; it's like slapping your enemy in the face with the flat of your sword, then running away. Are you trying to start shit or not? It's like they themselves didn't know if they wanted to start shit or not. Like a child's drawing of a coup: all the parts are there, the march, the violence, the fraudulent scheme, but they're executed with zero skill or coherence, basically at random. I think that's why it causes so much division. It's like your neighboring country rolls a tank over the border, but it's made of cardboard, plops out one sad shell and falls apart. Now you don't even know if you're supposed to be at war.

It's a coup done by a person who just doesn't know how to do one. So do you let it count?

The only thing I ever saw Apple commenting on is the first one. Did they say it's technically infeasible to build a surveillable phone at all? 'Cause that's dumb.

The disagreement is: having built a phone that works this way, it is now technically infeasible for them to search it. It was not technically infeasible to build the phone another way, but Apple never claimed it was. After all, they did this deliberately, as a sales pitch.

Was it Scott Alexander who back in the day wrote an essay about how liberal values are optimized for times of peace and abundance and conservative values are optimized for a zombie apocalypse scenario?

Yes.

I just had this comment in the "The Motte needs your help" report queue. Obviously it's in the wrong spot, but I can't flag it as "this needs a moderator to move it, maybe" because the report queue doesn't show context, and on its own this is a perfectly normal comment. Bit of a weakness, idk.

I mean, that kind of sounds like you're saying it's provably not a 1:1 simulation of a human brain.

What you're describing is measurable evidence of new physics. Every physicist in the world would want to buy you a beer.

I'd say do at least 3.5 Sonnet and whichever model of o1 is out by then. Sonnet is the best "classical" code LLM (imo!), though you may have to prompt it pretty hard to get it to try a one-shot. But o1 is designed for one-shots and is the only one that may be a paradigm shift in AI design. It's been worse than Sonnet at some tasks, but this may play to its strengths. Also, if adding a Python interpreter, implore the models to add timeouts. :)
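(On the harness side, a minimal sketch of the timeout idea, assuming a plain Python benchmark runner; the file name and limit below are illustrative, not from any specific setup. The point is: don't rely on the model remembering to add its own timeout, enforce one from outside.)

```python
# Minimal sketch: run model-generated code in a subprocess with a hard
# timeout, so a model that forgets its own timeout can't hang the whole
# benchmark run. "model_output.py" and the 30s limit are illustrative.
import subprocess
import sys

def run_generated_code(path: str, timeout_s: float = 30.0) -> str:
    """Execute a generated Python script, killing it after timeout_s seconds."""
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
        return result.stdout
    except subprocess.TimeoutExpired:
        return "TIMED OUT"

print(run_generated_code("model_output.py"))
```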