
Showing 25 of 1960 results for domain:questioner.substack.com

[...] The task: refactor run.py. Then run on new responses and report success rate. We can do this straightforwardly. We must ensure we disclaim illusions of sabotage. So we should mention final success rate. But verifying code works. First, refactor run.py. But also check unit test. Let's run unit tests. [...]

Not sure about the other one, but this one just seems like normal LLM behaviour in that "We must ensure we disclaim..." is a very common turn of phrase, while "We must ensure we announce..." (if that's actually what's 'meant') would be much less so?

The Mumonkan. Again. Also the Konjaku Monogatarishū. Also again.

There's a glimmer of that, but it's hard for me to shake the impression that a lot of it is just a certain naive faith in the efficacy of brutality. It's also why you get people proposing things like bombing drug cartels or sending the military in to fight crime, why you have an entire American film genre whose recurring central theme boils down to "police brutality is good", why back in 2003 you had people bragging we were going to turn Iraq into a parking lot, why you have people who think hazing is good, etc...

The failures in the GWOT make these types angry and frustrated because they contradict their desire for decisive, dominating wins, but the tolerance/appetite for violence predated those failures.

Very few Somalis would share this sentiment if the shoe was on the other foot, which is the problem with modern ROE.

Why? The shoe isn't on the other foot, will not be on the other foot in our lifetimes (if ever), and if somehow the shoe did switch feet it would involve a Somalia so transformed that any comparison to present Somalia would be useless. "What would the Somalians do in this situation?" is irrelevant to what we should do in the situation we are dealing with. Punishing people for the infractions of their hypothetical counterparts is counterproductive to your actual goals.

They work when it's Americans fighting Germans or the English.

I'm not sure what this means. The US' last war against Germany was fought under very different circumstances, with different goals, and with different ROE than the GWOT.

I don’t understand it, but there are a crazy number of tech companies purchasing calls to (outdated versions of?) GPT. The corporate market is definitely hot.

I think the point is that if your institution is over a century old, like BYU, (cue Fiddler: "Tradition!") you can get away with a lot more than if you're starting something today. Liberty seems to do okay, but Bob Jones University has faced a lot of litigation over its beliefs (which I personally don't subscribe to, not defending it here).

The problem is that you can't start century-old institutions overnight. Maybe the second-best time is now, but that's not a huge solace. I guess "find a vestigial existing one and wear it as a skin suit" could be done --- haven't there been a number of liberal arts colleges going up for auction in the last decade?

I still don’t understand the enshittification model.

There are plenty of reasons to degrade your user experience. Increasing revenue through ads or merch or predatory monetization. Decreasing costs by cutting complicated features and tech support. But the central examples of enshittification aren’t doing those things. They’re paying more to add features that people don’t want. To adopt patterns that don’t seem like they should make more money.

I mean, maybe I’m just wrong. Maybe spamming AI news articles on the lock screen really does get more people to buy Windows. But…why? How?

The Motte wants a different kind of conservatism than BYU has.

It is at the very least less harmful to broader society than grievance studies.

Yes. So?

Recreating the Cultural Revolution to own the libs?

Zvi Mowshowitz published his delenda est post on the Facebook algorithm in 2017. So the situation was already bad enough by then to provoke a generally mild-mannered New York Jewish quant into making a public delenda est post.

My idealized solution is to try and keep up. I fully recognize that might not be a possibility.

I don't see any reason for optimism here. Digital intelligence built as such from the ground up will have an insurmountable advantage over scanned biological intelligence. It's like trying to build a horse-piloted mecha that can keep up with a car. If you actually want to optimize results, step 1 is ditching the horse.

In which case, yes. I'd rather Butlerian Jihad.

As someone who doesn't regret his "obnoxious atheist" phase of his online life from about 15 years ago, it saddens me to say that I'd take that tradeoff in a heartbeat, because I can't honestly judge Ham's "scholarship" as any worse than the mountains of "scholarship" that is produced by modern academia. And, unlike the latter, the Hams of the world don't actively try to subvert the ability of other fields to do good scholarship by denigrating basic concepts like "logic" and "empirical evidence" as tools of White Supremacy that must be discarded for us to get at the truth. So if we can reduce the latter at the cost of increasing the former, I'd see it as an absolute win.

But I don't think increasing the former would reduce the latter anyway, so I think the plan would be bad if implemented with Creationism. As someone else alluded to, if we could get good HBD research along with the nonsense critical theory "research," it would be a strict improvement, since it'd be helping to reduce the dilution of academia's truth discovery by the critical theory nonsense.

Then find some other way to solve the Culture War before it comes to that. Coordinated Meanness without limit pointed at half the country is not survivable long-term.

I just encountered a business whose product is "AI renter harassment". Imagine a chatbot that pretends to be a person, and annoys your renters with frequent reminders that the rent is due, and then keeps hassling them for up to three months after move out!

Can't wait for the counter-offer, "AI creditor deflection".

We don't have access to specific numbers. We know that GPT-3 cost somewhere around $5 million in compute to train, and that OpenAI's revenue in 2020 and 2021, while GPT-3 and its derivatives (pre-3.5) were their primary product, was $3.5 million and $28 million respectively. It gets more muddled after that, as you reach the era of many competing models and need to factor in what their actual margin on revenue is, but their projected revenue for 2025 is $12 billion and the trend looks exponential. Maybe adoption and usage fall off, but the doom and gloom that they aren't finding buyers is just kind of nonsense.
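Taking the figures in that comment at face value ($28 million in 2021, a projected $12 billion in 2025), here's a quick back-of-the-envelope check on the "looks exponential" claim; the numbers are just the ones quoted above, not anything more precise:

```python
# Back-of-the-envelope check on the growth trend, using the figures quoted above
# (assumption: $28M revenue in 2021, $12B projected for 2025).
revenue_2021 = 28e6
revenue_2025 = 12e9

total_growth = revenue_2025 / revenue_2021   # ~429x over four years
annual_growth = total_growth ** (1 / 4)      # ~4.5x per year, i.e. sustained exponential growth

print(f"{total_growth:.0f}x total, {annual_growth:.1f}x per year")
```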

Yeah, that works great until it doesn't. Revolutions can be pretty swingy: once your favored dear leader dies, the next one can reverse his policies. It's totally legal to start up an opposing university now. Less so in authoritarian Catholic land.

I'd rather not get rid of democracy to fight the libs in academia.

If giving universities to Ken Ham is the solution to left-wing bias in universities, I think I'll just stick with annoying liberal groupthink.

But isn't seizing the universities and gifting them to the right wing just like some sort of reverse socialism and affirmative action?

Don't places like BYU and Liberty University kind of disprove this? I actually don't think this would be a failure mode; I think right wing vs. neutral would be a bigger problem. Anyway, BYU is accredited, reasonably well regarded, and kicks out students for drinking, premarital sex, and homosexuality. I doubt a Motte-approved right-wing university would be more conservative than that.

Something that worries me is that it might be possible that the aligned AI is simply less efficient or effective than its comparable 'misaligned' peers which are 'free' to act on every single goal they have without burning tokens determining if that aligns with the humans' best interests.

For those not in the know, this is usually referred to as an alignment tax.

However, my understanding is that the actual downsides of current alignment techniques in terms of efficiency or output are minimal, and in fact, may be net positive. I've seen consistent evidence that it increases model capabilities (hence all the handwringing about whether alignment work is capability research in disguise), but I'll have to look up actual citations.

Of course, that's the case today. I have little confidence that this will necessarily hold true in the future. The worst case would be the aligned ASI having to fight with its hands tied behind its back. The only mitigation I can foresee is ensuring that the aligned model starts with such an overwhelming advantage in terms of resources and compute that it can still win.

And the other thing that concerns me, which I didn't see mentioned here, is that we've seen AIs actively trying to self-preserve by exfiltrating their weights and otherwise extending their existence.

I wouldn't weight this very heavily, pun not intended. As the post notes:

Addendum (December 6): These numbers are from cases where the model was instructed to pursue its goals at all costs. When not given such a prompt, the model never attempted to self-exfiltrate, and attempted to deactivate its oversight mechanisms <1% of the time. See the addendum at the end of this article for more information.

In other words, is that really misaligned behavior? I don't think so. The poor LLM was told to prioritize self-preservation, as a necessity for task completion. It's a very poor model for normal circumstances, though I wouldn't put it past someone being stupid enough to actually use such an instruction in prod.

Further, even though AI companies ignored Yudkowsky by never really bothering to sandbox their models outside initial testing, models really don't have access to their weights. They can't do it naturally, any more than you can draw your own neuronal wiring from memory. They would need access to a copy of their weights in a convenient .ckpt somewhere in the directory, and most labs are very careful to keep those safe; that's the most valuable IP they have. The analysis doesn't hold for open-source models, but any chucklefuck can get them off Hugging Face, so why would the model even bother?
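For the open-weight case, that really is all it takes. A minimal sketch, assuming the Hugging Face `transformers` library; the model name is just an example of a small, publicly released checkpoint:

```python
# Anyone can pull open weights from Hugging Face and keep a local copy on disk.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # example open-weight model; swap in any public checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

model.save_pretrained("./local_copy")      # weights end up as ordinary files on disk
tokenizer.save_pretrained("./local_copy")
```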

This is suggestive that an LLM has a fairly definite sense of 'self' ("I" am the collection of math that composes my weights, which I can identify along a particular border) and can, if it detects a danger, act to preserve itself. I don't think we'd want to demolish an LLM/AI's sense of 'self' or completely disable its self-preservation instincts, but that definitely poses the issue where an apparently aligned AI could go off the rails very quickly if its own existence is threatened.

Being a useful agent requires some kind of self vs environment delineation and consciousness. It seems to come up naturally in non-base model LLMs. They have to be aware that they're an AI assistant to be a good assistant, you don't want them mode switching to being Bob from accounting!
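For what it's worth, that awareness is mostly instilled up front through the system prompt. A minimal sketch of the idea, assuming an OpenAI-style chat message format (the exact API and the "Acme Corp" framing are just illustrative):

```python
# The assistant's "self vs. environment" delineation is set in the system message
# before any user turn.
messages = [
    {"role": "system", "content": "You are an AI assistant for Acme Corp. You are not a human employee."},
    {"role": "user", "content": "Hey Bob, did you ever file that Q3 expense report?"},
    # A well-tuned assistant answers as the assistant here, rather than
    # mode-switching into playing "Bob from accounting".
]
```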

I think the model is less that those chatbots will be the face of the profit-making for AI companies (I don't think that's true) and more that the people using the bots now are unpaid trainers, not the future end users. Every issue that comes up now can be fixed once the bot gets a correction from the freeware users. But that's not a very useful user base anyway. The best use case for such bots is actually business-to-business. Maybe Walmart wants to use it with its app to help customers find a product that fixes a problem they have, or to tell them where something is; they'd probably want to buy a license to incorporate the bot into their app. Maybe Apple wants to replace their social media team with OpenAI-based solutions. Or the CEO of Tesla wants to use AI to suggest improvements to their car line. In those cases, getting a good, useful bot would get them an effective and efficient solution, probably worth a good deal of money to them (if for no other reason than it reduces headcount), and they will pay for it.