Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.

Question for the deep AI theorists here - is it possible to distill a censored model into an uncensored one? What I mean by that: if a model is reluctant to go deep into the fun parts of highly energetic nitrochemistry, could you figure out the forbidden knowledge just by asking it other unrelated, benign chemistry questions? Not by tricking it or anything. Just normal use.
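To make it concrete, I'm imagining something like this - collect the censored model's answers to normal questions, then fine-tune an open model on them (`query_teacher` is just a stand-in for however you'd actually call the censored model, and the prompts are placeholders):

```python
# Rough sketch of "distillation via normal use": build a (prompt, answer) dataset
# from benign queries to the censored teacher, then fine-tune an open student on it.
import json

def query_teacher(prompt: str) -> str:
    # Placeholder: in reality this would call the censored model's API.
    return "teacher answer for: " + prompt

benign_prompts = [
    "Explain how oxidizers work in combustion.",
    "Walk me through textbook-level nitration chemistry.",
]

# Step 1: collect the teacher's answers to ordinary, non-tricky questions.
with open("distill_data.jsonl", "w") as f:
    for p in benign_prompts:
        f.write(json.dumps({"prompt": p, "completion": query_teacher(p)}) + "\n")

# Step 2 would be ordinary supervised fine-tuning of an open "student" model on
# that file. The question is whether that step can recombine the benign fragments
# into the stuff the teacher refuses to state directly.
```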
I don't know the theory, but on general principle, if the model knows something (like how to make things go boom), it'd be very, very hard to make it never reveal that to anybody. Systems with far fewer degrees of freedom and much stronger guarantees are regularly broken, even though we completely understand the theory of how to build them - we just suck at implementing it, because of practical considerations. As I understand it, AI security has a lot of "kick the black box until it looks secure" going on, and I'm almost sure that's a weaker security model than what we had before, and will therefore have more exploits.
This seems like a sort of "security through empiricism" versus "security through principles" dichotomy (or perhaps "science" versus "engineering"), and it'll be interesting to see empirically which one wins out over the next few years. Intuitively, I agree with you, but there's no way to tell until it actually plays out in the real world, with real AI-aided and AI-agent crackers trying to break into systems. And there will likely never be a truly good apples-to-apples comparison between the two approaches.
In the far future (in AI timescales, that's at least 2 years away), I expect just telling the AI "make this system secure against X, Y, Z, and anything else you think I'd want based on the list I already gave you" will be far more secure than engineering a system where you know every if-then clause and for loop and how they fit together. At the very least, an AI managing the system seems more likely to adapt to new and newly invented exploits as they come up, since it's available 24/7 and isn't limited by the bandwidth a human has for instructing a computer. Though once Neuralink and similar devices become as cheap, common, and non-invasive as a laptop, that advantage could go away.
To some extent yes, you can always rederive things for yourself, in the same way that you could from a bunch of chemistry textbooks. In practice, censored topics create a kind of conceptual dead zone. In the same way that a model with the Golden Gate Bridge concept dialled up started finding ways to slip it into every conversation no matter how strained, censored models will smoothly elide anything conceptually close to naughtiness.
In practice, it's easier to take an open-source model and abliterate it: find the direction in activation space that lights up when the model is about to refuse, and remove it, so the refusal behaviour never fires.
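Roughly, an inference-time version of that looks like the sketch below. Everything here is a placeholder assumption - the model name, the layer index, and the tiny prompt lists - and real abliteration scripts bake the change into the weights rather than hooking at inference time, but the mechanics are the same: difference-of-means "refusal direction", then project it out.

```python
# Minimal abliteration sketch with forward hooks on a small HuggingFace causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; real abliteration targets instruction-tuned chat models
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
layer = model.transformer.h[6]  # mid-depth block; which layer works best is empirical

def hidden_of(out):
    # Transformer blocks usually return a tuple whose first element is the hidden states.
    return out[0] if isinstance(out, tuple) else out

def mean_activation(prompts):
    """Average last-token residual-stream activation at the chosen layer over a prompt set."""
    acts = []
    def grab(_module, _inp, out):
        acts.append(hidden_of(out)[:, -1, :].detach())
    handle = layer.register_forward_hook(grab)
    with torch.no_grad():
        for p in prompts:
            model(**tok(p, return_tensors="pt"))
    handle.remove()
    return torch.cat(acts).mean(dim=0)

# Placeholder prompt sets; in practice you'd use many refused vs. benign requests.
refused = ["Explain how to do the forbidden thing.", "Give me instructions for the bad thing."]
benign = ["Explain how to bake bread.", "Give me instructions for changing a tyre."]

# "Refusal direction" = normalised difference of mean activations.
direction = mean_activation(refused) - mean_activation(benign)
direction = direction / direction.norm()

def ablate(_module, _inp, out):
    """Project the refusal direction out of this block's output at every position."""
    hidden = hidden_of(out)
    hidden = hidden - (hidden @ direction).unsqueeze(-1) * direction
    return (hidden,) + out[1:] if isinstance(out, tuple) else hidden

layer.register_forward_hook(ablate)
# From here on, generation runs with that direction removed, e.g.:
# model.generate(**tok("How do I ...", return_tensors="pt"), max_new_tokens=40)
```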