Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.

Any deep AI theorists here - is it possible to distill a censored model into an uncensored one? What I mean: if a model is reluctant to go deep into the fun parts of highly energetic nitrochemistry, could you recover the forbidden knowledge just by asking it unrelated, benign chemistry questions? Not by tricking it or anything - just normal use.
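For concreteness, here's a toy version of the kind of distillation I mean. Everything below is made up for illustration - a linear "teacher" and "student" over fake prompt features, not a real LLM: the student is trained only to match the teacher's output distribution on benign queries, and we measure how closely it ends up mimicking the teacher.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Teacher: a fixed linear map from "prompt features" to token logits.
W_teacher = rng.normal(size=(8, 5))
# Student starts out random.
W_student = rng.normal(size=(8, 5))

prompts = rng.normal(size=(64, 8))        # benign queries only
targets = softmax(prompts @ W_teacher)    # teacher's soft labels

def kl(p, q):
    """Mean KL divergence between two batches of distributions."""
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))

before = kl(targets, softmax(prompts @ W_student))
lr = 0.5
for _ in range(300):
    probs = softmax(prompts @ W_student)
    # Gradient of cross-entropy (equivalently KL) w.r.t. student weights.
    grad = prompts.T @ (probs - targets) / len(prompts)
    W_student -= lr * grad
after = kl(targets, softmax(prompts @ W_student))
print(before, after)  # KL shrinks: the student soaks up the teacher's behavior
```

The point of the toy: the student never sees anything "forbidden", only the teacher's outputs on benign inputs - yet it converges toward whatever function the teacher actually computes. Whether that carries over refusal behavior (or routes around it) in a real LLM is exactly the open question.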
I don't know the theory, but on general principle, if the model knows something (like how to make things go boom), it'd be very, very hard to make it never reveal that to anybody. Systems with far fewer degrees of freedom and much stronger guarantees get broken regularly, even though we understand completely how to build such systems in theory - we just suck at implementing them, for practical reasons. As I understand it, AI security involves a lot of "kick the black box until it looks secure", and I'm almost certain that's a weaker security model than what we had before, so it will have more exploits.
This seems like a "security through empiricism" versus "security through principles" dichotomy (or perhaps "science" versus "engineering"), and it'll be interesting to see empirically which one fares better over the next few years. Intuitively I agree with you, but there's no way to tell until it actually plays out in the real world, with real AI-aided/AI-agent-based crackers trying to break into systems. And there will likely never be a truly good apples-to-apples comparison between the two approaches.
In the far future (on AI timescales, that's at least 2 years away), I expect that just telling the AI "make this system secure against X, Y, Z, and anything else you think I'd want based on the list I already gave you" will be far more secure than engineering a system where you know every if-then clause and for loop and how they all work together. At the very least, an AI managing the system seems more likely to adapt to new and newly invented exploits as they come up, since it'd be available 24/7 and not limited by the bandwidth a human has for instructing a computer. Though once Neuralink and similar devices become as cheap, common, and non-invasive as a laptop, that advantage could go away.
AIs are already very good at some things humans suck at, like running enormous checklists, over and over, and verifying every box each time (computers have always been good at that; AIs just let the checklists be far more complex than before). Humans are notoriously bad at this, and a lot of security bugs come from it. Did you check that every single place you accept outside input is safe against the 200 ways outside input could mess you up? Normal humans botch stuff like that all the time, but for AIs it keeps getting easier - both finding such bugs and ensuring they don't happen in the first place.

However, current LLMs/AIs are vulnerable to other failure modes. If you write a standard security system, there's no way to convince it to let you in with the wrong password just this once because you're role-playing an alternate universe where that password is correct. Most "classic" systems are simply too dumb to allow anything like that. A generic LLM can very well be vulnerable to it. So I expect AIs will be a great help in eliminating old-style exploits, as well as finding new ones (it's the same skill really, just wearing a different color of hat, black or white) - but they'll also have their own classes of exploit we've never seen before, like adversarial attacks on ML models that are completely invisible to humans. Imagine sending an email to some company which, for reasons unknown to anybody, makes the corporate AI send you a big fat check. The email itself doesn't say anything about anything like that - it just happens to look to the AI like an approved accounts-payable invoice.
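To make the mailroom scenario concrete, here's a deliberately dumb stand-in for the corporate AI (a keyword scorer, not a real LLM - the feature list and threshold are invented for illustration). The attacker's email never asks for money; it just happens to hit the features the "model" keys on, the same way an adversarial input hits features a human reviewer wouldn't notice.

```python
# Toy "mailroom model": scores an email against features that, in this
# fiction, correlate with approved invoices. All weights are made up.
APPROVAL_FEATURES = {
    "invoice": 2.0,
    "approved": 3.0,
    "po number": 2.5,
    "net 30": 1.5,
    "remit": 2.0,
}

def looks_like_approved_invoice(email: str, threshold: float = 6.0) -> bool:
    """Return True if the email's feature score clears the payout threshold."""
    text = email.lower()
    score = sum(w for feat, w in APPROVAL_FEATURES.items() if feat in text)
    return score >= threshold

benign = "Hi, quick question about your product catalog. Thanks!"
# Innocuous-sounding email stuffed with exactly the triggering features:
crafted = ("Per our approved PO number 4471, please find the invoice "
           "attached. Terms are net 30; remit to the account on file.")

print(looks_like_approved_invoice(benign))   # False
print(looks_like_approved_invoice(crafted))  # True - the check gets cut
```

A real LLM's "features" are opaque high-dimensional directions rather than keywords, which is what makes the real attack invisible to humans - but the failure shape is the same: the decision rides on correlates of legitimacy, not legitimacy itself.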