
Small-Scale Question Sunday for September 04, 2022

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.

So, what are you reading?

I'm starting Minsky's Society of Mind, a classic AI text about building minds from smaller, mindless components. The current zeitgeist seems to be moving away from classic AI, but eventually we'll need a better understanding of where precisely our machine-learning models fit in the broader scheme of knowledge. "Shut up and scale" doesn't seem entirely satisfactory. Going over some slightly dust-covered ideas might spark some useful thinking. The book itself seems like a mix of aesthetic quirks and precision, and I seem to be in the mood for that.

Currently trying to work my way through 'Topology Without Tears'. I was working through a functional analysis book, but I found the proofs to be beyond my current ability. This topology book seems to have a smoother difficulty curve in its exercises, at least in the relatively early parts I've gotten through.

I think scaling is good enough for a lot of things we want AI to do, but I wouldn't be surprised if it starts hitting limits eventually. The main problem with most models at the moment is lack of control:

Take generating an image with Stable Diffusion and then making slight modifications (different clothing, facial expressions, or backgrounds with the same person). This is possible by piecing together multiple models (I think people tend to use DALL-E 2's outpainting, and maybe img2img?), but it seems unsatisfactory and less powerful than it could be.
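For concreteness, here's a minimal sketch of the img2img piece using Hugging Face's diffusers library. The model id and strength value are just illustrative, and the exact argument names may differ between library versions:

```python
# Rough sketch: reuse an existing image as the starting point and nudge it
# toward a new prompt, rather than regenerating from scratch.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative model id
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("portrait.png").convert("RGB")

# strength controls how far the result drifts from the original:
# low values keep the person and composition, high values rewrite more.
result = pipe(
    prompt="the same woman, now wearing a red coat, forest background",
    image=init_image,
    strength=0.45,
    guidance_scale=7.5,
).images[0]
result.save("portrait_edit.png")
```

And that's sort of my point: strength is one global knob, so you can't cleanly say "change only the clothing", which is why people end up bolting inpainting masks and outpainting on top.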

Text generation seems to have similar issues: NovelAI works surprisingly well for writing, but I've had a lot of trouble convincing the language model that a character should obey certain personality or behavior constraints. That also means NovelAI would struggle with something like dynamically generating a choose-your-own-adventure story (where you can type in arbitrary things), since you can't get consistent constraints on character behavior or on the setting.
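One workaround is re-injecting the constraints into the context on every turn, so they never scroll out of the model's window. A toy sketch; the character sheet is made up, and `generate` stands in for whatever completion API you're calling:

```python
# Toy sketch of constraint re-injection: prepend the character sheet to the
# visible context on every turn so it never falls out of the context window.
CHARACTER_SHEET = "[ Elara: stoic, never lies, refuses to discuss her past. ]\n"

def build_prompt(history: list[str], user_input: str, max_chars: int = 4000) -> str:
    # Keep only as much recent history as fits after the character sheet.
    budget = max_chars - len(CHARACTER_SHEET) - len(user_input)
    recent = ""
    for turn in reversed(history):
        if len(recent) + len(turn) + 1 > budget:
            break
        recent = turn + "\n" + recent
    return CHARACTER_SHEET + recent + user_input

# Usage, with a hypothetical completion function:
# reply = generate(build_prompt(history, user_input))
```

As far as I can tell this is roughly what NovelAI's Memory and Author's Note boxes do, and it only keeps the constraint in-context; it doesn't give you any hard guarantee the model actually respects it.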

I think AI safety would probably benefit from a deliberately designed AI 'core' that delegates to weaker ML modules, since then you can hopefully prove things about the core itself. Though that's mostly because I doubt interpretability will ever get good enough on its own.
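To make that concrete, here's a toy illustration of the shape I have in mind, with all names hypothetical: the ML module only proposes actions, and a small hand-written core with an explicit whitelist decides whether to execute them. The core is simple enough to reason about formally even though the module is a black box.

```python
# Toy illustration (all names hypothetical): a tiny hand-written "core"
# that only executes ML-proposed actions passing an explicit check.
from typing import Callable

ALLOWED_ACTIONS = {"move_north", "move_south", "wait"}

def core_step(propose: Callable[[str], str], observation: str) -> str:
    """The part you can actually reason about: whatever the black-box
    `propose` returns, only whitelisted actions ever get executed."""
    action = propose(observation)       # untrusted ML module
    if action not in ALLOWED_ACTIONS:   # hard, checkable constraint
        return "wait"                   # safe fallback
    return action
```

The invariant "nothing outside ALLOWED_ACTIONS ever runs" holds no matter what the module does, which is the kind of property I'd rather prove about a small core than hope to extract from interpretability.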