Friday Fun Thread for March 31, 2023

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.

Yes, I'm really glad to see someone else point this out! One thing that's interesting about LLMs is that there's literally no way for them to pause and consider anything - they do the same calculations and output words at exactly the same rate no matter how easy or hard a question you ask them. If a human is shown a math puzzle on a flashcard and is forced to respond immediately, the human generally wouldn't do well either. I do like the idea of training these models to have some "private" thoughts (which the devs would still be able to see, but which wouldn't count as output) so they can mull over a tough problem, just like how my inner monologue works.
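One cheap way to fake those "private" thoughts today is purely a prompting convention: tell the model to put its working inside some agreed-on tag, then strip that tag out before showing the reply to the user (the devs can still log it). A minimal sketch — the `<scratchpad>` tag name is my own invention, not anything built into these models:

```python
import re

def split_scratchpad(raw_output: str) -> tuple[str, str]:
    """Separate hidden reasoning from the user-visible answer.

    Assumes the model was instructed to put its working inside
    <scratchpad>...</scratchpad> tags -- a prompt convention, not a feature.
    """
    thoughts = "\n".join(
        re.findall(r"<scratchpad>(.*?)</scratchpad>", raw_output, re.DOTALL)
    )
    visible = re.sub(
        r"<scratchpad>.*?</scratchpad>", "", raw_output, flags=re.DOTALL
    ).strip()
    return thoughts.strip(), visible

raw = "<scratchpad>17 * 24 = 408</scratchpad>The answer is 408."
thoughts, answer = split_scratchpad(raw)
```

The model still spends the same compute per token, but at least the "mulling" tokens don't have to double as the final answer.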

You can kinda do this in ChatGPT already: ask the question as a chain-of-thought prompt, then send a follow-up asking it to extract the final answer from the text above.
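In code, that two-step flow is just two calls over a growing message list. `ask` below is a hypothetical stand-in for whatever chat API you're using; the message format follows the common role/content convention:

```python
# Sketch of the two-step flow: first a chain-of-thought prompt, then a
# follow-up asking the model to pull out just the answer.
def build_followup(history, reply):
    """Append the model's reply and the extraction request to the history."""
    return history + [
        {"role": "assistant", "content": reply},
        {"role": "user",
         "content": "From the reasoning above, state only the final answer."},
    ]

history = [{
    "role": "user",
    "content": "A bat and ball cost $1.10 total; the bat costs $1.00 more "
               "than the ball. What does the ball cost? "
               "Let's think step by step.",
}]
# reply = ask(history)                # first call: model reasons out loud
reply = "The ball costs $0.05, because ..."  # illustrative reply, not real output
messages = build_followup(history, reply)
# answer = ask(messages)              # second call: model extracts "$0.05"
```

The second call gets the whole transcript, so the model can "consult" its earlier reasoning the way the scratchpad idea intends.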

Experimenting with giving ChatGPT-4 a more structured memory is easy enough to do that individuals are trying it out: https://youtube.com/watch?v=YXQ6OKSvzfc I find his estimate of AGI-in-18-months a little optimistic, but I can't completely rule out the possibility that the "hard part" of AGI is already present in these LLMs and the remainder is just giving them a few more cognitive tools. We're already so far down the rabbit hole.
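The "bolt-on memory" experiments in that video mostly follow the same loop: save notes as you go, then retrieve the relevant ones and prepend them to later prompts. A toy sketch of that loop — real projects score relevance with embeddings and a vector store, whereas this uses plain keyword overlap just to stay self-contained:

```python
# Toy external memory: save notes, recall the ones that best match a query,
# and (in a real system) prepend the recalled notes to the next prompt.
class Memory:
    def __init__(self):
        self.notes = []

    def save(self, note: str):
        self.notes.append(note)

    def recall(self, query: str, k: int = 2):
        # Rank notes by how many words they share with the query.
        q = set(query.lower().split())
        scored = sorted(
            self.notes,
            key=lambda n: -len(q & set(n.lower().split())),
        )
        return scored[:k]

mem = Memory()
mem.save("The project is written in Rust.")
mem.save("The user prefers concise answers.")
context = mem.recall("what language is the project in")
```

Swapping the keyword match for embedding similarity is the only conceptual upgrade between this and the setups people are demoing.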