
Friday Fun Thread for March 24, 2023

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


What's your definition of AGI? The label feels more like a vibe than anything else.

For me, the multi-modal capabilities of GPT-4 and others [1][2] start to push it over the edge.

One possible threshold is Bongard problems[3]. A year ago I thought that, while GPT-3 was very impressive, we were still a long way from an AI solving a puzzle like this (what rule defines the two groups?)[4]. But now it seems GPT-4 has a good shot, and if not 4, then perhaps 4.5. As far as I know, no one has actually tried this yet.

So what other vibe checks are there? Wikipedia offers some ideas[5]:

  • Turing test - GPT-3 passes this IMO

  • Coffee test - can it enter an unknown house and make a cup of coffee? PaLM-E[1] is getting there

  • Student test - can it pass classes and get a degree? Yes, if the GPT-4 paper is to be believed

Yes, current models can't really 'learn' after training, they can't see outside their context window, they have no memory... but these issues don't seem to be holding them back.

Maybe you want your AGIs to have 'agency' or 'consciousness'? I'd prefer mine didn't, for safety reasons, but I'd guess you could simulate it by continuously/recursively prompting GPT to generate a train of thought.
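
Something like this loop is what I have in mind; a minimal sketch, where `call_model` is just a placeholder for whatever chat API you plug in (the function and prompts are illustrative, not any real client):

```python
# Minimal sketch of a recursive "train of thought" loop.
# call_model() is a stand-in for whatever chat completion API you use;
# it takes a list of messages and returns the model's reply as a string.

def call_model(messages: list[dict]) -> str:
    raise NotImplementedError("plug in your chat API of choice here")

def train_of_thought(goal: str, steps: int = 10) -> list[str]:
    thoughts = []
    messages = [
        {"role": "system", "content": "Think out loud, one short step at a time."},
        {"role": "user", "content": f"Goal: {goal}\nWhat is your first thought?"},
    ]
    for _ in range(steps):
        thought = call_model(messages)
        thoughts.append(thought)
        # Feed the model's own output back in, so each step builds on the last.
        messages.append({"role": "assistant", "content": thought})
        messages.append({"role": "user", "content": "Continue the train of thought."})
    return thoughts
```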

[1] https://ai.googleblog.com/2023/03/palm-e-embodied-multimodal-language.html

[2] https://arxiv.org/pdf/2302.14045.pdf

[3] https://metarationality.com/bongard-meta-rationality

[4] https://metarationality.com/images/metarationality/bp199.gif

[5] https://en.wikipedia.org/wiki/Artificial_general_intelligence#Tests_for_testing_human-level_AGI

Defining AGI would mean defining intelligence, which I can't do.

For my purposes, AGI is when you can put multiple humans and a chatbot in an IRC channel, offer a cash reward to identify the chatbot, and the humans do not accuse the actual chatbot at a disproportionate rate.
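
If you wanted to actually score that, the check is just whether the bot gets fingered more often than chance. A rough sketch (all the numbers are made up):

```python
# Rough scoring sketch for the IRC test: with 9 humans and 1 bot, each human
# accuses one of the other 9 participants, so random guessing names the bot
# with probability 1/9. The bot "passes" if it isn't accused significantly
# more often than that.
from scipy.stats import binomtest  # scipy >= 1.7

humans = 9               # participants who each get one accusation
accusations_of_bot = 2   # how many of them pointed at the actual bot (made-up)

chance_rate = 1 / humans  # each human picks among the other 9 participants
result = binomtest(accusations_of_bot, n=humans, p=chance_rate, alternative="greater")

print(f"observed accusation rate: {accusations_of_bot / humans:.2f}")
print(f"chance rate:              {chance_rate:.2f}")
print(f"p-value (accused more than chance): {result.pvalue:.3f}")
```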

GPT-4 passes the Turing test only if the human isn't examining it all that closely.