Friday Fun Thread for March 24, 2023

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), and it is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


What's your definition of AGI? The label feels more like a vibe than anything else.

For me, the multi-modal capabilities of GPT-4 and others [1][2] start to push it over the edge.

One possible threshold is Bongard problems[3]. A year ago I thought that, while GPT-3 was very impressive, we were still a long way from an AI solving a puzzle like this (what rule separates the two groups?)[4]. But now it seems GPT-4 has a good shot, and if not 4, then perhaps 4.5. As far as I know, no one has actually tried this yet.
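If someone did want to try, the test itself is simple to set up. This is a purely hypothetical sketch: ask_multimodal_model is a made-up stand-in for whatever image+text endpoint you can get access to (GPT-4's image input isn't publicly available yet), not a real API.

    # Hypothetical sketch of a Bongard vibe check. ask_multimodal_model is a
    # made-up stand-in for an image+text endpoint, not a real library call.

    BONGARD_PROMPT = (
        "This image contains twelve panels: six on the left and six on the "
        "right. All of the left panels share a property that none of the "
        "right panels have. State the rule that separates the two groups."
    )

    def bongard_check(image_url: str) -> str:
        # One puzzle, one answer; a real test would score answers against
        # the known rules across many Bongard problems.
        return ask_multimodal_model(image_url=image_url, prompt=BONGARD_PROMPT)

    print(bongard_check("https://metarationality.com/images/metarationality/bp199.gif"))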

So what other vibe checks are there? Wikipedia offers some ideas[5]:

  • Turing test - GPT-3 passes this, IMO

  • Coffee test - can it enter an unfamiliar house and make a cup of coffee? PaLM-E[1] is getting there

  • Student test - can it pass classes and get a degree? Yes, if the GPT-4 paper is to be believed

Yes, current models can't really 'learn' after training, can't see outside their context window, and have no persistent memory... but these limitations don't seem to be holding them back.

Maybe you want your AGIs to have 'agency' or 'consciousness'? I'd prefer mine didn't, for safety reasons, but I'd guess you could simulate it by continuously/recursively prompting GPT to generate a train of thought (see the sketch after the links below).

[1] https://ai.googleblog.com/2023/03/palm-e-embodied-multimodal-language.html

[2] https://arxiv.org/pdf/2302.14045.pdf

[3] https://metarationality.com/bongard-meta-rationality

[4] https://metarationality.com/images/metarationality/bp199.gif

[5] https://en.wikipedia.org/wiki/Artificial_general_intelligence#Tests_for_testing_human-level_AGI
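Here's a minimal sketch of what I mean by recursive prompting, using the openai Python client; the model name and the exact prompts are just placeholders:

    import openai  # pip install openai; assumes openai.api_key is set

    def train_of_thought(goal: str, steps: int = 5) -> list[str]:
        """Feed the model's own output back to it as the next prompt."""
        messages = [
            {"role": "system", "content": "Think out loud, one step at a time."},
            {"role": "user", "content": goal},
        ]
        thoughts = []
        for _ in range(steps):
            response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
            thought = response["choices"][0]["message"]["content"]
            thoughts.append(thought)
            # The model's last thought becomes part of its next context.
            messages.append({"role": "assistant", "content": thought})
            messages.append({"role": "user", "content": "Continue your train of thought."})
        return thoughts

    for t in train_of_thought("How would you plan a birthday party?"):
        print(t, "\n---")

Swap the fixed step count for a while-True loop and you get a crude ongoing inner monologue; whether that counts as simulated agency is exactly the vibe question.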

I don't think AGI is something you can define precisely, the way you can define chemical elements or the number 5. Like most concepts, the definition is blurry around the edges. To me it'd be AGI when it can start behaving like a human - so, when it's able to continuously interact with the world in a sensible way without repeated prompting.

Defining AGI would mean defining intelligence, which I can't do.

For my purposes, AGI is when you can put multiple humans and a chatbot in an IRC channel, offer a cash reward to identify the chatbot, and the humans do not accuse the actual chatbot at a disproportionate rate.
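To put a number on 'disproportionate', here's a rough sketch. The protocol details here are my own assumptions: one bot among n participants, each human casts exactly one accusation, and under the null hypothesis (the bot is indistinguishable) each human picks uniformly among the other n-1 participants.

    from math import comb

    def accusation_pvalue(n_participants: int, bot_accusations: int) -> float:
        """P(bot draws at least this many accusations by pure chance).

        Each of the (n-1) humans accuses one of the other (n-1) participants
        uniformly at random under the null, so the bot's accusation count is
        Binomial(n-1, 1/(n-1)).
        """
        trials = n_participants - 1      # number of human accusers
        p = 1 / (n_participants - 1)     # chance a given human picks the bot
        return sum(
            comb(trials, k) * p**k * (1 - p)**(trials - k)
            for k in range(bot_accusations, trials + 1)
        )

    # 10 participants (9 humans + 1 bot); 5 humans fingered the bot.
    print(accusation_pvalue(10, 5))  # ~0.0014 -> the bot fails the test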

GPT-4 passes the Turing test only if the human isn't examining it all that closely.

The vast majority of humans couldn't replace a single OpenAI employee, let alone all of them. I think your standard for intelligence is too high.

OP's question is about what you consider AGI. I consider it general intelligence: it can do a very wide variety of basic tasks and easily learn how to do new ones. A human child, once they're 3-5 years old, is a general intelligence in my opinion. But yeah, the exact definition is in the eye of the beholder.

If AGI signals the singularity, and the singularity is the moment when AI starts improving itself recursively, then the definition of AGI surely involves a self-improvement mechanism capable of exceeding human potential.

I see your point, but I think @non_radical_centrist has one, too. Let's say we develop an AI that perfectly emulates a 70-IQ human; call it LLM-BIFF. That's general intelligence. Now set every supercomputer on earth to running LLM-BIFF. Does LLM-BIFF recursively self-improve into LLM-SHODAN?

There must be a window of AI sophistication in which we have a generally intelligent program that is nevertheless not intelligent enough to bootstrap itself and trigger a singularity. Whether that window lasts one iteration of AI development or much longer is the question.
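A toy illustration of why such a window could exist, with completely made-up numbers: suppose each generation can improve the next only in proportion to how far its capability already sits above some threshold. Below the threshold it's smart but stuck; above it, the gains compound.

    def self_improvement(c0: float, gain: float = 0.5, threshold: float = 1.0,
                         rounds: int = 20) -> float:
        """Toy model, not a claim about real AI: capability grows by
        gain * (capability - threshold) per round, but only once capability
        already exceeds the threshold."""
        c = c0
        for _ in range(rounds):
            c += max(0.0, gain * (c - threshold))
        return c

    print(self_improvement(0.95))  # LLM-BIFF: generally intelligent, stuck at 0.95
    print(self_improvement(1.05))  # past the threshold: compounds to ~167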