
Small-Scale Question Sunday for April 23, 2023

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.

This was skipped over in the main thread: Sam Altman said "We're not training GPT-5 right now, and won't be for some time... I think we're at the end of the era where it's going to be these, like, giant, giant models. We'll make them better in other ways."

Thoughts? Does this have any impact on your AI timelines?

From what I've read, Altman is simply de-emphasizing scale in favor of options like training similarly sized (or smaller) models on more tokens, since deployment and inference costs end up lower over the long term than for a larger model trained on fewer tokens.
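A rough way to see the trade-off he's gesturing at, using the common ~6ND training-compute and ~2N-per-token inference-compute approximations (the specific model sizes and token counts below are mine, purely illustrative):

```python
# Back-of-the-envelope sketch (my own numbers, not Altman's): compare a big
# model trained on few tokens vs. a smaller model trained on more tokens,
# using the rough approximations train_flops ~ 6*N*D and
# inference_flops ~ 2*N per token served.

def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute: ~6 FLOPs per parameter per training token."""
    return 6 * params * tokens

def inference_flops(params: float, tokens_served: float) -> float:
    """Approximate inference compute: ~2 FLOPs per parameter per token served."""
    return 2 * params * tokens_served

# Hypothetical configs: a 500B-param model on 1T tokens vs. a 100B-param
# model on 5T tokens -- identical training budget.
big_train = training_flops(500e9, 1e12)
small_train = training_flops(100e9, 5e12)

# Over, say, 10T tokens served in deployment, the smaller model is 5x cheaper.
big_serve = inference_flops(500e9, 10e12)
small_serve = inference_flops(100e9, 10e12)

print(f"training:  big {big_train:.2e}  small {small_train:.2e}")
print(f"inference: big {big_serve:.2e}  small {small_serve:.2e}")
```

The two configurations cost the same to train, but the smaller one is 5x cheaper every time it answers a query, and inference is where the money goes once you're serving millions of users.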

As for how that changes my timelines, we already live in a world where the damn AI is a better doctor than me and gets a B on quantum mechanics exams.

I was idly using GPT-4 today and it just hit me full force that, holy fucking shit, Jesus fuck me, I'm having a full-blown conversation with an alien sapience, one smarter than the average person, and fully capable of understanding nuance and performing genuine intellectual labor.

That shook me; I realized that I'd been succumbing to complacency when I'm already living the damn future. As a child or teen, I'd have wagered something like 2040 as the date when I could be having a chat with a human-level AI, and we're not even halfway through the '20s.

As such, even with Altman's statement, I doubt progress will stall, and even if it slows down, we're pretty much at the part that everyone once believed would take a ton of sweat, blood and erudite white papers. Not simply "stAck Moar layers!!!"

So my current timelines are still 50% by 2028 and 90% by 2040; at most they've wobbled back by six months.

Yeah, the part that really got to me was that it can be conversant on literally any topic, even if its knowledge might be outdated or it refuses to answer on certain subjects. And generally speaking, it knows much more than I do about any of them.

So the pedantic philosophical question that comes to mind, for me, is whether you can really be said to be having a 'conversation' with an entity that already knows anything important you might tell it, and can answer any of your questions easily, whilst having no need to learn anything from or about you?

It becomes a wholly one-sided 'discussion' because the AI will never ask you questions about things it needs to know, and the chances of you having information it might find useful to add to its corpus are vanishingly small. Can you have a 'dialogue' with an entity that understands any topic you might pick more comprehensively than you do?

So I end up feeling like a toddler talking to his parents and asking various questions about the world, and having absolutely nothing to offer them in return.

You can think of it as a really talented polymath with some damage to their short-term-to-long-term memory consolidation process. Would talking to such a human count as a conversation? I don't see why not!

As it stands, this is a limitation of the current architecture; I'm sure we'll eventually end up with systems that learn/train on the fly, or that have access to so much memory that they effectively remember all your conversations.
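Even without on-the-fly training, the crudest version of that memory is just bookkeeping outside the model. A minimal sketch of the idea (entirely my own, with hypothetical names): persist every exchange and replay past turns in the prompt, so a frozen model at least appears to remember you.

```python
# Minimal "bolt-on memory" sketch: store every exchange on disk and prepend
# recent turns to each new prompt. The model itself stays frozen; only the
# context it sees grows. File name and structure are hypothetical.
import json
from pathlib import Path

MEMORY_FILE = Path("conversation_memory.json")

def load_memory() -> list[dict]:
    """Return all stored turns, oldest first."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def save_turn(user_msg: str, model_msg: str) -> None:
    """Append one user/model exchange to the persistent log."""
    memory = load_memory()
    memory.append({"user": user_msg, "model": model_msg})
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def build_prompt(new_msg: str, max_turns: int = 20) -> str:
    """Naive recall: replay the last few turns verbatim before the new message."""
    recent = load_memory()[-max_turns:]
    history = "\n".join(
        f"User: {t['user']}\nAssistant: {t['model']}" for t in recent
    )
    return f"{history}\nUser: {new_msg}\nAssistant:"
```

A real system would embed and search the whole history rather than just replaying the last few turns, but the point stands: the "memory" can live outside the weights.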

(GPT-4 has asked me questions, usually to clarify insufficiently specific orders, unless I'm having it quiz me on something. There would be little point in having it ask you things unprompted, which is why it was never trained to do so!)

There would be little point in having it ask you things unprompted, which is why it was never trained to do so!

But that's an element of 'human' conversations, I think! "Prompting" your conversation partner with a tangentially related topic that they wish to discuss further, even if the other side hasn't expressed direct interest in it. The so-called 'picking your brain' aspect.

To narrow my point a bit: at least one objective of conversation is for each party to come away with some greater understanding of a given topic. If a professor and a student have a conversation, for instance, the professor at least wants to gain a better understanding of the student's grasp of the course material, and maybe adapt their own approach to teaching it, even if the student has nothing to add to the professor's knowledge of the subject.

That is NOT what it feels like is happening when you interact with the GPTs.

Hence my comparison to feeling like a toddler: "What are clouds made of? How do trees grow?" Even though my questions are comparatively sophisticated, it still feels like I'm not able to hold up "my end" of the conversation with the other party.