SnapDragon

0 followers   follows 0 users
Joined 2022 October 10 20:44:11 UTC
Verified Email
User ID: 1550

Zootopia too. There's some woke messaging, but the story is a hell of a lot of fun.

It was at least somewhat justified by the bullshit tech the aliens had. (Which very conveniently could completely control all scientific experiments but not, you know, actually KILL anyone.)

The real problem with The Dark Forest (spoiler alert) was the concept that all of humanity, working for more than a century on a problem with existential stakes, failed to come up with a theory that, uh, most people interested in cosmology already knew about in the 70s as a potential answer to the Fermi paradox. (Also, the deterrent threat at the end doesn't even really work because it would send a message out only in the plane of the ecliptic. Sigh. I wouldn't mind the bad science so much if it weren't wearing the skinsuit of Hard Sci-Fi.)

Some managers, sales reps, and HR workers come to mind (note that I'm not saying there's no need for those roles, but I get the impression there are far too many people in them). Heck, even many coders, despite having a real thing they make, are just skating by and not making a difference to anyone's life. I would possibly include myself in that. And I'm working for a successful company - I'm sure it's a dozen times worse in, say, the government, where even the distant hand of the market can't reach you.

I'm also open to the argument that 95% of jobs are useless but it's humanly impossible to know exactly which ones they are, so you need to keep everyone employed. I'm not arguing from omniscience here, just from my instincts after decades of code monkeying.

I'm in software too, and my productivity is boosted hugely by ChatGPT. However, there are caveats - I'm an experienced developer using an unfamiliar language (Rust), and my interactions consist of describing my problem, reading the code it generates, and then picking and choosing which of its ideas to use in the final code I write myself. My experience and judgement are not obsolete yet! If you just treat it as a personalized Stack Overflow, it's amazing.

On the other hand, in my personal time, I do use it to rapidly write one-off scripts for things like math problems and puzzles. If you don't need maintainable code, and the stakes aren't too high, it can be an extremely powerful tool that is much faster than any human. You can see the now-ruined Advent of Code leaderboards for evidence of that.

I don't find the statement so ridiculous, unfortunately. As @ThomasdelVasto and I posted before, the corporate market may be in an irrational but metastable state. Far too much of white-collar work is just "adult daycare", and society has been built around the idea that this is how you keep people occupied. It's possible that, at some point, the whole edifice will collapse. But hey, I don't have a bird's-eye view and I could be wrong. Let's hope so!

I hate dynamic programming, but it seems that you can't "jump ahead" when calculating prime numbers. This feels like computational irreducibility. The world in which this property exists, and the one in which it doesn't, are meaningfully different.

You can, actually. Testing whether a specific number is prime is pretty easy (disclaimer: there are subtleties here I won't go into), and doesn't require computing the numbers earlier than it. It's factoring a number that is, as far as anyone knows, hard (although even there, much faster methods exist than iterating over all the numbers before it). This is why RSA is practical: it's computationally very easy to search for 1000-digit prime numbers, but very hard to recover two of them after they've been multiplied together.
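
To make that concrete, here's a minimal sketch in Python (my own toy code, using the textbook Miller-Rabin test; real crypto libraries are more careful) of how cheap it is to find big primes compared to recovering them from their product:

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin: a fast probabilistic primality test. It doesn't need to
    compute anything about the numbers leading up to n; it just raises a few
    random witnesses to powers mod n."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits: int) -> int:
    """Search for a prime: by the prime number theorem, roughly one in
    ln(2**bits) random numbers of this size is prime, so this ends quickly."""
    while True:
        candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(candidate):
            return candidate

# Finding two ~1024-bit primes takes moments; recovering p and q from
# their product n (i.e. factoring) is what's believed to be infeasible.
p, q = random_prime(1024), random_prime(1024)
n = p * q
print(n.bit_length())
```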

I think the rest of your questions veer more into spirituality, philosophy, and ethics than math, so I'm not sure I'm the right person to ask. I have all the spirituality of a wet fart. But I can tell you that the Collatz conjecture is not relevant when discussing the future of civilization. :)

Maths is incredibly productive on net.

Ergo, it is immensely sensible to subsidize or invest in maths as a whole. The expected value from doing so is positive. Our entire society and civilization runs on mathematical advancements.

I have no quibbles with these points! I think what you should take away is that the distribution of potential practicality is far from uniform. There are fields that we can be very, very, very sure aren't practical. If we were horribly utilitarian about things, we could easily, um, "optimize" academic math without losing out on any future scientific progress.

Also, lest my motivations be misunderstood, I'm happy that we fund pure math for its own sake. I took a degree in it. I love it. I just don't want it to be funded under false pretenses.

That's a good question. I'm not sure of the exact reason quaternions were invented - you can indeed stumble on them just by trying to extend the complex numbers in an abstract way - but the Wikipedia article suggests they were already being used for 3D mechanics within a couple of years of invention. (BTW, "number theory" involves integers, primes, that kind of thing, not quaternions. Complex numbers do show up though.)
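
For a taste of that 3D use: a unit quaternion rotates a vector via the "sandwich" product q * v * q^-1. A tiny self-contained sketch (my own toy code, not any particular library's API):

```python
import math

def qmul(a, b):
    # Hamilton product of two quaternions given as (w, x, y, z) tuples.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(v, axis, angle):
    # Rotate 3D vector v about a unit-length axis by `angle` radians,
    # using the sandwich product q * v * q^-1.
    half = angle / 2
    s = math.sin(half)
    q = (math.cos(half), axis[0] * s, axis[1] * s, axis[2] * s)
    q_inv = (q[0], -q[1], -q[2], -q[3])  # conjugate = inverse for unit quaternions
    _, x, y, z = qmul(qmul(q, (0.0, *v)), q_inv)
    return (x, y, z)

# Rotating (1, 0, 0) by 90 degrees about the z-axis gives roughly (0, 1, 0).
print(rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))
```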

You could ask the same question about complex numbers too, but they originally arose from the search for an algorithm to solve cubic equations, which is a fairly practical question. That they later turned out to be essential for electronics and quantum mechanics is a case of new applications turning up for an already-useful math concept.
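
The standard illustration is Bombelli's cubic x^3 = 15x + 4: Cardano's formula drags you through sqrt(-121) on the way, even though the root it produces, x = 4, is perfectly real. A quick sketch with Python's cmath (principal roots assumed, so don't expect it to find the other two roots):

```python
import cmath

# Depressed cubic x^3 + p*x + q = 0 with p = -15, q = -4 (i.e. x^3 = 15x + 4).
p, q = -15.0, -4.0
disc = cmath.sqrt((q / 2) ** 2 + (p / 3) ** 3)  # sqrt(4 - 125) = 11i
x = (-q / 2 + disc) ** (1 / 3) + (-q / 2 - disc) ** (1 / 3)
print(x)  # ~(4+0j): cbrt(2 + 11i) + cbrt(2 - 11i) = (2 + i) + (2 - i) = 4
```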

No, I'm sorry, but you really don't know what you're talking about here. The field of pure mathematics is much larger and stranger than you know, and it takes years of intensive study to even reach the frontier, let alone contribute to it. Conic sections and integral transforms are high-school or early university math, and knowing them makes you as much of a pure mathematician as knowing how to change your car's oil filter makes you a CERN engineer. (And, for the record, conic sections were certainly never useless - even other people in that thread you linked called out that ridiculous claim. And non-Euclidean geometry is useful in many other realms than special relativity, like, oh, say, navigating the Earth!)

While there is zero chance of any of the math I linked above being useful, I admit that cryptography isn't the only example of surprising post-hoc utility showing up. As theoretical physics has gotten more abstract (way way beyond relativity), some previously existing high-powered math has become relevant to it. (The Yang-Mills problem, another Millennium Problem, unites some advanced math and physics.) But I absolutely dispute the claim that there is a "tendency" for practical applications to show up. Another way to frame the fact that 0.01% of pure math has surprised us by being useful over the last 2,000 years is... that we're right that it's useless 99.99% of the time. I wish I had that much certainty about the other topics we discuss here!

BTW, did you not realize that @walruz was joking? What he linked is a fun Magic: The Gathering construction. If the Twin Primes conjecture is true, then the loop never ends. If it's not true, it does end, after 10^10^10^10^whatever years. It may be slightly optimistic to describe that as "paying dividends"... (Also, the construction only exists because of a card that specifically refers to primes in its rules. You can't claim that math has practical application because it's used to answer trivia questions involving that same math!)

Hehe, I stand corrected!

Isn't it a massive meme (based in fact) that even the most pure and apparently useless theoretical mathematics ends up having practical utility?

Hell, it even has a name: "The Unreasonable Effectiveness of Mathematics in the Natural Sciences"

Definitely not! The article you're referring to was about theoretical physics having surprising application to the real world, not pure math. The rabbit hole of pure math goes ridiculously deep, and only the surface layers are in any danger of accidentally becoming useful. Even most of number theory is safe - the Riemann Hypothesis might matter to cryptography (which is partly why it's a Millennium Problem), but to pick some accessible examples, the Goldbach conjecture, the Twin Primes conjecture, the Collatz conjecture, etc. are never going to affect anyone's life in the tiniest way.
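
For anyone who hasn't met them, here's roughly what two of those conjectures assert, as a brute-force check over small numbers (a toy sketch of mine; checking small cases is, of course, all anyone has ever been able to do):

```python
def collatz_reaches_one(n):
    # Collatz: halve if even, else map n -> 3n + 1; the conjecture says
    # every starting number eventually reaches 1.
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return True

def goldbach_pair(n):
    # Goldbach: every even number greater than 2 is the sum of two primes.
    def is_prime(k):
        return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))
    return next(((p, n - p) for p in range(2, n // 2 + 1)
                 if is_prime(p) and is_prime(n - p)), None)

print(all(collatz_reaches_one(n) for n in range(1, 10_000)))  # True
print(goldbach_pair(100))                                      # (3, 97)
```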

My career never went that way, so I've only dipped my head into the rabbit hole, but even I can rattle off many examples of fascinating yet utterly useless math results. Angels dancing on the head of a pin are more relevant to the real world than the Banach-Tarski paradox. The existence of the Monster group is amazing, but nobody who's not explicitly studying it will ever encounter it. Is there any conceivable use of the fact that the set of real numbers is uncountable? If and when BB(6) is found, will the world shake on its axis? Does the President need to be notified that Peano arithmetic is not a strong enough formal system to prove Goodstein's theorem?

I definitely don't have @self_made_human's endless energy for arguing here, but his takes tend to be quite grounded. He doesn't make wild predictions about what LLMs will do tomorrow, he talks about what he's actually doing with them today. I'm sure if we had more people from the Cult of Yud or AI 2027 or accelerationists here bloviating about fast takeoffs and imminent immortality, both he and I would be arguing against excessive AI hype.

But people who honestly understand the potential of LLMs should be full of hype. It's a brand-new, genuinely transformative technology! Would you have criticized Edison and Tesla at the 1893 World's Fair for being "full of hype" about the potential for electricity?

I really think laymen, who grew up with HAL, Skynet, and the Star Trek computer, don't have good intuition for what's easy and what's hard in AI, and just how fundamentally this has changed in the last 5 years. As xkcd put it a decade ago: "In CS, it can be hard to explain the difference between the easy and the virtually impossible." At the time, the path we saw to solving that "virtually impossible" task (recognizing birds) was to train a very expensive, very specialized neural net that would perform at maybe 85% success rate (to a human's 99%) and be useful for nothing else. Along came LLMs, and of course vision isn't even one of their strengths, but they can still execute this task quite well, along with any of a hundred similar vision tasks - and a million text tasks that were considered even harder than recognizing birds. We at least had some experience training neural nets to recognize images, but there was no real forewarning for the emergent capability of writing coherent essays. If only we'd thought to attach power generators to AI skeptics' goalposts, we could have solved our energy needs as they zoomed into the distance.

When the world changes, is it "hype" to Notice?

Ahhh, you know, this makes perfect sense. His AI-skeptical post here, which had serious technical errors but somehow got a QC, matched very well with the arguments I've had with him before. Even down to the dubious (and prideful) claims of technical expertise. And the comparison of AI to animal intelligence (one heron, one orangutan).

Yup, just came here to mention XKCD. Gotta love the emdash in the disclaimer, too!

Eventually, most of the "real" challenges that humanity faces will be, at least in my opinion, rendered obsolete. That leaves just about only games to pass the time. They can be complicated games, they might be of relevance to the real world (status games, proof of work or competence), but they're still games we play because we've run out of options. I think this isn't a thing to complain about, once we get there. Our ancestors struggled to survive so that we wouldn't have to.

Forget "eventually"; I think we often fail to appreciate that we're already there, in the first world. Almost none of the "challenges" that our primitive ancestors faced are in any way familiar to us. They worried about whether they would starve next winter; I wonder whether I can justify being lazy and ordering Door Dash today. They might have been permanently crippled from an uncleaned surface cut; I would slap a band-aid on it and take a Tylenol. They banded together and learned to fight so the next tribe over wouldn't kill them all and take their stuff; I put my money into a stock brokerage.

Aging is IMO the one major challenge that hasn't been conquered yet (although we're still living twice as long as evolution intended). In almost every other way we're living the lives of Gods.

If you're going to lean so heavily on your credentials in robotics, then I agree with @rae or @SnapDragon that it's shameful to come in and be wrong, confidently and blatantly wrong, about such elementary things as the reasons behind LLMs struggling with arithmetic. I lack any formal qualifications in ML, but even a dummy like me can see that. The fact that you can't, let's just say it raises eyebrows.

False humility. :) I have ML-related credentials (and I could tell that @rae does too), but I think you know more than I do about the practicalities of LLMs, from all your eager experimentation and perusal of the literature. Argument from authority is generally unwelcome on this forum anyway, and this topic is one where it's particularly ill-suited.

What "expertise" can anybody really claim on questions like:

  • What is intelligence? (Or "general intelligence", if you prefer.)
  • How does intelligence emerge from a clump of neurons?
  • Why do humans have it and animals (mostly) don't?
  • Are LLMs "minds that don't fit the pattern", or are we just anthropomorphizing and getting fooled by ELIZA 2.0?
  • If yes, how does intelligence emerge from a bunch of floating-point computations?
  • If no, what practical limitations does that put on their capability?
  • What will the future upper limit of LLM capability be?
  • Can AI experience qualia? (Do you experience qualia?)
  • Does AI have moral worth? (Can it suffer?)

With a decent layman's understanding of the topic, non-programmers can debate these things just as well as I can. Modern AI has caused philosophical and technical questions to collide in a wholly unprecedented way. Exciting!

Thanks for calling OP out on his flagrant errors. It's one thing to make a technical mistake on a non-technical forum; it's another thing entirely to flex, claim industry expertise, and then face-plant by confusing word embedding models with LLMs. I hope people aren't being misled by his, well, "hallucinations". (Honestly, that's an appropriate word for it! Incorrect facts being stated with complete confidence, just like an LLM.)

I'm reminded of the Obamacare debacle, which still fills me with rage. People (correctly) pointed out that women pay more for health insurance, and (incorrectly) said that this was an unfair "woman tax". It was politically brilliant, reframing the fact that women live longer as a societal injustice - against women! And it was 100% successful; Obamacare made gender-based pricing illegal, and now every man in the country is subsidizing the health care of every woman in the country. Forever.

I'm inclined towards your skeptical take - I think we as humans always fantasize that there are powerful people/beings out there who want to spend resources hurting us, when the real truth is that they simply don't care about you. Sure, the denizens of the future with access to your brainscan could simulate your mind for a billion subjective years without your consent. But why would they?

The problem is that there's always a risk that you're wrong, that there is some reason or motive in post-singularity society for people to irreversibly propagate your brainscan without your consent. And then you're at the mercy of Deep Time - you'd better hope that no beings that ever will exist will enjoy, uh, "playing" with your mind. (From this perspective, you won't even have the benefit of anonymity - as one of the earliest existing minds, it's easy to imagine some beings would find you "interesting".)

Maybe the risk is low, because this is the real world we're dealing with and it's never as good or bad as our imaginations can conjure. But you're talking about taking a (small, you argue) gamble with an almost unlimited downside. Imagine you had a nice comfortable house that just happened to be 100m away from a hellmouth. It's inactive, and there are guard rails, so it's hard to imagine you'd ever fall in. But unlikely things sometimes happen, and if you ever did, you would infinitely regret it forever. I don't think I'd want to live in that house! I'd probably move...

Listen, I did not intentionally trap those Sims in their living room. The placement of the stove was an innocent mistake. That fire could have happened anywhere! A terrible tragedy.

You know, sometimes pools just accidentally lose their exit. Common engineering mishap. My sincere condolences to those affected.

There's also the concern of what kind of suffering a post-singularity society can theoretically enable; it might go far, far beyond what anyone on Earth has experienced so far (in the same way that a rocket flying to the moon goes farther than a human jumping). Is a Universe where 99.999% of beings live sublime experiences but the other 0.001% end up in Ultrahell one that morally should exist?

Remember, in the game of chess you can never let your adversary see your pieces.

Well, I don't think your analogy of the Turing Test to a test for general intelligence is a good one. The reason the Turing Test is so popular is that it's a nice, objective, pass-or-fail test. Which makes it easy to apply - even if it's understood that it isn't perfectly correlated with AGI. (If you take HAL and force it to output a modem sound after every sentence it speaks, it fails the Turing Test every time, but that has nothing to do with its intelligence.)

Unfortunately we just don't have any simple definition or test for "general intelligence". You can't just ask questions across a variety of fields and declare "not intelligent" as soon as it fails one (or else humans would fail as soon as you asked them to rotate an 8-dimensional object in their head). I do agree that a proper test requires that we dynamically change the questions (so you can't just fit the AI to the test). But I think that, unavoidably, the test is going to boil down to a wishy-washy preponderance-of-evidence kind of thing. Hence everyone has their own vague definition of what "AGI" means to them; honestly, I'm fine with saying we're not there yet, but I'm also fine arguing that ChatGPT already satisfies it.

There are plenty of dynamic, "general", never-before-seen questions you can ask where ChatGPT does just fine! I do it all the time. The cherrypicking I'm referring to is, for example, the "how many Rs in strawberry" question, which is easy for us and hard for LLMs because of how they see tokens (and, also, I think humans are better at subitizing than LLMs). The fact that LLMs often get this wrong is a mark against them, but it's not iron-clad "proof" that they're not generally intelligent. (The channel AI Explained has a "Simple Bench" that I also don't really consider a proper test of AGI, because it's full of questions that are easy if you have embodied experience as a human. LLMs obviously do not.)
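
If you want to see the token issue for yourself, OpenAI's open-source tiktoken library shows what the model gets instead of letters. A quick sketch (the exact split and IDs depend on which encoding you pick, so treat the comments as illustrative):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # the GPT-4-era encoding
tokens = enc.encode("strawberry")
print(tokens)
print([enc.decode_single_token_bytes(t) for t in tokens])
# The word comes out as a few multi-letter chunks (something like
# [b'str', b'aw', b'berry']), so "count the Rs" isn't something the
# model can read off character by character.
```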

In the movie Phenomenon, rapidly listing mammals from A-Z is considered a sign of extreme intelligence. I can't do it without serious thought. ChatGPT does it instantly. In Bizarro ChatGPT world, somebody could write a cherrypicked blog post about how I do not have general intelligence.

FWIW, I appreciate this reply, and I'm sorry for persistently dogpiling you. We disagree (and I wrongly thought you weren't arguing in good faith), but I definitely could have done a better job of keeping it friendly. Thank you for your perspective.

Most frustratingly, the things that I actually need help on, the ones where I don't know really anything about the topic and a workable AI assistant would actually save me a ton of time, are precisely the cases where it fails hard (as in my examples where stuff doesn't even work at all).

That does sound like a real Catch-22. My queries are typically in C++/Rust/Python, which the models know backwards, forwards, and sideways. I can believe that there's still a real limit to how much an LLM can "learn" a new language/schema/API just by dumping docs into the prompt. (And I don't know anything about OpenAI's custom models, but I suspect they're just manipulating the prompt, not using RL.) And when an LLM doesn't know how to do something, there's a risk it will fake it (hallucinate). We're agreed there.

Maybe using the best models would help. Or maybe, given the speed things are improving, just try again next year. :)

What the hell? You most definitely did NOT give any evidence then. Nor in our first argument. I'm not asking so I can nitpick. I would genuinely like to see a somewhat-compact example of a modern LLM failing at code in a way that we both, as programmers, can agree "sucks".