Agriculture generates hundreds of billions in revenue, and is far more essential to continuing civilisation than orangutans or LLMs are. Does that make grain, or the tools used to sow and harvest it, "intelligent" in your eyes? If not, please explain.

That is not a serious objection.

You’re comparing a resource (grain) and a tool of physical labor (a tractor) to a tool of intellectual labor. This is a false equivalence. We don't ask a field of wheat for its opinion on a legal contract. We don't ask a John Deere tractor to write a Python script to automate a business process. The billions of dollars generated by LLMs come from them performing tasks that, until very recently, could only be done by educated human minds. That is the fundamental difference. The value is derived from the processing and generation of complex information, not from being a physical commodity.

I'm just going to quote myself again:

ChatGPT 3.5 played chess at about 1800 elo. GPT 4 was a regression in that regard, most likely because OAI researchers realized that ~nobody needs their chatbot to play chess. That's better than Stockfish 4 but not 5. Stockfish 4 came out in 2013, though it certainly could have run on much older hardware.

If you really need to have your AI play chess, then you can trivially hook up an agentic model that makes API calls or directly operates Stockfish or Leela. Asking it to play chess "unaided" is like asking a human CEO to calculate the company's quarterly earnings on an abacus. They're intelligent not because they can do that, but because they know to delegate the task to a calculator (or an accountant).
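To make the delegation point concrete, here's a minimal sketch of what "hooking up" an engine as a tool might look like, assuming the python-chess package and a local Stockfish binary are available (my own illustration, not anyone's production setup); the agent's only job is to decide to make this call instead of playing moves itself:

```python
# Minimal sketch: delegate move selection to Stockfish via python-chess.
# Assumes `pip install chess` and a Stockfish binary on PATH (both assumptions).
import chess
import chess.engine

def stockfish_move(board: chess.Board, engine_path: str = "stockfish") -> chess.Move:
    """Hand the position to Stockfish and return its chosen move."""
    with chess.engine.SimpleEngine.popen_uci(engine_path) as engine:
        result = engine.play(board, chess.engine.Limit(time=0.1))  # 100 ms per move
        return result.move

board = chess.Board()            # starting position
move = stockfish_move(board)     # the "tool call" an agent would make
board.push(move)
print(board.fen())
```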

Training LLMs to be good at chess is a waste of time. Compute doesn't grow on trees, and the researchers and engineers at these companies clearly made a (sensible) decision to spend it elsewhere.

The fact that an LLM can even play chess, understand the request, try to follow the rules, and then also write you a sonnet about the game, summarize the history of chess, and translate the rules into Swahili demonstrates a generality of intelligence that the Atari program completely lacks. The old program hasn't "devolved" into the new one; the new one is an entirely different class of entity that simply doesn't need to be optimized for that one (practically) solved game.

The market isn't paying billions for a good chess player. There is about $0 to be gained by releasing a new, better model of chess bot. It's paying billions for a generalist intellect that can be applied to a near-infinite range of text-based problems. That's the point.

I came into this thread with every expectation of having a good-faith discussion/debate on the topic. My hopes seem dashed, mainly because you seem entirely unable to admit error.

Rae, SnapDragon, I (and probably several others) have pointed out glaring, fundamental errors in your modeling of how LLMs work. That would merit, at the very least, some kind of acknowledgement or correction. At the time of writing, I see none.

The closest you came to acknowledging fault is in a reply to @Amadan, where you said that your explanation is "part" of why LLMs struggle with counting. That's eliding the point. Tokenization issues are the overwhelming majority of why they used to struggle, and your purported explanation has no bearing on reality.

You came into this swinging your credentials around, proceeded to make elementary errors, and seem closer to "Lorem Epsom", in that your primary concern appears to be the appearance of correctness rather than actual substance.

I can't argue with @rae when he correctly says:

I hope you realise you are more on the side of the Star Trek fan-forum user than the aerospace engineering enthusiast. Your post was basically the equivalent of saying a Soyuz rocket is propelled by gunpowder and then calling the correction a nitpick.

One silver lining to rising costs of things is that I'm seeing more and more shops explicitly showing their payment processing fees, and offering discounts for cash again.

Thank you. I really appreciate the kind words. I hope you don't mind if you get added to my mental rolodex of useful experts to summon, it's getting lonely with just faul_sname in there (I've already pinged him enough).

You don’t want a non-masculine mid to ever be professing “fascism” in public.

And yet, Goebbels.

No, the major ones in the public imagination (Spain, Italy, Germany) were as much or more in reaction to powerful, organized, and street-level-thuggish communist parties in their countries than they were a backlash against old aristocracy. In fact, a major reason the fascists beat the communists was that the old aristocracy lined up behind the fascists, on the theory that anything was better than getting expropriated and lined up against a wall by bolsheviks.

Is there an example of a near-fascist state with significant ethnic diversity that's succeeded?

Depends on what you mean by "succeeded", but Getulio Vargas in Brazil comes to mind as a potential example here. And Salazar in Portugal wasn't ultimately successful - his regime didn't outlive him - but lusotropicalism was the opposite of ethnically-exclusive; Salazar envisaged Angola, Mozambique, Goa, Timor, etc. as integral parts of Portugal itself.

This isn't a unique system, though. Maybe the degree of adversarial-ness is, but there are plenty of sub-state level actors with differing degrees of autonomy. American Samoa issues its own passports, but isn't an independent state or full protectorate ("nationals, not citizens"). New Caledonia has a somewhat similar arrangement. And it's not all obviously-colonial arrangements either: the Crown Dependencies of the UK don't seem to have active independence movements that I've heard of, but seem about as sovereign (perhaps with fewer border checkpoints) as the PA in the West Bank is on paper.

There's an important kind of intelligence that apes lack but LLMs possess.

There are even kinds of intelligence apes possess that humans lack. In particular, short-term spatial memory: sequentially flash the numbers 1 through 9 at random positions on a touchscreen, then have the subject press those positions in order. Chimpanzees, even young chimpanzees, consistently and substantially outperform adult undergraduate humans, even when you try to incentivize the humans. Does that mean chimps are smarter than humans?

Intelligence is very spiky. It's weird, but different substrates of intelligence lend themselves best to different tasks.

We're talking past each other, and I'm at fault.

When I say starvation, I imagine a famine where people are dying in droves. Deadly famines were a part of life in the Indian Subcontinent until the 1980s. Today, chronic wasting and stunting remain commonplace.

On further reflection, I'm being plain heartless. Years of walking past beggars under the bridge has stripped me of humanity. Just because starvation is common in the subcontinent, doesn't mean I should withhold my sympathy for the Gazans. It's true that the world only cares when Europeans(ish) are dying. I'm sour about it, no doubt. But, sympathies aren't zero sum.

Are Gazans starving? Not yet, at least.

I'm still right going by my definition of starvation. But it's a moot definition. We shouldn't have to wait for the situation to turn into a biblical locust plague before it can be called starvation.

Bruh, that … correctly points out what I’m saying.

A publication is ironically using disinformation about the forum.

I'm saying that purely based on in-text information (how long does a fiction book say it takes to drive from LA to San Francisco, LA is stated to be within California, etc) you could probably approximate the geography of the US just fine from the training data, let alone the more subtle or latent geographic distinctions embedded within otherwise regular text (like who says pop vs soda or whatever). Both of which the training process actually does attempt to do. In other words, memorization. This has no bearing on understanding spatial mappings as a concept, and absolutely no bearing on whether an LLM can understand cause and effect. Obviously by world state, we're not talking the literal world/planet, that's like calling earth science the science of dirt only. YoungAchamian has a decent definition upthread. We're talking about laws-based understanding, that goes beyond facts-based memorization.

(Please let's not get into a religion rabbit hole, but I know this is possible to some extent even for humans because there are a few "maps" floating around of cities and their relative relationships based purely on sparse in-text references of the Book of Mormon! And the training corpus for LLMs is many orders of magnitude more than a few hundred pages)

Perhaps an example/analogy would be helpful. Consider a spatial mapping as a network with nodes and strings between nodes. If the strings are only of moderate to low stretchiness, there is only one configuration in (let's say 2D) space that the network can manifest (i.e. correct placement of the nodes), based purely on the nodes and string length information, assuming a sufficiently large number of nodes and even a moderately non-sparse set of strings. That's what the AI learns, so to speak. However, if I now take a new node, disconnected, but still on the same plane, and ask the AI to do some basic reasoning about it, it will get confused. There's no point of reference, no string to lead to another node! Because it can only follow the strings, maybe even stop partway along a string, but it cannot "see" the space as an actual 2D map, generalized outside the bounds of the nodes. A proper world state understanding would have no problem with the same reasoning.
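As a toy version of that picture (my own illustration, not anything from the thread or a claim about how LLMs are actually trained): classical multidimensional scaling recovers a consistent 2D placement of nodes from nothing but the pairwise "string lengths", yet a new node with no strings attached to it simply cannot be placed.

```python
# Toy sketch of the nodes-and-strings analogy: recover 2D coordinates from
# pairwise distances alone via classical multidimensional scaling (MDS).
# Illustration only; no claim that LLMs literally do this.
import numpy as np

def classical_mds(dist: np.ndarray, dims: int = 2) -> np.ndarray:
    """Recover coordinates (up to rotation/reflection/translation) from a distance matrix."""
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (dist ** 2) @ J           # double-centered squared distances
    eigvals, eigvecs = np.linalg.eigh(B)
    top = np.argsort(eigvals)[::-1][:dims]   # largest eigenpairs span the embedding
    return eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0))

# Hypothetical "nodes" on a plane, known to us only through their distances.
true_xy = np.array([[0, 0], [3, 0], [3, 4], [0, 4], [1.5, 2]], dtype=float)
dist = np.linalg.norm(true_xy[:, None, :] - true_xy[None, :, :], axis=-1)
print(classical_mds(dist))  # matches true_xy up to rotation/reflection/translation
# A sixth node with no distances to the others has no row in `dist`, so there is
# nothing to place it with: the "map" exists only where strings exist.
```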

So on all those notes, your example does not match your claim at all.

Now I get what you're saying about how the semantic clouds might be the actual way brains work, and that might be true for some more abstract subjects or concepts, but as a general rule obviously spatial reasoning in humans is way, way more advanced than vague concept mapping, and LLMs definitively do not have that maturity. (Spatial reasoning in humans is obviously pretty solid, but time reasoning is actually kind of bad for humans, e.g. people being bad at remembering history dates and putting them in a larger framework, the fallibility of personal memory, and so on but that's kind of worth its own thought separate from our discussion). Also I should say that artificial neural networks are not brain neural networks in super important ways, so let's not get too carried away there. Ultimately, humans learn not only via factual association, but experimentation, and LLMs have literally zero method of learning from experimentation. At the moment, at least, they aren't auto-corrective by their very structure. Yes, I think there's a significant difference between that and the RLHF family. And again this is why I harp on "memory" so much as being perhaps a necessary piece of a more adaptable kind of intelligence, because that's doing a really big amount of heavy lifting as you get quite a variety of things both conscious and unconscious that manage to make it into "long term memory" from working memory - but with shortcuts and caches and stuff too along the way.

And again these are basics for most living things. I know it's a vision model, but did you at least glance at the video I linked above? The understanding is brittle. Now, you could argue that the models have a true understanding, but are held back by statistical associations that interfere with the emergent accurate reasoning (models commonly do things like flip left and right which IRL would never happen and is completely illogical, or in the video shapes change from circle to square), but to me that's a distinctly less likely scenario than the more obvious one, which also lines up with the machine learning field more broadly: generalization is hard, and it sucks, and the AI can't actually do it when the rubber hits the road with the kind of accuracy you'd expect if it actually generalized.

Of course it's admittedly a little difficult to tease out whether a model is doing badly for technical reasons or for general ones, and also difficult to tease out good out-of-sample generalization cases because the memorization is so good, but I think there is good reason to be skeptical of world-model claims about LLMs. So I'm open to this changing in the future, I'm definitely not closing the door, but where frontier models are at right now? Ehhhh, I don't think so. To be clear, as I said upthread, both experts and reasonable people disagree about whether we're seeing glimmers of true understanding/world models, or just really great statistical deduction. And to be even more clear, it's my opinion that the body of evidence is against it, but it's closer to a fact that your example of geospatial learning is not a good piece of evidence in favor, which is what I wanted to emphasize here.

Edit: Because I don't want to oversell the evidence against. There are some weird findings that cut both ways. Here's an interesting summary of some of them that cuts both ways without meaning to: for example, Claude, when adding two two-digit numbers, will say it follows the standard algorithm; I initially thought it would just memorize the answer; but it turns out that while both were probably factors, it's more likely Claude figured out the last digit exactly and then combined that thought-chain after the fact with an estimate of the approximate answer. Weird! Claude "plans ahead" for rhymes, too, but I find this a little weak. At any rate, you'd be well served by checking the Limitations sections, where it's clear that even a few seemingly slam-dunk examples have more uncertainty than you might think, for a wider array of reasons than you might think.
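To illustrate just the combination step being described (a toy of my own, not a claim about Claude's internals): if one path yields the exact last digit and another yields an estimate within a few units, the two together pin down the exact sum.

```python
# Toy illustration of combining an exact last-digit path with a coarse
# magnitude path; my own sketch, not a description of Claude's circuitry.
import random

def combine_paths(last_digit: int, estimate: int) -> int:
    """Return the unique number ending in last_digit among 10 candidates around the estimate."""
    for candidate in range(estimate - 5, estimate + 5):  # 10 consecutive integers
        if candidate % 10 == last_digit:
            return candidate

a, b = 36, 59
exact_last_digit = (a + b) % 10                 # exact low-order "path"
noisy_estimate = a + b + random.randint(-4, 4)  # deliberately imprecise magnitude "path"
assert combine_paths(exact_last_digit, noisy_estimate) == a + b
```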

It’s disproportionate, there’s no viable objective, Israel’s intention is to ethnically cleanse the land, and there’s no legitimate reason to be punishing the civilian populations by withholding aid or firing on civilians attempting to obtain aid.

It’s defensive.

It’s to no longer be attacked, to get back hostage bodies, and to be safe in the future.

It isn’t - if those Palestinians were instead Germans or Chinese, this wouldn’t be happening. It is happening because instead they have a death cult attacking them for decades at their doorstep.

The aid is constantly going to Hamas. Outside of very specific incidents, Israel is not firing on civilians. And certainly not on purpose or wholesale.

The Gazan population has grown.

Nothing you’re saying is accurate or true.

"…hospitals have admitted people in a state of severe exhaustion caused by a lack of food."

This is a warzone and you’re sending out anti-Israeli propaganda. That the place even has hospitals is amazing.

If you're going to lean so heavily on your credentials in robotics, then I agree with @rae or @SnapDragon that it's shameful to come in and be wrong, confidently and blatantly wrong, about things as elementary as the reasons LLMs struggle with arithmetic. I lack any formal qualifications in ML, but even a dummy like me can see that. The fact that you can't, let's just say it raises eyebrows.

False humility. :) I have ML-related credentials (and I could tell that @rae does too), but I think you know more than me about the practicalities of LLMs, from all your eager experimentation and perusing the literature. And after all, argument from authority is generally unwelcome on this forum, but this topic is one where it's particularly ill-suited.

What "expertise" can anybody really claim on questions like:

  • What is intelligence? (Or "general intelligence", if you prefer.)
  • How does intelligence emerge from a clump of neurons?
  • Why do humans have it and animals (mostly) don't?
  • Are LLMs "minds that don't fit the pattern", or are we just anthropomorphizing and getting fooled by ELIZA 2.0?
  • If yes, how does intelligence emerge from a bunch of floating-point computations?
  • If no, what practical limitations does that put on their capability?
  • What will the future upper limit of LLM capability be?
  • Can AI experience qualia? (Do you experience qualia?)
  • Does AI have moral worth? (Can it suffer?)

With a decent layman's understanding of the topic, non-programmers can debate these things just as well as I can. Modern AI has caused philosophical and technical questions to collide in a wholly unprecedented way. Exciting!

Cambodia and Thailand are fighting over disputed borders. Apparently, the International Court of Justice already ruled sixty years ago, and again ten years ago, that Cambodia is in the right, but Thailand has ignored those rulings.

It's not all bullshit.

Which half though?

I dug into Girard just a little bit because of his recent influence on important people, and came away with a strong condemnation of his entire process as incredibly moronic; I can't understand why he's given the time of day by otherwise intelligent people. "People's desires are influenced by their perception of what is desired by others" is not exactly a novel contribution to human psychology.

I can, in contrast, understand why Marx has had the influence he has had, in terms of his writings and in terms of the mechanics of the rise of the USSR.

I read Russell's A History of Western Philosophy in my early 20s and that did not help me here. Continentalists seem to get very mad at Analyticals misrepresenting them, without themselves having a consensus about what was "really" meant by any given thinker.

Maybe I missed something, but Light was not motivated by just a desire for power; especially at first, the idea seems to be that he only wants the Death Note to kill criminals, and he really doesn’t go after anyone else unless they’re trying to catch him or he needs to confuse L. It seems a bit more like the Death Note sort of takes over after a while, in the sense that the power goes to his head. I read Light mostly as a tragic story of hubris in which the power to destroy human life becomes the power to play God and remake everything into your vision of Justice.

L never came off that well in the story for me. He was just a guy who loved the mystery and found the whole thing to be a fascinating game. He had no moral reason to want to stop Light. He just wanted to catch Kira because it was a difficult case to solve.

If you exclude civilian ship crews, the total number of US civilian deaths in WWII is around 100, and in the single digits if you only count the states at the time (not then-territories like Alaska or Hawaii). British civilian deaths, despite the Blitz, were still pretty small compared to Germany's and Japan's. Civilian casualty ratios are a terrible metric unless you want to be an Axis (or Soviet) apologist.

Or perhaps your enemy is good at hiding amongst civilians, but bad at killing their opponents.

Keep in mind how many rockets were launched by Hamas from Gaza against Israel with the intent to kill civilians. Just looking at the deaths without considering the causation of the numbers leads to poor judgements. Context matters.

You can't assign immorality to the side with greater competence over a side with demonstrated malicious intent and a low success rate.

Let's put it another way. How many Israeli combatants died in the recent war with Iran? How many Iranian civilians?

Good luck dividing by zero.

I mean more long-game. I suppose my view of Palestine is colored by my run-ins with its propaganda and activist-industrial complex.

Yeah, but the hacker Anonymous runs them all.

Idk if I believe language possesses all the necessary info for a world model. I think humans interpret language through their world model, which might give us a bias towards seeing language like that. Just as with intelligence: humans are social creatures, so we view the mastery of language as a sign of intelligence. An LLM's apparent mastery of language gives people the feeling that it is intelligent. But that's a very anthropocentric conception of language, and one that is very biased towards how we evolved.

As for why some prominent AI scientists believe and others do not? I think some people definitely get wrapped up in visions and fantasies of grandeur. Which is advantageous when you need to sell an idea to a VC or someone with money, convince someone to work for you, etc. You need to believe it! That passion, that vision, is infectious. I think it's just orthogonal to reality and to what makes them great AI scientists.

This is facilely circular.

It really does seem like we’re seeing propaganda at work here.

Sam Harris (who lost his mind in 2016) calling Islam a death cult is the forever correct thing.

Until that religion is as neutered as present-day American Christianity, it should hold no place at the civilizational table.

ymeskhout

Wasn't he pretty clear about being tired of dealing with certain views that would simply not respond to evidence?

Less of a "personal" thing or "flameout" but in the same vein.

Just simplifying a bit, there's the whole thing about zones, and of course plenty of "interstate commerce" as it were. But ultimately the PA are in charge because the Israelis let them be in charge, as the zone system demonstrates with great clarity. Of course I'd still say that the Palestinians themselves should have more urgency in trying to reform or replace the PA with something better, we shouldn't let them off the hook, but the PA is far from a full-fledged state, even laying military matters aside. The Israelis have effective veto power over the broad strokes of what they do.

Israel's behaviour has taught a sizeable portion of goyim what the Jewish mindset is, and that the Jewish view on this is fundamentally incompatible with a Western mindset.

There’s never been a country at war going so softly.

Never have such pains been taken to avoid killing civilians.

What you think is honestly morally reprehensible - which is fine! It is what it is.

If any ‘goyim’ sees it this way, then it’s due to the MSM insane-washing Islam and Islamists.