TequilaMockingbird

Brown-skinned Fascist MAGA boot-licker

1 follower   follows 0 users   joined 2024 June 08 03:50:33 UTC

No bio...

User ID: 3097

But we don't exclude them, do we?

More like saying that the Soyuz rocket is propelled by expanding combustion gases, only for someone to pop in and say no, it's actually propelled by a mixture of kerosene and liquid oxygen. As I said in my reply below, you and @self_made_human are both talking about vector-based embedding like it's something that a couple of guys tried back in 2013 and nobody ever used again, rather than a methodology that would go on to become a de facto standard approach across multiple applications. You're acting like if you open up the source code for a transformer you aren't going to find loads of matrix math for doing vector transformations.
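To make that concrete, here is a minimal numpy sketch of a single attention head. The dimensions and weights are toy stand-ins (nothing here is from any particular model's actual source), but every line of it is exactly the kind of vector/matrix math I mean:

```python
import numpy as np

d_model, seq_len = 8, 4
rng = np.random.default_rng(0)

X = rng.normal(size=(seq_len, d_model))    # one embedding vector per token
W_q = rng.normal(size=(d_model, d_model))  # learned projection matrices
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))

Q, K, V = X @ W_q, X @ W_k, X @ W_v        # three vector transformations
scores = Q @ K.T / np.sqrt(d_model)        # dot products between token vectors
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
out = weights @ V                          # weighted sum of value vectors
```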

The old cliché about asking whether a submarine can swim is part of why I made a point of setting out my parameters at the beginning. How about you set out yours?

Well said.

C'mon dude. If this is the third draft of the essay, I really expect a more substantial rebuttal than this.

You misunderstand me. My response was not the third revision; it was the third attempt.

I don't know if you realize this, but you come across as extremely condescending and passive-aggressive in text. It really is quite infuriating. I would sit down, start crafting a response, and as I worked through your post I would just get more angry and frustrated until I got to the point where I'd have to step away from the computer lest I lose my temper and say something that would get me moderated.

And that illustration was wrong.

As I acknowledged in my reply to @Amadan, it would have been more accurate to say that it is part of why LLMs are bad at counting, but I am going to maintain that no, it is not "wrong". You and @rae are both talking about vector-based embedding like it's something that a couple of guys tried back in 2013 and nobody ever used again, rather than a methodology that would go on to become a de facto standard approach across multiple applications. You're acting like if you open up the source code for a transformer you aren't going to find loads of matrix math for doing vector transformations.

Why is the opinion of the "average American" the only standard by which to recognize AGI?

Why isn't it a valid standard? You are the one who's been accusing society of moving the goalposts on you. "The goalposts haven't actually moved" seems like a fairly reasonable rebuttal to me.

I had forgotten how much of your previous weak critique of the same evidence was based on naked credentialism. After all, you claimed:

I understand how my statements could be interpreted that way, but at the same time I am also one of the guys in my company who has been lobbying to drop degree requirements from hiring. I see myself as subscribing to the old hacker ethos of "show me the code". It's not about credentials; it's about whether you can produce tangible results.

The companies that spend hundreds of billions of dollars on AI are doing just fine.

For a given definition of "fine". I still think OpenAI and Anthropic are grifters more than they are engineers, but I guess we'll just have to see who gets there first.

As I have said in prior discussions of the topic, I fully believe that AGI is possible and even likely within my lifetime, but I am also deeply skeptical of the claims made by both AI boosters and AI doomers, for the reasons stated above.

The basic methodology is still widely used today; GPT-4o and DeepSeek-R1 are two modern examples.
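For illustration (sizes and token IDs made up, since the real tables are proprietary), the input layer of these models is still the same lookup-table-of-vectors idea that the 2013-era methods pioneered:

```python
import numpy as np

vocab_size, d_model = 50_000, 768  # toy numbers, not any real model's
rng = np.random.default_rng(1)
embedding_table = rng.normal(size=(vocab_size, d_model))

token_ids = [1037, 205, 9]            # whatever the tokenizer emitted (invented here)
vectors = embedding_table[token_ids]  # token -> dense vector, same trick as 2013
```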

Agriculture generates hundreds of billions in revenue, and is far more essential to the continuation of civilization than orangutans or LLMs are. Does that make grain, or the tools used to sow and harvest it, "intelligent" in your eyes? If not, please explain.

As for comparing like to like, GPT loses games of chess to an Atari 2600. Does that mean that rather than progressing, AI has actually devolved over the last 40 years?

In the interest of full disclosure, I've sat down to write a reply to you three times now, and the previous two times I ended up figuratively crumpling the reply up and throwing it away in frustration, because I'm getting the impression that you didn't actually read or try to engage with my post so much as skim it looking for nits to pick.

Your whole post is littered with asides like:

Calling them Large "Language" Models is a gross misnomer these days, when they accept not just text, but audio, images, video or even protein structure

When I had very explicitly stated "Now in actual practice these tokens can be anything, an image, an audio-clip, or a snippet of computer code, but for the purposes of this discussion I am going to assume that we are working with words/text."

and

Operations are not limited to dot products

When I had very explicitly stated that "Any operation that you might do on a vector" could now be done on the token. So on and so forth.
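And just so we're clear on what "any operation that you might do on a vector" means in practice, a quick sketch with random stand-in vectors (with real learned embeddings, this is where the famous king − man + woman ≈ queen result comes from):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

rng = np.random.default_rng(2)
king, man, woman = (rng.normal(size=64) for _ in range(3))  # stand-in embeddings

analogy = king - man + woman   # vector addition and subtraction
sim = cosine(analogy, king)    # dot product / cosine similarity
scaled = 0.5 * analogy         # scalar multiplication
```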

You go on a whole tangent trying to explain how I need to understand that people do not interact with the LLM directly, when I very explicitly stated that "most publicly available "LLMs" are not just an LLM. They are an LLM plus an additional interface layer that sits between the user and the actual language model."
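Schematically, the kind of interface layer I was describing looks something like this. Every name here is invented for illustration; it is a sketch of the architecture, not any vendor's actual code:

```python
SYSTEM_PROMPT = "You are a helpful assistant."  # hypothetical

def raw_model(prompt: str) -> str:
    """Stand-in for the bare language model."""
    return f"(completion for a {len(prompt)}-character prompt)"

def moderate(text: str) -> bool:
    """Stand-in content filter."""
    return "forbidden" in text.lower()

def assistant_reply(user_text: str) -> str:
    if moderate(user_text):                    # filter on the way in
        return "I can't help with that."
    prompt = f"{SYSTEM_PROMPT}\n{user_text}"   # prompt assembly the user never sees
    completion = raw_model(prompt)             # the actual LLM call
    return "[filtered]" if moderate(completion) else completion  # filter on the way out
```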

And trust me, I am fully aware that “Mary has 2 children” and “Mary has 1024 children” are empirically distinct claims, I don't need you to point that out to me. The point of the example was not to claim that the numbers 2 and 1024 are literally indistinguishable from each other. The point was to illustrate a common failure mode and explain why LLMs often struggle with relatively simple tasks like counting.
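You can demonstrate the failure mode for yourself with one of OpenAI's public tokenizers (requires `pip install tiktoken`; the exact splits depend on which encoding you pick):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for s in ["Mary has 2 children", "Mary has 1024 children"]:
    ids = enc.encode(s)
    print(s, "->", [enc.decode([i]) for i in ids])
# Both sentences collapse into a handful of token IDs pointing into an
# embedding table; the model never sees digits as countable objects.
```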

With that out of the way...
I find your "fish vs. birds" and "judging whales by their ability to climb trees" examples unconvincing, for the same reasons as @Amadan below.

In the post that the OP started as a reply to, you accused society of "moving the goalposts" on AI progress, but I disagree.

If you ask the average American about "AGI" or "AI risk", what are the images that come to mind? It's Skynet from The Terminator, Cortana from Halo, Data from Star Trek: TNG, the Replicants from Blade Runner, or GLaDOS from Portal. They, or something like them, are where the goalposts are and have been for the last century. What do they all have in common? Agentic behavior. It's what makes them characters and not just another computer. So yes, my definition of intelligence relies heavily on agentic behavior, and that is by design. Whether you are trying to build a full-on robot out of Asimov, or something substantially less ambitious like a self-driving car or an autonomous package sorter, agentic behavior is going to be a key deliverable. Accordingly, I would dismiss any definition of "intelligence" (artificial or otherwise) that did not include it as unfit for purpose.

You say things like "Saying an LLM is unintelligent because its weights are frozen is like saying a book is unintelligent," and I actually agree with that statement. No, a book is not "intelligent", and neither is a pocket calculator, even if it is demonstrably better at arithmetic than any human.

You keep claiming that my definition of "intelligence" is inadequate and is hobbling my understanding, but I get the impression that, in spite of this, I have a much clearer idea of both where we are and where we are trying to get to.

If you think you have a better solution, present it. As I said, one of the first steps to solving any practical engineering problem is to determine your parameters.

Moving on, the claim that LLMs "know" when they are lying or hallucinating is something you and I have discussed before. The claim manages to be trivially true while providing no actionable solution for reasons already described in the OP.

The LessWrong stuff is not even wrong, and I find it astonishingly naive of you to assume that the simple human preference for truth is any match for Lorem Epsom. To volley one of your own favorite retorts back at you: "Have you met people?"

Perhaps it would've been more accurate of me to say "This is part of the reason why LLMs have such difficulty counting..."

But even if you configure your model to treat each individual character as its own token, it is still going to struggle with counting and other basic mathematical operations in large part for the reasons I describe.

I'm not sure it's fair to say it "destroys" anything, but it certainly fails to capture certain sorts of things, and in the end the result is the same.

A lot of the frustration I've experienced stems from these sorts of issues, where some guy who spends more time writing for his Substack than he does writing code dismisses issues such as those described in the section on Lorem Epsom as trivialities that will soon be rendered moot by Moore's Law. No bro, they won't. If you're serious about "AI alignment", solving those sorts of issues is going to be something like 90% of the actual work.

As for the "foom" scenario, I am extremely skeptical, but I could also be wrong.

The hilarious thing about this for me is that I have literally used "you ask the LLM to 'minimize resource utilization' and it deletes your repo" as an example in training for new hires.

...and this, children, is why you need to be mindful of your prompts, and why pushes to "master" require multi-factor authentication.
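For the curious, the gist of that second lesson as a hypothetical server-side pre-receive hook. The stdin format is Git's standard hook convention, but the `REVIEW_APPROVED` gate is invented purely for illustration:

```python
#!/usr/bin/env python3
import os
import sys

PROTECTED = "refs/heads/master"

for line in sys.stdin:  # Git feeds the hook "<old-sha> <new-sha> <ref>" lines
    _old, _new, ref = line.split()
    if ref == PROTECTED and os.environ.get("REVIEW_APPROVED") != "1":
        print(f"Direct pushes to {PROTECTED} are blocked; open a PR.", file=sys.stderr)
        sys.exit(1)  # non-zero exit makes Git reject the push
```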

Oook?

It's not that Marx necessarily supported Wokeism so much as that the Woke copied the Marxists' homework and flipped a few of the words around in the hope that the teacher wouldn't notice. The identitarian left literally used to describe their ideology as "Cultural Marxism" back in the '90s.

Also, ending with an "In conclusion..." paragraph will make people assume you used AI in your writing.

Ironic given the context.

Apologies if self-promotion is frowned upon, but I'm posting this here for visibility and because it started as a reply to @self_made_human's post on AI assistants from last week's culture-war thread.

Is your "AI Assistant" smarter than an Orangutan? A practical engineering assessment

Trump isn't a Clinton Democrat so much as he is a '90s-era labor-oriented centrist, i.e., the sort of "old Democrat" that the Clintons and their "New Democrat" coalition displaced.

Americans do not want to do it...

...for the wages being offered

That's my point.

I think you're getting too hung up on the "rural" component and not paying enough attention to the economic one.

Thank you for articulating what I was thinking but likely would have gotten banned for saying, because I would've been a lot less articulate or polite in doing so.

Is that something you see happening?

AlexanderTurok, you claim that you are "anti-third-worldism", but if that is true, why have you consistently aligned yourself with those who are trying to make the US more like a third-world country, against those who want to make it great?

It wasn't MAGA that turned San Francisco into a feces-strewn open-air drug market. It wasn't MAGA that worked behind the scenes to put a dementia patient in the White House. And it is not MAGA that has been marching in solidarity with Hamas, shooting at federal officers, or trying to put a Communist in Gracie Mansion. It is your "Elite Human Capital" doing all of that.

The whole "immigrants are just doing the jobs Americans won't do" line is a blatant lie. There is no industry in these United States where the majority of workers are illegal/undocumented, not even seasonal agriculture at the height of the Biden surge. The truth is "Americans don't want to do those jobs for those wages", and that is what this is (and has always been) about: wages. The plantation owners don't want to pay the help, and once again the Democrats (who have always been the party of the plantation owners) are threatening civil war if they are not allowed to continue importing and exploiting their non-citizen underclass.

Whose narrative?

The observation that BLM protests are often whiter than the police departments they're protesting is a running joke in southern states.

Define "we"

You're putting far too much into your interpretation of what I initially said. That's the polite way to put it, because there's a lot of putting words in my mouth going on.

Again, if you feel that I have been uncharitable, perhaps you should take a moment, because all I did was volley your own argument (almost word for word) right back at you.

My point is clearly that humans, even the "best" humans, aren't immune to the same accusation.

And this is supposed to be an argument for trusting AI over human judgment? It seems to me that you are doing the inverse of what you accused me of doing: arguing that because humans are less than 100% reliable, they must be useless.

What does you being a math nerd have to do with anything?

Because it means being prone to a certain sort of thought process where you examine every assumption and follow every assertion to its conclusion.

Modern electronics are some of the most robust and error-resistant physical devices to ever exist,

This claim is simply false. I've worked with legacy electronics, and there is no comparison. Modern electronics are nowhere near as robust or fault-tolerant; they are just light enough and cheap enough that providing multiple redundancy is reasonable by comparison.

You claim LLMs are "unreliable by design". This is a misunderstanding of what they are.

No, it is a description of how they work: the essence of the Epsom-vs-Knuthian approach described in the video essay I was referring to.

Meanwhile, you are still not engaging with my point. You have given no indication that you care about accuracy or reliability; instead (by choosing to use the trick calculator over doing the math yourself) you have strongly implied that you do not care about such things at all.

I hate it because it is rhetorically equivalent to the old "...and you are still lynching niggers". It's not an explanation or an excuse, nor does it address the issue being raised; it is a deflection and a put-down.

I find the appeal to hypocrisy not only uncompelling but actively off-putting, as the hypocrite at least acknowledges that they are in the wrong.