TequilaMockingbird
Brown-skinned Fascist MAGA boot-licker
No bio...
User ID: 3097
There is a whole genre of portable predictive models from companies like Raytheon, Nvidia, IBM, L3Harris, et al. But they rarely get discussed because they are not flashy or accessible in the way that LLMs like ChatGPT are. You have to buy the license, download the model, and then train it yourself. But these models increasingly represent the foundational infrastructure behind things like this
I imagine there is a supervising algorithms engineer somewhere who is torn between finding this absolutely hilarious and cursing the suits for listening to the wordcels in marketing over him.
Imagine a trick abacus where the beads move on their own via some pseudorandom process, or a pocket calculator where the digits are only guaranteed to within a +/- 1 range. I.e. you plug in "243 + 67 =" and more often than not you get the correct answer "310", but you might just as well get the answer "320", "311", or "410". After all, the difference between all of those numbers is very small. Only one digit, and that digit is only off by one.
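For concreteness, here is a minimal Python sketch of such a trick calculator (the function name and the 30% error rate are made up purely for illustration):

```python
import random

def trick_add(a, b, error_rate=0.3):
    """Return a + b, but occasionally nudge one digit of the result by +/- 1."""
    result = a + b
    if random.random() >= error_rate:
        return result                      # most of the time: the correct answer
    digits = list(str(result))
    i = random.randrange(len(digits))      # pick a digit position at random
    d = int(digits[i])
    delta = random.choice([-1, 1])
    if not 0 <= d + delta <= 9:            # keep the digit within 0-9
        delta = -delta
    digits[i] = str(d + delta)
    return int("".join(digits))

# e.g. trick_add(243, 67) usually returns 310, but sometimes 320, 311, 410, ...
print([trick_add(243, 67) for _ in range(10)])
```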
Now imagine you work in a field where numbers are important and lives depend on getting the math right. Or maybe you're just doing your taxes, and the government is going to ruin you if the accounts don't add up.
Are you going to use the trick calculator? If not, why not?
As a math nerd I seriously despise this line of argument as it ultimately reduces to a fully generalized argument against "true", "false", and "accuracy" as meaningful concepts.
Stuff like this is why I roll my eyes when I see junior programmers complaining online about how their stupid employer won't let them use the latest AI tools/models.
There are often very good reasons that they don't want you to be using those tools.
I see you.
Can you elaborate on what you think words like "read", "searches", and "know" mean in this context? I'm not asking just to be pedantic; how you think about this question informs how you approach algorithmic behavior.
Edit: if that is a bit too abstract, instead try to explain why you believe that the algo "knows" which claims are likely spurious, and then explain why you would expect that to have any influence on the algorithm's output.
For anyone who is sincerely interested in the topic, I strongly recommend Tom Murphy VII's video essays, particularly Badness = 0 as a primer on the technical challenges and not just for the excellent "alignment" meta joke.
The portion about Lorem Epsom and Donald Knuth is particularly relevant when discussing publicly available LLMs like GPT, Gemini, and DeepSeek.
Again, it's not "naive", it is generating an average. If the bulk of the tokenized training data related to your prompt is press releases, the response is going to reflect the press releases. Whether those press releases are true or false doesn't enter into the equation. This is expected.
Common, well-publicized problems have common, well-publicized solutions. If your training data consists of ninety-something percent correct answers and the remainder garbage, you will get a ninety-something percent solution.
As I said above, Gemini is not reasoning or naive, it is computing an average. Now, as much as I may seem down on LLMs, I am not. I may not believe that they represent a viable path towards AGI, but that doesn't mean they are without use. The rapid collation of related tokens has an obvious "killer app", and that app is translation, be that between spoken languages or programming languages.
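To make the "averaging" point concrete, here is a toy sketch of a degenerate one-step "language model" that simply samples continuations in proportion to their frequency in the training data (the corpus and the 90/10 split are invented purely for illustration):

```python
import random
from collections import Counter

# Toy "training corpus": what tends to follow the prompt in the data,
# not what is actually true. Assume 90% press-release boilerplate, 10% not.
continuations = ["the product is revolutionary"] * 90 + ["the product failed testing"] * 10

def generate(samples=1000):
    counts = Counter(continuations)
    tokens, weights = zip(*counts.items())
    # Sample proportionally to frequency in the training data -- an "average".
    # Truth never appears as a variable anywhere in this step.
    return Counter(random.choices(tokens, weights=weights, k=samples))

print(generate())  # roughly 900 "revolutionary" to 100 "failed testing"
```

Whether the majority continuation happens to be true or false does not change the output distribution at all; only its frequency in the data does.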
https://www.themotte.org/post/1160/culture-war-roundup-for-the-week/249920?context=8#context
They are, but the latest predictive models are a completely separate evolutionary branch from LLMs.
I believe that AGI is possible and is likely to happen, but I also believe that Sam Altman is an inveterate grifter and that generative large language models are (for the most part) an evolutionary dead end.
And my point is that anyone who was remotely intelligent and vaguely familiar with both the internet and how LLMs function ought to have anticipated this.
The OP is the kind of person who is surprised when "Boaty McBoatface" wins the online naming poll.
It's not "naive" it's generating an average. If your training data is full of extraneous material (or otherwise insufficiently tokenized/vetted) your response will also be full of extraneous material, and again its not rationalizing it's averaging.
At the risk of a self-dox, I have an advanced degree in Applied Math, and multiple published papers and patents related to the use of machine learning in robotics and signal processing. I was introduced to the rationalist community through a mutual friend in the SCA and was initially excited by the opportunity to discuss the philosophical and engineering challenges of developing artificial intelligence. However, as time went on I largely gave up trying to discuss AI with people outside the industry, as it became increasingly apparent to me that most rationalists were more interested in using AI as a conceptual vehicle to push their particular brand of Silicon Valley woo than they were in the aforementioned philosophical and engineering challenges.
The reason I don't talk about it is, in large part, that I find it difficult to speak honestly without sounding uncharitable. I believe that the "wordcels" take these bots seriously because they naturally associate "the ability to string words together" with intent/sentience, while simultaneously lacking sufficient background knowledge and/or understanding of algorithmic behavior to recognize that everything the OP describes lies well within the bounds of expected behavior. See the post from a few weeks ago where people thought that GPT was engaged in "code-switching". What the layman interprets as intent is, to the mathematician, the functional output of the equation as described.
The obvious question to ask is: if the Serbians successfully shot down a B-2 (or a second F-117), why weren't they plastering pictures of the wreckage all over the media the way they did with the first F-117? The simplest explanation would seem to be that they didn't actually down the aircraft in question.
The Serbians probably fired a missile, saw an explosion, and assumed that meant a kill, when in reality the aircraft in question made it safely back to base with a few "sparrows" in the wing.
If a "genocide" is still ongoing after 4 generations it's not much of a "genocide" is it?
"Have not" doesn't mean they will not.
No, that is not the lesson; there is nothing to fear from a "low performer". What you need to fear is the person or group you dismissed as low-performing but who has the potential not to be, because if you fuck them over there is a good chance they'll fuck you back, and you will deserve it.
Given the two attempts at invasion/annexation in 10 years (one of them ongoing), it seems reasonable that the Ukrainians would not want ZZ-niks working in their country, voting in their elections, etc.
Remember that we are talking about naturalization here, i.e. whether or not we let a person in and, once in, how much of an obligation there is to let them stay.
What MAGA was/is against is yet more ongoing foreign entanglements consuming blood and treasure for little gain. See the US's recent experience in Afghanistan, Iraq, Libya, Syria, Gaza, et al.
A quick surgical strike followed almost immediately by a negotiated peace is pretty much the exact opposite of an ongoing entanglement.
You don't even have to be pro-Trump, you just have to be pro-'Murica. A bridge that Democrats are increasingly loath to cross. Hence the whole 1619 Project and the endless thinkpieces about how America isn't exceptional.
Yes, but if they'd admitted to being a Nazi, they wouldn't have been naturalized.
Possibly, probably even, and the HAMASniks would likely have been (or at least ought to have been) denied entry if they had gone into their naturalization hearing chanting "death to America" and "globalize the intifada".
"Have you ever aligned yourself with an enemy of the United States? If so, explain the circumstances." is exactly the sort of question we ought to be asking someone before letting them in.
You did not say "no", as such I find it disingenuous of you to suddenly back-pedal and claim to care about reliability after the fact.
Buddy, have you seen humans?
Humans are unreliable. You are a human, are you not? You have not given any indication that you care about accuracy or reliability, and instead (by choosing to use the trick calculator over doing the math yourself) you have strongly implied that you do not care about such things.
Now, if you feel that I've been unfairly dismissive, antagonistic, or uncharitable in my response towards you, then perhaps you might begin to grasp why I hate the whole "bUt HuMaNs ArE fAlLiBlE ToO UwU" argument with such a passion. I'm not claiming that LLMs are unreliable merely because they are "less than perfect"; I am claiming that they are unreliable by design. I know it's long, but seriously, watch the video essay on Badness = 0 that I posted upthread. It is highly relevant to this conversation.