What is it supposed to prove? It is supposed to prove that the economy under Biden was not all sunshine and cute woodland creatures. That the official definition of recession used by the NBER, the Fed, et al. was revised from "more than three consecutive months of negative growth" to "more than one full calendar quarter" the same week Biden would've crossed the old three-month threshold.
That is, the definition was revised such that declining growth for the last two months of Q2 plus the next two months of Q3 would not count, because even though together they constitute four consecutive months of negative growth (a quarter being three months long), neither two-month block on its own would constitute "more than a full calendar quarter".
It wasn't a story for the same reason Trump's tariffs are a story.
Only if you believe that markets are universally rational and efficient in practice, not just in theory.
The best security is still a locked door and an armed guard where only a few people have the key, and the guard is not one of them.
Hanania occupies the same memetic ecological niche as guys like David French.
His role is to write things that flatter the sensibilities and biases of the priestly caste from an ostensibly outside view so that his readers can then pat themselves on the back for being so reasonable, reassuring themselves that their opponents are indeed stupid and don't have any legitimate concerns by telling themselves, "I listen to people with outside views and they agree with me on all the important things. We even boo the same outgroup!"
In this case the outgroup is the Tea-Party/MAGA right, to which Hanania is opposed, and people who don't trust "the media" or "the experts", classes of which Hanania is a member.
His characterization of Rogan is particularly egregious. Rogan wasn't "radicalized" so much as he was told to "get the fuck out" by all the "serious people". He did, and he took a lot of his listeners with him. Now all the "serious people" like Hanania are shocked and appalled to see former Bernie-Bros walking around wearing MAGA hats, because while they were berating them, the populist right was telling them "pull up a stool and watch the game with us". The same thing happened with RFK Jr.
So we're just throwing the spring and summer of 2022 down the memory-hole are we?
I do not believe you.
I believe that to the degree that substitution might be difficult it is difficult because affluent blue tribers on the coast want it to be difficult, and actively work to make it so.
I believe that "They are doing jobs Americans wont" is code for "I don't think I should have to pay 'the help' the going rate" and "I don't want an emplyee with legal rights and delusions of equality, I want a serf I can exploit"
Finally i beleive that @coffee_enjoyer is correct that roofing companies will start increasing the wages and quality of life they offer thier employees before we go without roofs. If they don't, screw 'em.
As @Primaprimaprima observes, I think a moral judgment towards those unwilling or too lazy to support themselves is one of the distinguishing features of Tea-Party/MAGA right. That there is dignity and virtue in hard work and doing the needful but dirty job. That the slothful degenerate should be either pittied or whipped into shape rather than catered to. The "Gods of the Copybook Headings" are real and walk among us.
Also, it seems I have been blocked by this user. That's news to me.
60% chance of recession seems far too low given that the priestly caste is fully capable of manipulating the definition of "recession" to punish Trump just as they did to protect Biden.
For all intents and purposes, yes. This (among other things) is what I voted for.
Democrats' rhetoric surrounding immigration and wages has always stood out to me as an obvious example of politically-motivated doublethink. "The experts" are asking us to hold two contradictory axioms simultaneously. One is that maintaining a supply of "off the books" labor is essential to the survival of multiple industries (such as roofing and agriculture) and that ideally we should be increasing the supply of labor to reduce costs (i.e. wages) even further. The other is that the available supply of labor has little if any effect on wages (i.e. costs).
This allows the Democrat to maintain a smug confidence in their own intellectual and class superiority by convincing themselves that the working class only opposes immigration because they are a bunch of ignorant racist hicks who do not understand the nuances of economic theory and have been "tricked" by men like Trump into voting against their own interests, rather than people with legitimate grievances and concerns who don't like seeing their wages undercut and their culture denigrated.
I also agree that the moral judgment towards willingness to work and "going without" is one of the core ideological differences between the Tea-Party/MAGA right and other political factions within the US.
I'm confused.
Why would you expect the EU-based management of Stellantis to be in favor of US tariffs? (Or Trump for that matter?) I would expect them to be opposed.
And you believed them when they said this is specifically about tariffs, and definitely not because of volatility in the stock market or because this Friday is the start of a new quarter?
This is "optics" pure and simple.
900 employees in a company of over 200,000 is less than 0.45% of the workforce.
A company the size of Stellantis picks up or lets go of a thousand employees a month in the course of normal operations, but Reuters isn't about to let that distract them from the important story of "Orange Man Bad".
I think you might need to watch or read The Big Short.
Because the collection and tokenization of reference material is currently a significant bottleneck. The democratization of it, or the ability to do so organically, would make a number of different approaches substantially more feasible. It also introduces the possibility of a Jobs or Zuckerberg type bootstrapping an AI in their garage.
Anthropic is a Silicon Valley start-up, currently seeking investors, that was spun out of OpenAI by friends of Sam Bankman-Fried.
From this we can infer things about the motives, politics, ethics, and thought processes of the founders/upper management. I think that a heavy dose of skepticism is warranted towards any claims they make, especially when said claim is regarding something they are trying to get you to invest in.
I skimmed the studies you linked, and while the first makes the strongest case, it is also the weakest version of the claim that an LLM "knows when it's lying".
That "LLMs' internal states encode information regarding the truthfulness of their outputs, and that this information can be utilized to detect errors" is trivially true, but I would argue that the use of the word "truthfulness" here is in error. What the authors are actually discussing in this study are the confidence scores generated as part of the generative/inference process. The analysis and use of confidence scores to try to reduce hallucinations/error rates is not a novel insight or approach; it is almost as old as machine learning itself.
As such, I took the liberty of looking into the names associated with your 3 studies and managed to positively identify the professional profiles of 10 of them. Of those 10, none appear to hold any patents in the US or EU or have their names associated with any significant projects. Only 3 appear to have done much (if any) work outside of academia at the time the linked study was posted. Of those 3, only 1 stood out to me as having notable experience or technical chops. Accordingly, I am reasonably confident that I know more about this topic than the people writing or reviewing those studies.
There may be a difference between hallucination and imagination in humans, but I assure you that no such difference exists within the context of an LLM. When you examine the raw output of the generative model (i.e. what the algorithm is generating, not what is presented to the consumer), "hallucination rates" and "creativity" are almost 100% correlated. This is because "creative decisions" and "hallucinations" in a regression model are both essentially deviations from the training corpus, and the degree of deviance you're prepared to accept is a key consideration in both the design and evaluation of an ML algorithm.
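A toy illustration of that knob (the logits are made up, and this is a sketch, not anyone's production sampler): the same temperature parameter that buys you "creativity" buys you "hallucination", because both are just sampling further from the mode.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_token(logits, temperature):
    """Temperature scaling: one knob controls how far sampling may
    deviate from the highest-correlation ('most faithful') token."""
    if temperature == 0:
        return int(np.argmax(logits))       # greedy: zero deviation
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())   # softmax, numerically stable
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

logits = np.array([3.0, 1.0, 0.2, -1.0])    # invented scores for 4 candidate tokens
for t in (0, 0.7, 1.5):
    picks = [sample_token(logits, t) for _ in range(10)]
    print(t, picks)  # higher temperature -> more non-argmax picks
```

There is no separate "hallucinate" switch: turn the deviation down far enough to kill the errors and you also kill the novelty people are paying for.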
I encourage anyone who is sincerely interested in this topic to watch this video. The whole thing is excellent, but for those with limited time/attention the specific portion relevant to this thread runs from 8 minutes 23 seconds to just over the 17 minute mark.
"What does solving the hallucination looks like?" is a very good question. A major component of the problem is defining the boundaries of what constitutes "an error" and then what constitutes an acceptable error rate. Only then can you begin to think about whether or not that standard has been met and the problem "solved".
Sumarily the answer to that question is going to look very different depending on the use case. The requirements of the average white-collar office-drone looking to translate a news article, are going to be very different from the requirements of a cyber-security professional at a financial institution, or an industrialist looking to automate portions of thier process.
When I'm giving my intake speach to interns and new hires I talk about "the 9 nines". That is that in order to have a minimally viable product we must meet or exceed the standards of "baseline human performance" with 99.9 999 999% reliability. Imagine a test with a billion questions where one additional incorrect answer means a failing grade.
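For anyone who wants the arithmetic spelled out (it is trivial, but it makes the bar vivid):

```python
# "9 nines" in concrete terms: nine nines of reliability leaves an
# error budget of one miss per billion operations.
nines = 9
error_budget = 10 ** -nines      # 1e-09
questions = 10 ** 9              # the billion-question test
print(questions * error_budget)  # 1.0 -- a single extra wrong answer fails
```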
In this context "Humans also hallucinate" is just not an excuse. Think about how many "visual operations" a person typically performs in the process of going about thier day. Ask yourself how many cars on your comute this afternoon, or words in this comment thread have you halucinated? A dozen? None? I you think you are sure, are you "9 nines" sure?
A lot of the current refinement and itteration work on generative machine learning models revolves around adding layers of checks to catch the most egregious errors (not unlike as with humans as you observed) and giving users the ability to "steer" them down one path or another. While this represents an improvement over the previous generation such solutions are difficult/expensive to scale and actively deleterious to autonomy. The thinking being that "a robot" that requires a full-time babysitter might as well be an employee. This is why you can't buy a self-driving car yet.
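For concreteness, the check-layer pattern looks roughly like this. A sketch only: `generate` and the validators are stand-ins for whatever model and rules a given shop actually uses.

```python
def generate_with_checks(prompt, generate, validators, max_retries=3):
    """The 'bolt a babysitter on top' pattern: draft an answer, run it
    past a stack of cheaper checks, and regenerate on failure."""
    for _ in range(max_retries):
        draft = generate(prompt)
        failures = [name for name, check in validators if not check(draft)]
        if not failures:
            return draft
        # 'steering': feed the failures back into the next attempt
        prompt = f"{prompt}\n(The previous draft failed: {', '.join(failures)})"
    return None  # out of retries -- escalate to the human babysitter
```

Every retry is another full (expensive) generation pass, and the fallback is still a human, which is the scaling problem in a nutshell.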
Tokenization is cheap compared to actually evaluating the LLM.
Processing tokens is cheap. Generating tokens is expensive.
Evaluating a model can range from relatively cheap to cripplingly expensive depending on the metrics chosen and level of rigor required.
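Back-of-envelope numbers show why. All figures here are invented for illustration, assuming a hypothetical 7B-parameter model at roughly 2 FLOPs per parameter per token:

```python
PARAMS = 7e9
FLOPS_PER_TOKEN = 2 * PARAMS   # rough rule of thumb for one forward pass

def tokenize_cost(n):
    # BPE-style string matching: plain CPU work, no network involved
    return n * 1e3               # ~1e3 ops per token (very rough)

def process_cost(n):
    # prompt tokens run through the network in one parallelizable batch
    return n * FLOPS_PER_TOKEN

def generate_cost(n):
    # each output token requires its own *sequential* forward pass
    return n * FLOPS_PER_TOKEN

n = 1_000
print(f"tokenize: {tokenize_cost(n):.0e} CPU ops")
print(f"process prompt: {process_cost(n):.0e} FLOPs, one batch")
print(f"generate reply: {generate_cost(n):.0e} FLOPs, {n} serial passes")
```

The FLOP counts for processing and generating match, but the serial dependency in generation is what you actually pay for in wall-clock time and hardware.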
This is wrong! It would have been a reasonable claim to make a few years back, but we know for a fact this isn't true now.
Sorry, I meant to reply to @yunyun333's comment about "doubling down", but I can assure you that we do not "know that for a fact", and I feel the need to caution you against believing everything you read in the marketing materials.
The "hallucination problem" cannot realistically be "solved" within the context of regression-based generative models, as the "hallucinations" are an emergent property of the mechanisms upon which those models function.
A model that doesn't hallucinate doesn't turn your vacation pictures into a Hayao Miyazaki frame either, and the latter is where the money and publicity are.
Developers can adjust the degree of hallucination up or down and tack additional models, interfaces, and layers on top to smooth over the worst offenses, as Altman and co. continue to do, but the fundamental nature of this problem is why many people who are seriously invested in machine learning/autonomy dismiss models like GPT (from which Claude is derived) as an evolutionary dead-end.
This happens because while the model has been programmed by some clever sod to apologize when told that it is wrong, it doesn't actually have a concept of "right" or "wrong", just tokens with different correlation scores.
Unless you explicitly tell/program it to exclude the specific mistake(s) it made from future iterations (a feature typically unavailable in current LLMs without a premium account), it will not only continue to make those mistakes but "double down" on them, because whatever most correlates with the training data must, by definition, be correct.
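Mechanically, "excluding the mistake" is nothing deeper than masking the offending tokens at sampling time. A toy sketch (the weights, and hence the correlation scores, are untouched):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_with_ban(logits, banned_ids):
    """Crude mistake-exclusion: force the banned tokens' probability to
    zero. Nothing has been 'learned' -- one symptom has been forbidden."""
    logits = np.asarray(logits, dtype=float).copy()
    logits[banned_ids] = -np.inf            # banned tokens can never win
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

logits = [2.5, 2.4, 0.1]        # token 0 is the recurring mistake
print(sample_with_ban(logits, banned_ids=[0]))  # only 1 or 2 can now appear
```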
It's not a proof, and in order to be "a bluff" there would have to have been an intent to deceive.
Last week @2rafa asked "When will the AI penny drop?" and the answer I would have liked to give at the time was "when the footprint of a decent tokenizer gets small enough to run organically" or "when the equipment available to the hobbyist and semi-pro community catches up with the requirements of a decent tokenizer".
Until that happens specific questions will be doomed to be answered unspecifically.
The broad consensus (which I agree with) within the robotics and machine learning communities is that the existing generative models are ill-suited for any task requiring autonomy or rigor, and that this is not a problem that can be fixed by throwing more FLOPs at it.
What is the internet for if not "bullshitting around"?
Not to hand, sorry.
Are they?