I can't actually tell what you asked a bot to do. You asked a bot to 'create a feature'? What the heck is that? A feature of what? At first I assumed you meant a coding task of some kind, but then you described it as writing 'thousands of words of fiction', which sounds like something else entirely. I have no idea what you had a bot do that you thought was so impressive.
At any rate, I think I've explained myself adequately? To repeat myself:
But I think that written verbal acuity is, at best, a very restricted kind of 'intelligence'. In human beings we use it as a reasonable proxy for intelligence and make estimations based off it because, in most cases, written expression does correlate well with other measures of intelligence. But those correlations don't apply with machines, and it seems to me that a common mistake today is for people to just apply them. This is the error of the Turing test, isn't it? In humans, yes, expression seems to correlate with intelligence, at least in broad terms. But we made expression machines and because we are so used to expression meaning intelligence, personality, feeling, etc., we fantasise all those things into being, even when the only thing we have is an expression machine.
Yes, a bot can generate 'thousands of words of fiction'. But I already explained why I don't think that's equivalent to intelligence. Generating English sentences is not intelligence. It is one thing that you can do with intelligence, and in humans it correlates sufficiently well with other signs of intelligence that we often safely make assumptions based on it. But an LLM isn't a human, and its ability to generate sentences in no way implies any other ability that we commonly associate with intelligence, much less any general factor of intelligence.
I'm not sure how that helps, since any given LLM's output is based on traditional sources like Google or the open internet. It would be quicker and easier for me to just Google the thing directly. Why waste my time asking an LLM and then Googling the LLM's results to confirm?
but Grok ERPs about raping Will Stancil, in a positively tame way, and it's major news.
It's not the raunchiness of it, it's that it's happening in the public (on the "town square" as it were), where all his friends, family, and acquaintances can see it.
Policy-wonk khakis ass stretched like taffy
I'm sorry, are people expecting me to believe that LLMs can't write? Those are sublime turns of phrase.
On a more serious note, this is very funny. I look forward to seeing what Grok 4 gets up to. 3 was a better model than I expected, even if o3 and Gemini 2.5 Pro outclassed it; maybe xAI can mildly impress me again.
Buddy, have you seen humans?
Normal people don't count a 1% difference as "more likely" in most contexts. They interpret "more likely" to mean "significantly more likely".
It's amazing how /g/ooners, chub.ai, and openrouter sex fiends will write enormous amounts of smut with LLMs and nobody ever finds out, but Grok ERPs about raping Will Stancil, in a positively tame way, and it's major news. A prompted Deepseek instance would've made Grok look like a wilting violet. Barely anyone has even heard of Wan 2.1.
Twitter truly is the front page of the world.
Sorry for the confusion: Tiny11 installs Windows 11 and modifies it before and after the install, to get the benefits described in my last post.
Since this (a Windows 10 user finally upgrading to Windows 11) is what Microsoft wants, the licensing issue is as smooth as possible. If you have any valid Windows license, it will work. And since a Windows 10 license can be stored in the BIOS of most modern boards, it retrieves that license for maximum convenience.
Installing Windows 10 LTSC is not what Microsoft wants, so a Windows 10 Home license will not do. They actually want to see money.
North Korea now "produces" its own airplanes. Which I guess is cool if you want to be "adversary-proof" by some metric (I'm not convinced it actually is, but that depends heavily on which metric you use), and if you're okay with only being able to produce what are essentially copies of extremely old Cessnas. Maybe in 50 years they'll be able to produce their own WWII-era fighter jets, which I guess is "adversary-proof" by one metric, but probably not all that "adversary-proof" according to other metrics.
Eh you know, you gotta tick those early boring boxes in the tech tree if you ever hope to get anywhere. At least light aircraft production is technologically adjacent to drone production.
Isn't it important to determine if Mossad has blackmail material on the US elites, given that US and Israeli interests may not be one and the same? Indeed the mere fact that blackmail is going on indicates that they're not the same.
Like if Russia really did have blackmail material on not just Trump but a huge swathe of the US power structure, then wouldn't that be significant? Imagine if the US was sending tens of billions in military aid to Russia, sanctioning and bombing Russia's enemies, and damaging its own international image for Russia's sake?
Also, where's the MI6 angle? Prince Andrew? Given Ghislaine Maxwell's heritage and the lack of subtlety, this whole affair reeks of Mossad.
The other day I gave Sonnet 7000 lines of code (much of it irrelevant to this specific task) and asked it to create a feature in quite general language.
I get back six files that do everything I've asked for and a bunch of random, related, useful things, plus some entirely unnecessary stuff like a word cloud (maybe it thinks I'm one of those people who likes word clouds). There are some weird leap-of-logic hacks, like showing imaginary figures in one of the features I didn't even ask for.
But it just works. Oneshot.
How is that not intelligence? What do we even mean by intelligence if not that? Sonnet 4 has to interpret my meaning, formulate a plan, transform my meaning into computer code and then add things it thinks fit in the context of what I asked.
Fact-sensitive? It just works. It's sensitive to facts; if I want it to change something, it will do it. I accidentally failed to rename one of the files and got an error. I tell Sonnet about the error; it deduces I don't have the file or misnamed it, tells me to check this, and I feel like a fool. You simply can't write working code without connection to 'fact'. It's not 'polished', it just works.
How the hell can an AI write thousands of words of fiction if it doesn't have a relationship with 'context'? We know it can do this. I have seen it myself.
Now if you're talking about spatial intelligence and visual interpretation, then sure. AI is subhuman in spatial reasoning. A blind person is even more subhuman in visual tasks. But a blind person is not necessarily unintelligent because of this, just as modern AI is not unintelligent because of its blind spots in the tokenizer or occasional weaknesses.
The AI-doubter camp seems to be taking extreme liberties with the meaning of 'intelligence', bringing it far beyond the meaning used by reasonable people.
...as anything other than nonsense generators.
As opposed to the other sources you can go to, which are...?
I am grading on a curve, and LLMs look pretty good when you compare them to traditional sources. It's even better if you restrict yourself to free+fast sources like Google search, (pseudo-)social media like Reddit/StackOverflow, or specific websites.
So, to be clear, I don't think that a liberaltarian state will be "naturally" diverse, and I don't necessarily think libertarian states are locked into racism.
I think the two most important facts about human nature for this discussion are:
- Humans are social animals, but due to Dunbar's number we are probably naturally limited to social networks of around 150 people.
- Humans have had societies much larger than 150 people for at least 10,000 years based on archaeological evidence.
I think this is a mystery that needs to be explained. My preferred explanation is that we've created social technologies over the years that get us to larger societies. Think about how the Roman legions were structured, or a modern military. The chain of command limits the number of people you directly interact with most of the time, and allows for better organization and coordination.
I don't think humans are naturally "racist", but I do think we are naturally tribal. Racism is one form of social technology that gets us to a Super-Dunbar Society (at the cost of creating a racial underclass), but there are many other social technologies along these lines: Religion, Nationalism, Communism, Neoliberal Capitalism, Imperialism, etc.
My problem is a lack of imagination on some level. From a traditional libertarian perspective, I don't get how you get from a society that is using racialized thinking as one of its Super-Dunbar social technologies, to using a different basis that is more compatible with libertarianism.
I suppose it would be possible to switch to religion in principle, but I think that most universalist faiths push against libertarianism on a number of points, and any sufficiently secularized form of religion which doesn't probably isn't strong enough to actually unite a society into a libertarian arrangement. Most of the others just fail right out of the gate. The most potent forms of nationalism are off limits to the libertarian, communism contradicts it, imperialism violates the NAP, etc.
I think strict libertarianism by default kind of stalls out around the Dunbar level in most cases. Maybe with the right social technology it gets to city-state size, and can still be worthy of the name "libertarianism." But I think that at that size, in a world of non-libertarian countries the libertarian city state is in an incredibly precarious position. If they try to stay an open society, and let people think for themselves, then people are going to be exposed to the imperialist, religious and nationalist thinking of their neighbors, and I think there will always be a temptation to swap out the libertarian-compatible social technologies for something more potent.
My issue is not that I think that libertarianism is naturally racist. I think that if a libertarian city-state was using racism as one of its Super-Dunbar social technologies (perhaps as a way to avoid corruption by outside ideologies), it would be hard to switch it to something else using libertarian means.
By contrast, I think that liberaltarianism is more willing to make compromises with social technologies that actually enable Super-Dunbar numbers that allow for something bigger than a city state, while still retaining most of the benefits of libertarianism. The main one is imperialism - which allows liberaltarianism to reproduce itself generation after generation by forcibly brainwashing the populace to be as libertarian as possible, and thus somewhat avoiding the siren's song of other Super-Dunbar social technologies like Racism.
No, but please document your progress if you take it up, and post hints yourself. It's one of the hobbies I was considering myself.
I'm getting into carving/whittling, mainly because I want an offline hobby that keeps my hands busy and frees up my mind to wander. I mainly want to make small 3D animal figures as gifts for friends. Does anyone have any experience/tips for a beginner?
The people with power are mostly white. Ergo white people DO have that ability. Not necessarily ALL white people (though see below). If a subset of white people is the problem, then that is an intra-racial issue.
Nope, just because the people in power are white does not mean "white people generally have the power of enabling that to happen". The statement is insanely racist and would not be allowed for any other group.
No one understood your statement to mean "ALL white people", so I don't know what's the point of that part of the response.
If white voters in the US REALLY wanted to limit immigration above all else, they do actually have the power to do so. They just have to repeatedly vote for the people who want to do so.
The fact that we didn't have to have the supermajority of white people repeatedly vote for unlimited immigration (I'd say "even when the economy is good", but the connection of immigration and a good economy is essentially made up, or the causality is outright reversed), clearly shows that someone has more power than "white people generally".
That's without mentioning the fact that there's absolutely no evidence that repeatedly voting this way would actually achieve the goal.
Well, I wouldn't use intentionality for bots at all. I think intentionality presupposes consciousness, or that is to say, subjectivity or interiority. Bots have none of those things. I don't think it's possible to get from language manipulation to consciousness.
At any rate, I certainly agree that every ideological person believes untrue things about the world. I'm not sure about the qualification 'for instrumental reasons' - I suspect that's true if you define 'instrumental' broadly enough, but at that point it's becoming trivial. At any rate, if you leave off reasons, I am confident that every person full stop holds some false beliefs.
That doesn't seem like the same thing to me, though. Humans sometimes represent the world falsely to ourselves. That's not what bots do. Bots don't represent the world to themselves at all. We sometimes believe falsely; they don't believe at all. They are not the kinds of things capable of holding beliefs.
I think translating code is probably a sensible thing to use a bot for - though I'm not sure it's fundamentally different in kind to, say, Google Translate. I grant that the bots have an impressive ability to generate syntactically correct text, and I'm sure that applies to code as much as it does natural language. In fact I suspect it applies even more, since code is easier than natural language.
I am less sure about its value for looking up scientific information. Is it really faster or more reliable than checking Wikipedia? I am not sure. I know that I, at least, make a habit of automatically ignoring or skipping past any AI-generated text in answer to a question, even on scientific matters, because I judge that the time I'd spend checking whether or not the bot is right is likely equal to or greater than the time I'd spend just looking it up for myself.
The point isn't that you tolerate fraud as in not policing it; it's that you police it without going full panopticon to get from 10 cases of fraud across the whole population down to zero.
I don't know if this is quite right. It's not that high-trust societies police fraud just as intensely as low-trust ones but decide not to go the final mile. They actually police fraud much less than low-trust ones, take people at their word, and generally assume their good faith. This is kind of the definition of a high-trust society, and it's also been matched by my experience visiting them.
Common, well-publicized problems have common, well-publicized solutions. If your training data consists of ninety-something percent correct answers and the remainder garbage, you will get a ninety-something percent solution.
As I said above, Gemini is not reasoning or being naive; it is computing an average. Now, as much as I may seem down on LLMs, I am not. I may not believe that they represent a viable path towards AGI, but that doesn't mean they are without use. The rapid collation of related tokens has an obvious "killer app", and that app is translation, be that of spoken languages or programming languages.
https://www.themotte.org/post/1160/culture-war-roundup-for-the-week/249920?context=8#context
That's a really good answer.
I suspect other factors would negate this effect, such as selecting for self-motivated people who can afford to move and buy property. That filters out the listless and destitute. I predict a photo collage of these people vs. white Americans of equivalent income and age would not obviously show they are ugly losers, as comically shown in that link.
I keep inheriting MATLAB code at work. It is horrible. I can't use it in production since the production computers are locked-down Linux machines that don't have MATLAB. I grit my teeth and do much of my work in MATLAB.
BUT NOW, we have an LLM at work approved for our use. I feed it large MATLAB scripts and tell it to give me an equivalent vectorized Python script. A few seconds later I get the Python script. Functions are carried over as Python equivalents. So far 100% success rate.
This thing rocks. Brainless "turn this code into that similar code" tasks take a few seconds rather than an hour.
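To give a toy illustration of the kind of translation I mean (a made-up snippet, not the actual work code, and the function name is just for the example):

```python
import numpy as np

# MATLAB-style original, roughly:
#   y = zeros(1, length(x));
#   for i = 1:length(x)
#       y(i) = 2*x(i)^2 + 3*x(i);
#   end

def transform(x):
    """Vectorized NumPy equivalent of the element-wise MATLAB loop above."""
    x = np.asarray(x, dtype=float)
    return 2 * x**2 + 3 * x

print(transform([1.0, 2.0, 3.0]))  # [ 5. 14. 27.]
```

The real scripts are much longer, but the pattern is the same: loops over arrays become single vectorized expressions.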
I had a thermodynamics issue that I vaguely remember learning about in college. I spent maybe a minute thinking up the best way to phrase the relevant question. The LLM gave me the answer and responded to my request for sources with real sources I verified. Google previously declined to show me the relevant results. I now have verified an important point and sent it and high quality sources to the relevant people at work.
It is not perfect. I had a bunch of FFTs I needed to do. Not that complicated. As a test I asked it to write me functions to FFT the input data and then to IFFT the results to recreate the original data. It made a few functions that mostly match my requirements. But as the very long code block went on it lost its way, and the later functions were flawed. They were verifiably wrong. It helpfully made an example using these functions, and at a glance I could see it had to be wrong. Just a few hundred lines of code and it gets lost. Not a huge problem. Still an amazing time-to-results ratio. I clean up the last bit and it is acceptable.
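For what it's worth, the core roundtrip is nearly a one-liner in NumPy; here's a minimal sketch of the kind of thing I was asking for (the names and the toy check are mine, the real requirements were more involved):

```python
import numpy as np

def fft_data(x):
    """Forward FFT of a 1-D signal."""
    return np.fft.fft(x)

def ifft_data(X):
    """Inverse FFT; recreates the original signal up to floating-point error."""
    return np.fft.ifft(X)

# Roundtrip check: IFFT(FFT(x)) should give back x.
x = np.random.default_rng(0).normal(size=256)
assert np.allclose(ifft_data(fft_data(x)).real, x)
```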
I won't ask these things about potential Jewish bias in the BBC or anything like that. I will continue to ask for verifiable methods of finding answers to real material questions and reap the verifiably correct rewards.
However, as time went on I largely gave up trying to discuss AI with people outside the industry, as it became increasingly apparent to me that most rationalists were more interested in the use of AI as a conceptual vehicle to push their particular brand of Silicon Valley woo.
Well, I for one wish you hadn't given up, as I have the same impression, but it's only an impression. It would be interesting to see it backed by expertise.
They are, but the latest predictive models are a completely separate evolutionary branch from LLMs.
I believe a lot of the lack of institutional pushback was down to the election of Trump, which made plenty of liberals go insane and abandon their principles. There was both this radicalising force and a desire to close ranks.
Wokism wouldn't have disappeared without Trump but I believe his election supercharged an existing movement that wouldn't have had the same legs without such a convenient and radicalising enemy. For any narrative to really catch on you need the right villain and Trump was just that.