I doubt it. School quality being better than other states' doesn't imply producing better educated people than other states; it implies producing a better delta in educated status compared to other states, controlled for the children's potential ceiling, and my guess is that both the floor and the ceiling for children in Mississippi are lower than for most other states. It is also but one of many, MANY dimensions by which parents measure their likelihood of moving to the state, and my guess is that Mississippi has a lot of negatives in other very important dimensions. Furthermore, even if those weren't true, this is the kind of thing that would take at least a decade to show any meaningful differences in output, which means even more time before people start moving in meaningful numbers, and that gives plenty of time for people in other states to come up with excuses for why the differences in output, as measured by the education level of public HS graduates, aren't due to Mississippi's specific methods of educating.
The phenomenon that was written about in Coming Apart and its consequences...
As someone who was educated in semi-elite schools for most of my childhood/college, I recall the real kick in the teeth I felt in my 20s when I learned, through experience, that people who actually obeyed rules and put honest effort into improving themselves were a rarity, rather than semi-common (still likely a minority or barely a majority in the schools I attended). People who grew up in even more elite institutions and then stayed only in elite institutions professionally, surrounded primarily by other people with similar experiences, just don't seem to have the capacity to understand just how dysfunctional vast swathes of society are, and how much of keeping society running is making sure their dysfunction doesn't cause too much damage. It seems like just another case of the apex fallacy, which seems endemic in the culture wars, including gender relations, race relations, and immigration.
Now, one possible point of hope there is that it's easier than ever before to see direct evidence of the actual lives of the actual people with whom one doesn't share an environment. I've seen people reference this with respect to the popularization of bodycam footage since they became near-ubiquitous among police forces post-Floyd. However, people - including myself - had foolish, naive, stupid, idiotic ideas about the proliferation of social media bringing people of different ideas and principles together, when, AFAICT, it has done the exact opposite. And generative AI adds a new wrinkle as well. After all, you can bring a horse to the water, but you can't make it drink. So I'm pretty pessimistic.
I don't think those examples actually touch on the point of difficulty, which is convincing every state to copy Mississippi in terms of whatever it is they did that caused improvements.
Latency is not ~ever picoseconds to start with - a clock cycle at 4 GHz is 1/(4 GHz) = 1/4 nanosecond = 250 picoseconds, and nothing is faster than that.
So far. I suppose we'll hit physical limitations in terms of the length of the circuitry divided by c (the speed of light), and I don't know how the math would work out, but considering we're talking about future tech, it seems unwarranted to talk about the limitations of current tech. If we get this down to femtoseconds, even a 1000x slowdown is measured in picoseconds.
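The back-of-envelope arithmetic here can be sketched like this (a rough illustration with assumed numbers - a 4 GHz clock and a 30 mm signal path - not a claim about any real chip):

```python
# Two simple bounds: a clock period is 1/frequency, and signal
# propagation is bounded below by distance / c, so even "future tech"
# can't beat the light-travel time across its own circuitry.

C = 299_792_458  # speed of light, m/s

def clock_period_ps(freq_ghz: float) -> float:
    """Clock period in picoseconds for a given frequency in GHz."""
    return 1000.0 / freq_ghz

def light_travel_ps(distance_mm: float) -> float:
    """Minimum signal travel time in picoseconds over a straight path."""
    return distance_mm * 1e-3 / C * 1e12

print(clock_period_ps(4.0))   # 250.0 ps per cycle at 4 GHz
print(light_travel_ps(30.0))  # ~100 ps just to cross 30 mm at c
```

So a hypothetical 30 mm path already costs on the order of a clock cycle at 4 GHz in pure propagation time, which is the sense in which "length of the circuitry divided by c" is the hard floor.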
Your comment makes me think now, of when, if ever, will androids gain whatever human qualities are required in order to be capable of raping a human? This seems like a potentially important threshold to cross in the realm of sex bots, given how common rape fantasies are among humans.
All seems reasonable, but if we reach the point where latency goes up from picoseconds on a regular OS to nanoseconds on an LLM OS, it seems to me that it won't be enough to be meaningful on a regular consumer-level device. Even high-level gamers generally measure lag in milliseconds, which is many orders of magnitude longer, and I don't think human perception will get that much faster.
Then again, with transhumanism being very possible in our future, perhaps even a single picosecond extra latency will prove completely unacceptable for consumer-level tools.
In terms of speed, I expect that, at some point in our future, we'll have microchips cheap enough for regular consumers to buy by the dozen from China that each make the entirety of Anthropic's current data centers look like a basic calculator in comparison. When it's basically trivial for an entry-level PC to run the equivalent of 100 Mythoses at 100x the speed that we can today, I feel like it won't add enough overhead to the user experience to be noticeable.
In terms of security, that's likely a tougher nut to crack, but I'm an optimist when it comes to how good multiple LLMs checking each other will be.
A 3.8% annual rate translates to about a 0.3% increase in a month, which is small enough that I haven't noticed anything in particular. However, I've certainly taken notice of the gasoline prices going up to $4+/gallon (I think I saw $6+/gallon for diesel). I'm in a fortunate enough situation that I barely drive and my public transportation costs are subsidized by my workplace, so this hasn't affected my life directly, but I can only imagine how much gig economy workers are suffering, along with everyone who actually commutes via driving their own vehicles.
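For what it's worth, the annual-to-monthly conversion works out like this (assuming the 3.8% figure is an annualized rate):

```python
# The compounded monthly rate equivalent to an annual rate is
# (1 + annual) ** (1/12) - 1, which for small rates is close to annual/12.

def monthly_from_annual(annual_rate: float) -> float:
    """Monthly rate that compounds to the given annual rate."""
    return (1 + annual_rate) ** (1 / 12) - 1

print(round(monthly_from_annual(0.038) * 100, 2))  # 0.31 (% per month)
```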
Think of an office with 10 human employees working in, say, payroll, constantly sending each other emails, messages, having meetings, calling and speaking to each other and other people, summarizing documents, liaising with other departments, asking AI questions about how to use various accounting tools, or about the company’s employee benefits package. Now say this department is automated. An AI model acts as an agent to use an already-existing software package to do all the payroll work. No emails, calls or meetings - or at least far fewer. The total inference work required goes down.
It's not obvious to me that it follows that the total inference work required goes down, either necessarily or most likely. The inference needs for emails, calls, meetings, etc. certainly would go down, but the LLM agent(s) will still need to use inference for chain-of-thought and planning to substitute whatever actual work the humans were doing, and those inference needs may very well be greater than the communications and informing-humans inference that got obviated.
This is before getting into how human demand for useful stuff just seems to keep expanding as the capacity to supply it expands. E.g. one pretty obvious thought I had was about LLM-based operating systems replacing Windows and Linux and iOS in the future, which won't need any software specifically written for them - just write any software in any language, including a made-up language or pseudo-code, and the LLM would just "compile" that to the 1s and 0s required for whatever CPU to interpret to accomplish the logic of that code (this might last for a hot minute until it needs just some general list of specs - which might last a hot minute until it needs just to read your brain activity via electrodes, to infer what sort of software would make you happy in the moment - which might last a hot minute until it needs just to look at your facial expressions to infer the same thing). Surely a world in which every phone and home computer ran an OS like that is one that would require orders of magnitude more inference than today.
It's frankly hilarious to me that the Dems are basically stuck with the coalition they built and all its dysfunction because they elevated the AOCs, Kamala Harrises, Jasmine Crocketts and Stacey Abramses amongst them, and the most motivated and active parts of their base are all-in on identity politics, so trying to wrest back the controls will require exercises of raw, naked power that are just as likely to bite them in the ass as they are to select a viable candidate.
There's certainly quite a bit of dark humor to be found in the fact that the DEI party is willing to hurt itself as a way to provide a costly signal that it really does believe in DEI. I just can't help but keep thinking of the trolley problem meme "You can stop the trolley at any time, but in doing so you need to admit that you made a mistake."
Incomprehensible in the sense that you wouldn't be able to figure it out by looking at the computer's code and memory?
Almost definitely yes.
Incomprehensible that it couldn't be described in an abstract way?
No, it could be described in an abstract way, but the description could be so complicated and non-intuitive to humans that no single human or even team of humans could be said to have a meaningful understanding of the model other than in very broad terms. The model only needs to be comprehensible to the LLM, and only to the extent that it allows the LLM to produce decisions and actions that are sufficiently intelligent.
Especially now that language is the first skill to be obliterated by AI.
I mean, shape rotation was obliterated a long time ago by non-AI computation, and it looks like actual math innovation (i.e. theorems, not calculations) is pretty close to the front of the line on the chopping block right now.
My theory is that AI will essentially "solve" the needs for mathematical intelligence in humans, leading to human competency being determined by how well they can manipulate each other with words. Like how lowering material limitations has led to social skills and "superficial" characteristics becoming far more highly valued in relationships of all sorts. I find this depressing, but I think the future is bright for people like you or Scott Alexander.
Yes, but it depends on what you mean by "sophisticated" models. Certainly the models would have to be complex, detailed, accurate, and precise, at a level similar or equivalent to the models we humans use in our heads. But the models would likely be utterly incomprehensible to us humans.
"Professor Zalzabraz the Neptunian said: 'water is made of one hydrogen and two oxygen atoms.'"
"Then, professor Qartherage the Uranusian responded, 'You fool. You absolute buffoon. You got that backwards. Water is two hydrogen atoms and one oxygen atom.'"
They'd rather make $20 every two months on a battlepass for the next two years than $60 up front, but if they could charge you $60 and then use the battlepass model as well that's even better.
Subscriptions surely are nice and low-volatility when it works, but it seems like a case of "10% of the time, it works every time." Which is necessarily high volatility.
For Bloodborne, I was under the impression that Sony owned the rights and could just tell Miyazaki to pound sand? Are there weird rights issues surrounding the exclusive deal they made back then?
I'm not sure how it could ever die, without some sort of transhuman future (which, to be fair, seems more than unlikely). Men, like most people, follow their incentives, and women provide a huge incentive for men both to take responsibility for their weaknesses and to bully and shame other men who act weak and entitled. This effect by women only seems to be increasing as women are freed more and more to provide whatever incentive they want to men, without the limitations of social censure or material need.
My friend always found kumquat to be an inherently funny word, and I think I agree with him.
Wordcel literally stands for "word celibate." I do think, the way it's used, it's just someone who is good with words, but it's hard to escape the connotation with the root it comes from, of "incel," which is intrinsically insulting to the person.
But for the artist who wants to be innovative in terms of technique, what is there to paint or draw? You can draw something new, be the guy who does portraits of SpaceX rockets or NVIDIA GPUs or something and maybe solves some minor challenge of framing or perspective involved. Kind of a niche, and limited demand. Or you can experiment with technique in a way that violates the classical laws of beauty, perspective, framing, etc that are ‘solved’.
How about coming up with new techniques to do the classical beauty more efficiently, more quickly? Perhaps by learning linear algebra and doing it a lot really really fast. I wonder if there's an alternate universe where generative AI isn't called AI and was developed by artists trying to come up with new and innovative ways to make themselves stand out.
With unique dishes I don't even know what the test is supposed to look like as there will not be an obvious control dish to serve alongside it.
It certainly would be a fun scientific project to come up with ways to do this in a way that allows us to draw actually meaningful conclusions! With unique dishes, the control could just be some dish put together by an amateur based on the same ingredients (or at least the inferred ingredients). Or perhaps an amateur putting together a dish with the goal of making it taste and feel similarly to the actual dish. We'd have to actually run a study on the studies, running these studies multiple times in multiple ways, to determine which control is actually more effective as a control for the purposes of judging the unique dish.
I believe that you understand me correctly. It's like how a PS3 can emulate an NES, but the underlying circuitry of the PS3 is actually very different from that of an NES. And in the future, if scaling up LLMs and making them faster were able to create something that was truly indistinguishable from a human in terms of its chain of thought and speech and perhaps even actions of an attached android (none of this is guaranteed to happen, of course), this wouldn't indicate that the LLM was human-emulating intelligence. Rather, it's an intelligence that is emulating how a human thinks and behaves, but the underlying intelligence that allows it to emulate human thought would still be that of an LLM, which, as far as we can tell right now, isn't the result of emulating humans.
The analogy with taking medicine feels so insane. The vast majority of people taking medicine are not doing so for pleasure, they are doing so for purely functional reasons.
Thing is, the pleasure someone gets from a dining experience is also a functional reason, one that could theoretically be isolated and measured, like the effectiveness of medicine. The problem with something like the atmosphere or vibe of a restaurant is that you really can't double or even single blind yourself against that, much like how a movie critic can't watch a film without knowing what it is. So, like film reviews, we'd have to just kinda accept the critic's word for the quality of the atmosphere of the restaurant being reflective of the actual quality, rather than the biases of the critic that could have been shaped by the restaurant bootstrapping its reputation via good marketing or whatnot.
But a restaurant critic certainly could do a double-blinded taste test to judge how good the food is, and include it as a component of his review. That would provide more meaningful information to a reader who might not share that critic's biases than the critic's report about the taste based on his experience of eating at the restaurant.
Bioshock is old enough to buy porn in the USA. And the franchise has largely been dormant since Bioshock Infinite which is old enough to go into high school if it skipped a grade. Times have changed.
If so, we have wildly different definitions. I would say your definition is very, very broad; under it, something like logistic regression or a Kalman filter would have an internal model.
I would agree with this. I see no reason why a "model" would have to have any more features than that to qualify as a model. Of course, having more features than that can make it a better model or one that's more useful in certain contexts - in fact, more features than that are required for being sufficiently useful in most contexts worth discussing. But that's a question of degree - if the model allows sufficiently accurate and precise predictions about what it's modeling, then it could be useful for the purposes of someone who wants to generate counterfactuals, simulations, plans, or action predictions.
Which is why your ball-throwing example confuses me: under my definition, yes, kids clearly have an internal model for catching a baseball, but it is very controversial/not settled that an LLM has an internal model for chess. I think saying kids can catch a baseball using essentially a compressed predictive statistics process is cognitively incorrect.
The actual mechanism of the model that the kid is using doesn't matter. Again, the kid could be a robot, and we'd still know that it had a model of physics. The model of physics might be as simple as "push ball in direction X -> ball moves towards direction X" but that doesn't make it not a model - just a really really simple model. One that is wrong, much like every model, and one that is useful enough for the purposes of throwing a ball towards direction X.
This discussion often makes me think about Forrest Gump, where the titular character's mother is presented as being heroic for prostituting herself to the school superintendent in exchange for allowing Forrest, despite being officially tested as having something like 75 IQ, to attend classes with everyone else, because "he deserves the same education as any other kid" or something like that. The film also, of course, featured the same kid, who needed braces to walk, just one day suddenly becoming capable of running, not only like any other kid, but at a level that made him the star running back for what seemed like a high-level college football team, despite having zero other football skills.
I'm always highly skeptical of the whole "we must manipulate fiction because fiction inevitably, implicitly, unconsciously manipulates people's beliefs about reality" crowd, but I think there may be a grain of truth in their claims.