07mk

1 follower · follows 0 users · joined 2022 September 06 15:35:57 UTC
User ID: 868 · Verified Email

No bio...

latency is not ~ever picoseconds to start with - a clock cycle is 1/(4 GHz) = 0.25 ns = 250 picoseconds, and nothing is faster than that.

So far. I suppose we'll eventually hit physical limits set by the length of the circuitry divided by c, and I don't know how the math would work out, but considering we're talking about future tech, it seems unwarranted to talk about the limitations of current tech. If we get this down to femtoseconds, even a 1000x slowdown is measured in picoseconds.
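
To put rough numbers on the length-divided-by-c intuition (a back-of-envelope sketch; the lengths are my own illustrative picks, not anything from the thread):

```python
# Lower bound on signal latency from geometry alone: the time for light
# to traverse a given length of circuitry. Real electrical signals
# propagate slower than c, so actual latency is strictly worse.
C = 299_792_458  # speed of light in m/s

for length_m in (0.001, 0.01, 0.1, 1.0):  # 1 mm, 1 cm, 10 cm, 1 m
    picoseconds = length_m / C * 1e12
    print(f"{length_m * 100:>6.1f} cm -> {picoseconds:8.1f} ps")
# 1 mm ~ 3.3 ps, 1 cm ~ 33 ps, 10 cm ~ 334 ps, 1 m ~ 3336 ps
```

So femtosecond-scale latency would require signal paths well under a millimeter (light covers only ~0.3 µm per femtosecond), which gives a sense of how far from current tech that regime is.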

Your comment now makes me wonder when, if ever, androids will gain whatever human qualities are required to be capable of raping a human. This seems like a potentially important threshold to cross in the realm of sex bots, given how common rape fantasies are among humans.

All seems reasonable, but if we reach the point where latency goes up from picoseconds on a regular OS to nanoseconds on an LLM OS, it seems to me that the difference won't be enough to be meaningful on a regular consumer-level device. Even high-level gamers generally measure lag in milliseconds, which is many orders of magnitude longer, and I don't think human perception will get that much faster.

Then again, with transhumanism being very possible in our future, perhaps even a single picosecond extra latency will prove completely unacceptable for consumer-level tools.

In terms of speed, I expect that, at some point in our future, we'll have microchips cheap enough for regular consumers to buy by the dozen from China, each of which makes the entirety of Anthropic's current data centers look like a basic calculator in comparison. When it's basically trivial for an entry-level PC to run the equivalent of 100 Mythoses at 100x the speed we can today, I feel like the LLM layer won't add enough overhead to the user experience to be noticeable.

In terms of security, that's likely a tougher nut to crack, but I'm an optimist when it comes to how good multiple LLMs checking each other will be.

A 3.8% annual rate translates to about a 0.3% increase in a month, which is small enough that I haven't noticed anything in particular. However, I've certainly taken notice of gasoline prices going up to $4+/gallon (I think I saw $6+/gallon for diesel). I'm in a fortunate enough situation that I barely drive and my public transportation costs are subsidized by my workplace, so this hasn't affected my life directly, but I can only imagine how much gig economy workers are suffering, along with everyone who actually commutes by driving their own vehicle.
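
For what it's worth, the annual-to-monthly conversion checks out (a one-liner, assuming monthly compounding):

```python
# 3.8% annual inflation, compounded monthly -> roughly 0.3% per month.
annual = 0.038
monthly = (1 + annual) ** (1 / 12) - 1
print(f"{monthly:.4%}")  # ~0.3113%
```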

Think of an office with 10 human employees working in, say, payroll, constantly sending each other emails, messages, having meetings, calling and speaking to each other and other people, summarizing documents, liaising with other departments, asking an AI questions about how to use various accounting tools, or about the company’s employee benefits package. Now say this department is automated. An AI model acts as an agent to use an already-existing software package to do all the payroll work. No emails, calls or meetings - or at least far fewer. The total inference work required goes down.

It's not obvious to me that it follows that the total inference work required goes down, either necessarily or most likely. The inference needs for emails, calls, meetings, etc. certainly would go down, but the LLM agent(s) will still need to use inference for chain-of-thought and planning to substitute for whatever actual work the humans were doing, and those inference needs may very well be greater than the communication and human-facing inference that got obviated.
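
A toy version of that accounting, with every number a placeholder assumption rather than a real estimate:

```python
# Whether automation lowers total inference depends on the ratio between
# the human-facing inference it removes (emails, summaries, Q&A) and the
# agent inference it adds (chain-of-thought, planning, tool calls).
saved_human_facing = 50_000   # tokens/day obviated (assumed)
added_agent_work = 80_000     # tokens/day newly required (assumed)

delta = added_agent_work - saved_human_facing
print(f"net change: {delta:+,} tokens/day")  # +30,000 with these guesses
```

Flip the two assumptions and the total goes down instead; the point is just that the sign isn't determined by the setup.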

This is before getting into how human demand for useful stuff just seems to keep expanding as the capacity to supply it expands. E.g. one pretty obvious thought I had was about LLM-based operating systems replacing Windows and Linux and iOS in the future, which won't need any software specifically written for them - just write any software in any language, including a made-up language or pseudo-code, and the LLM would "compile" that to the 1s and 0s required for whatever CPU to accomplish the logic of that code. (This might last for a hot minute until it needs just some general list of specs - which might last a hot minute until it needs just to read your brain activity via electrodes, to infer what sort of software would make you happy in the moment - which might last a hot minute until it needs just to look at your facial expressions to infer the same thing.) Surely a world in which every phone and home computer ran an OS like that would require orders of magnitude more inference than today.
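
A sketch of what that interface might look like; `llm_compile` is entirely hypothetical, invented here for illustration:

```python
# Hypothetical "LLM as the whole toolchain" interface: the model plays
# compiler, assembler, and linker for whatever description it is handed.
def llm_compile(source: str, target_isa: str = "x86_64") -> bytes:
    """Ask a future LLM to emit native machine code implementing `source`.

    `source` could be a real language, pseudo-code, or a plain-English
    spec; there is no fixed grammar for the model to require.
    """
    raise NotImplementedError("placeholder for a future model call")

# Imagined usage, per the comment above:
# binary = llm_compile("sort my photos by who is smiling the most")
```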

It's frankly hilarious to me that the Dems are basically stuck with the coalition they built and all its dysfunction because they elevated the AOCs, Kamala Harrises, Jasmine Crocketts, and Stacey Abramses among them, and the most motivated and active parts of their base are all-in on identity politics, so trying to wrest back the controls will require exercises of raw, naked power that are just as likely to bite them in the ass as they are to select a viable candidate.

There's certainly quite a bit of dark humor to be found in the fact that the DEI party is willing to hurt itself as a way to provide a costly signal that it really does believe in DEI. I just can't help but keep thinking of the trolley problem meme "You can stop the trolley at any time, but in doing so you need to admit that you made a mistake."

Incomprehensible in the sense that you wouldn't be able to figure it out by looking at the computer's code and memory?

Almost definitely yes.

Incomprehensible that it couldn't be described in an abstract way?

No, it could be described in an abstract way, but the description could be so complicated and non-intuitive to humans that no single human or even team of humans could be said to have a meaningful understanding of the model other than in very broad terms. The model only needs to be comprehensible to the LLM, and only to the extent that it allows it to produce decisions and actions that are sufficiently intelligent.

Especially now that language is the first skill to be obliterated by AI.

I mean, shape rotation was obliterated a long time ago by non-AI computation, and it looks like actual math innovation (i.e. theorems, not calculations) is pretty close to the front of the line on the chopping block right now.

My theory is that AI will essentially "solve" the needs for mathematical intelligence in humans, leading to human competency being determined by how well they can manipulate each other with words. Like how lowering material limitations has led to social skills and "superficial" characteristics becoming far more highly valued in relationships of all sorts. I find this depressing, but I think the future is bright for people like you or Scott Alexander.

Yes, but it depends on what you mean by "sophisticated" models. Certainly the models would have to be complex, detailed, accurate, and precise, at a level similar or equivalent to the models we humans use in our heads. But the models would likely be utterly incomprehensible to us humans.

"Professor Zalzabraz the Neptunian said: 'water is made of one hydrogen and two oxygen atoms.'"

"Then, professor Qartherage the Uranusian responded, 'You fool. You absolute buffoon. You got that backwards. Water is two hydrogen atoms and one oxygen atom.'"

They'd rather make $20 every two months on a battlepass for the next two years than $60 up front, but if they could charge you $60 and then use the battlepass model as well that's even better.

Subscriptions surely are nice and low-volatility when they work, but it seems like a case of "10% of the time, it works every time." Which is necessarily high volatility.
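
Spelling out the arithmetic in the parent comment (assuming one $20 purchase every two months for two years):

```python
battlepass = 20 * (24 // 2)   # 12 purchases of $20 over 24 months
upfront = 60
print(battlepass, upfront, battlepass + upfront)  # 240 60 300
```

So the battlepass model quadruples the take when it works, which is exactly why the volatility matters.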

For Bloodborne, I was under the impression that Sony owned the rights and could just tell Miyazaki to pound sand? Are there weird rights issues surrounding the exclusive deal they made back then?

I'm not sure how it could ever die, without some sort of transhuman future (which, to be fair, seems more than unlikely). Men, like most people, follow their incentives, and women provide a huge incentive for men both to take responsibility for their weaknesses and to bully and shame other men who act weak and entitled. This effect by women only seems to be increasing as women are freed more and more to provide whatever incentive they want to men, without the limitations of social censure or material need.

My friend always found kumquat to be an inherently funny word, and I think I agree with him.

Wordcel literally stands for "word celibate." I do think that, the way it's used, it just means someone who is good with words, but it's hard to escape the connotation of the root it comes from, "incel," which is intrinsically insulting to the person.

But for the artist who wants to be innovative in terms of technique, what is there to paint or draw? You can draw something new, be the guy who does portraits of SpaceX rockets or NVIDIA GPUs or something and maybe solves some minor challenge of framing or perspective involved. Kind of a niche, and limited demand. Or you can experiment with technique in a way that violates the classical laws of beauty, perspective, framing, etc that are ‘solved’.

How about coming up with new techniques to do the classical beauty more efficiently, more quickly? Perhaps by learning linear algebra and doing it a lot really really fast. I wonder if there's an alternate universe where generative AI isn't called AI and was developed by artists trying to come up with new and innovative ways to make themselves stand out.

With unique dishes I don't even know what the test is supposed to look like as there will not be an obvious control dish to serve alongside it.

It certainly would be a fun scientific project to come up with ways to do this that allow us to draw actually meaningful conclusions! With unique dishes, the control could just be some dish put together by an amateur based on the same ingredients (or at least the inferred ingredients). Or perhaps an amateur putting together a dish with the goal of making it taste and feel similar to the actual dish. We'd have to run a study on the studies - running these studies multiple times in multiple ways - to determine which control is actually more effective as a control for the purposes of judging the unique dish.

I believe that you understand me correctly. It's like how a PS3 can emulate an NES, but the underlying circuitry of the PS3 is actually very different from that of an NES. And in the future, if scaling up LLMs and making them faster were able to create something truly indistinguishable from a human in terms of its chain of thought and speech, and perhaps even the actions of an attached android (none of this is guaranteed to happen, of course), this wouldn't indicate that the LLM had human-emulating intelligence. Rather, it's an intelligence that is emulating how a human thinks and behaves, but the underlying intelligence that allows it to emulate human thought would still be that of an LLM, which, as far as we can tell right now, isn't the result of emulating humans.

The analogy with taking medicine feels so insane. The vast majority of people taking medicine are not doing so for pleasure, they are doing so for purely functional reasons.

Thing is, the pleasure someone gets from a dining experience is also a functional reason, one that could theoretically be isolated and measured, like the effectiveness of medicine. The problem with something like the atmosphere or vibe of a restaurant is that you really can't double or even single blind yourself against that, much like how a movie critic can't watch a film without knowing what it is. So, like film reviews, we'd have to just kinda accept the critic's word for the quality of the atmosphere of the restaurant being reflective of the actual quality, rather than the biases of the critic that could have been shaped by the restaurant bootstrapping its reputation via good marketing or whatnot.

But a restaurant critic certainly could do a double-blinded taste test to judge how good the food is, and include it as a component of his review. That would actually provide more meaningful information to a reader who might not share the critic's biases than the critic's report about the taste based on his experience of eating at the restaurant.

Bioshock is old enough to buy porn in the USA. And the franchise has largely been dormant since Bioshock Infinite which is old enough to go into high school if it skipped a grade. Times have changed.

If so, we have wildly different definitions. I would say your definition is very, very broad; something like logistic regression or a Kalman filter would have an internal model.

I would agree with this. I see no reason why a "model" would need any more features than that to qualify as a model. Of course, having more features can make it a better model or one that's more useful in certain contexts - in fact, more than that is required for it to be sufficiently useful in most contexts worth discussing. But that's a question of degree - if the model allows sufficiently accurate and precise predictions about what it's modeling, then it can be useful for someone who wants to generate counterfactuals, simulations, plans, or action predictions.

Which is why your ball-throwing example confuses me. Under my definition, yes, kids clearly have an internal model for catching a baseball, but it is very controversial/not settled that an LLM has an internal model for chess. I think saying kids can catch a baseball using essentially a compressed predictive statistics process is cognitively incorrect.

The actual mechanism of the model that the kid is using doesn't matter. Again, the kid could be a robot, and we'd still know that it had a model of physics. The model of physics might be as simple as "push ball in direction X -> ball moves towards direction X" but that doesn't make it not a model - just a really really simple model. One that is wrong, much like every model, and one that is useful enough for the purposes of throwing a ball towards direction X.
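
To make that concrete, here is roughly what the "really really simple model" amounts to; everything here is a made-up illustration:

```python
# A deliberately crude internal model of ball flight: constant velocity,
# no gravity, no drag. Wrong, like every model, but simpler than the
# real physics and good enough for short-horizon predictions.
def predict_position(pos, vel, dt):
    """Next position = current position + velocity * dt."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

pos, vel = (0.0, 2.0), (5.0, 1.0)       # made-up observed state
print(predict_position(pos, vel, 0.5))  # -> (2.5, 2.5)
```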

It seems to me that there's not much difference. Part of an intelligence test given to a (putative) human-level intelligence would be to ask it to perform human emulation.

Passing that test wouldn't indicate human-emulating intelligence, though, and human-level intelligences exist that would fail it, because emulating humans is something that humans don't really do via their human-level intelligence; they behave as humans, which appears like emulating humans (but isn't emulating humans). Furthermore, even if some alien intelligence were able to emulate humans, that wouldn't actually give us any insight into how it uses its alien intelligence to produce behavior (or even thought) that emulates humans. We see a small version of this right now with LLMs in chain-of-thought mode, where we instruct LLMs to work out thoughts in logical sequence similarly to how a human might think it out. There's no way of knowing if the conclusion that the LLM reached following some chain of thought was actually due to that chain of thought, or if some separate process produced both the chain of thought and the conclusion.

E.g. present a question like, "In this universe, all dogs are blue. Jim is a dog. What color is Jim?" A human might think, "Logically, because all dogs are blue and Jim is a dog, it follows that Jim has all the characteristics that a dog must have. All dogs must be blue. Therefore Jim is blue!" An LLM with CoT might produce that exact same text and conclude "Jim is blue," but we have no insight into the actual "thinking" the LLM followed to get there. A human might model this universe as one in which all dogs are blue, and Jim is an individual dog, which must be blue; therefore Jim is blue. We have no way of knowing what model the LLM has of this universe, other than that it produces the text "Jim is blue" as an answer to that question.

An intelligence being able to emulate a human in no way indicates that the intelligence is formed in a human-like way. Though it certainly proves that the intelligence is at least human-level (since it can always just emulate humans to reach human level).

An example of what you are proposing as evidence: we have an indestructible radio; you can't open it. It does radio things. You are proposing that, empirically, since a voice comes out of this radio, it must have a tiny man inside of it. There is no other "evidence". And the proof? Well, it's empirically observable; what do you mean there is no tiny man inside the box??

This has no relationship to what I wrote, as far as I can tell. Could you explain the connection?

I would say, logically, because a voice comes out of this radio, it must have some ability to vibrate air. And depending on the nature of the voice and the words, I could draw some conclusions - e.g. if it reported on news that happened after the radio entered my presence, it must have some ability to take in information from far away. I'm not sure where you're getting the idea that my logic would conclude that there's a tiny man. Again, I can't figure out how that has any relationship at all with what I wrote, and I'm curious what your explanation of that relationship is.


EDIT: What are you actually using as a definition of internal model? It is imprecise in casual conversation but very specific in technical ones.

Something that is simpler than the actual thing being modeled, but which can be used to help make predictions about the actual thing.
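
By that definition, the Kalman filter mentioned upthread is a clean example: a couple of floats standing in for a noisy process, used purely to predict its next value. A minimal 1-D version (the noise values are arbitrary assumptions):

```python
def kalman_step(x, p, z, q=1e-3, r=0.25):
    """One predict/update cycle of a 1-D constant-value Kalman filter.

    x, p: current estimate and its variance; z: new noisy measurement;
    q, r: assumed process and measurement noise variances.
    """
    p = p + q                # predict: uncertainty grows over time
    k = p / (p + r)          # Kalman gain: how much to trust the data
    x = x + k * (z - x)      # update estimate toward the measurement
    p = (1 - k) * p          # uncertainty shrinks after the update
    return x, p

x, p = 0.0, 1.0
for z in (1.1, 0.9, 1.05, 0.98):  # made-up measurements around 1.0
    x, p = kalman_step(x, p, z)
print(round(x, 3))  # estimate has moved from 0.0 toward ~1.0
```

The filter is vastly simpler than whatever generated the measurements, yet it supports prediction, which is all the definition asks for.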

Human-emulating would mean reaching conclusions and decisions through a process that is similar, in some way, to how humans do. The most obvious way would be that it follows some sequence of "thoughts" that a typical human could look at and honestly think, "That's similar to how I might think through this." Another way might be if it literally emulates our entire brain, possibly down to the subatomic particles.

Human-level would simply be if it's able to pass intelligence tests (any that you could come up with, including IQ, but also things that might involve social awareness or performance in physical tests) at a rate similar to humans. How it accomplishes this wouldn't matter; perhaps tomorrow we discover that God is real and He can be communicated with via a new antenna we developed. Then we put that antenna on a computer and tell it to ask God what to do in order to behave as intelligently as a human, and God, in His great benevolence, decides to answer accurately. That computer would have human-level intelligence, but certainly not human-emulating.

Yeah, that's not proof, that's a theory. There are other theories about what the LLM is doing, and they are just as explanatory as yours. You have run no experiments to isolate those alternatives and test whether or not they exist. You have run no ablation studies and no studies to attempt to isolate co-occurring variables. It is by definition a theory. Hence why I asked for proof, because I am certain you have none.

I'm not sure where the disconnect is here. This is a simple logical question, not an empirical one. There's no need to check for some physical representation of a model, because making accurate predictions requires an implicit model, QED. I've yet to see anyone suggest an alternative - certainly not in this thread - and I'm not sure what alternative explanation could exist, logically.

We can prove the child has an internal model of physics because we have 7 billion humans, including our own selves; we can extrapolate our internal abilities across a generalized set of all humans.

But we don't need 7 billion humans or even any other human than that child to conclude this. If we landed on an alien planet and observed an alien doing this, we would also know that it had an internal model of physics. If someone made a robot that could do this, we would know it as well.