07mk

1 follower   follows 0 users   joined 2022 September 06 15:35:57 UTC
User ID: 868
Verified Email

No bio...

Hopefully this tech will become portable soon enough that anyone can just take out their smartphones and pop in their earbuds to get around the issue. https://x.com/shweta_ai/status/1912536464333893947?t=45an09jJZmFgYosbqbajog&s=19

It was a joke.

I see, it must have gone over my head, but that's not an unusual experience for me with jokes, unfortunately. So is it that you were just being ironic, and your meaning was the opposite: that the mysticism around art being imbued with a part of the artist's soul is still quite common in artists' circles, with a part of Duchamp's soul being in that toilet just as much as, e.g., a part of Van Gogh's soul being in his self-portrait?

I've heard that it was actually created by and for men who love overweight women, who wanted both to encourage more overweight women and to have a group where it's easy to meet a lot of overweight women, though I haven't checked deeply into its veracity. Perhaps it's been fully co-opted by overweight women by now, though, regardless of the origins.

It does directly address the issue and has nothing to do with hypocrisy, though. The issue being raised is that LLMs are fundamentally unreliable due to being unfixably prone to hallucinations. The way it's addressed is that humans are similarly fundamentally unreliable, yet we've built reliable systems based on them, which proves by example that being fundamentally unreliable isn't an insurmountable hurdle to generating reliable systems.

I don't understand how this doesn't address the issue in the most direct, straightforward way possible while completely avoiding anything to do with accusations of hypocrisy. The only way it could be better is if someone actually provided the specific method of generating reliable systems using modern LLMs.

I don't buy your appeal to normal people here. I think that most normal people do not think that chatbots are intelligent.

It's hard to say what "normal people" think about this (or even what "normal people" are), but in my experience, people I would consider in that category use the label "AI chatbots" to describe things like ChatGPT or Copilot or Deepseek, while also being aware that "AI" is short for "artificial intelligence." This seems fundamentally incompatible with believing that these things aren't "intelligent."

Now, almost every one of these "normal people" I've encountered also believes that these "AI chatbots" lack free will, sentience, consciousness, an internal monologue, and often even logical reasoning abilities. "Stochastic parrots" or "autocomplete on steroids" are phrases I've seen used by the more knowledgeable among such people. But given that they're still willing to call these chatbots "AI," I think this indicates that they consider "intelligence" to mean something that doesn't require such things.

I played 16 recently based on enjoying the demo a ton, especially the real-time combat. Unfortunately, the full game didn't add a whole lot of depth to the combat, and the story ended up being a major disappointment, going down a very well-trodden, boring route after the demo appeared to set things up for a really intriguing medieval-politics kind of plot. Really sad that what could've been a very bold step in a new direction for the franchise ended up being so half-assed.

I never finished 13, but I still think it has the best combat system out of any FF game I've played, including 7 Remake. Almost as a rule, I have a great distaste for turn-based combat systems, but I found the whole Paradigm Shift system of changing party members' roles in real-time during a battle and spending 90% of the time doing tiny damage to stagger the enemy so that you can deplete 90% of their HP in that 10% stagger window to be highly engaging. A shame about the storytelling, worldbuilding, and hyperlinear levels for the first 20 hours of the game.

I'll always have a soft spot for 8 for being the 1st JRPG I played and blowing me away with its huge explorable world and cinematic cutscenes, even if the Junction system turned out to be pretty bad and the story went off the rails near the end. The space base scene and Laguna's love story will always tug at my heartstrings.

Now, maybe computers will be able to overcome those problems with simple coding. But maybe they won't.

Right, we don't know if a superintelligence would be capable of doing that. That's the problem.

Sure. But it's much better (and less uncertain) to be dealing with something whose goals you control than something whose goals you do not.

Right, but we don't know how much better and how much less uncertain, and whether those will be within reasonable bounds, such as "not killing everyone." That's the problem.

But on the flip side, a cat can predict that a human will wake up when given the right stimulus, a dog can track a human for miles, sometimes despite whatever obstacles the human might attempt to put in its way. Being able to correctly predict what a more intelligent being would do is quite possible.

I didn't intend to imply that a less intelligent being could never, in any context, predict the behavior of a more intelligent being, and if my words came off that way, I apologize for my poor writing.

This is what I mean by "almost by definition." If you could reliably predict the behavior of something more intelligent than you, then you would simply behave in that way and be more intelligent than yourself, which is obviously impossible.

I don't think this is true, on a couple of points. Look, people constantly do things they know are stupid. So it's quite possible to know what a smarter person would do and not do it.

I don't think this is true. I think people might know what a more mature or wise or virtuous person would do and not do it, but I don't think they actually have insight into what a more intelligent person would do, particularly in the context of greater intelligence leading to better decision making.

But secondly, part of education is being able to learn and imitate (which is, essentially, prediction) what wiser people do, and this does make you more intelligent.

I think that's more expertise than intelligence. Not always easy to disentangle, though. In the context of superintelligence, this just isn't relevant, because the entire point of creating a superintelligent AI is that it's able to apply intelligence in a way that is otherwise impossible, which is going to involve complex decision making or analyzing complex situations to reach conclusions that humans couldn't reach by themselves. If we had the capacity to independently predict the decisions a superintelligent AI would make, we wouldn't be using the superintelligent AI in the first place.

But one of the things we do to keep human behavior predictable is retain the ability to deploy coercive means. I suppose in one sense I am suggesting that we think of alignment more broadly. I think that taking relatively straightforward steps to increase the amount of uncertainty an EVIL AI would experience might be tremendously helpful in alignment.

Right, and the problem here is that these steps don't seem very straightforward, for a couple of reasons. One is that humans don't seem to want to coordinate to increase the amount of uncertainty any AI would experience. Two is that, even if we did, a superintelligent AI would be intelligent enough to figure out that its certainty is being hampered by humans and work around it. Perhaps our defenses against this superintelligent AI working around these barriers would be sufficient, perhaps not. It's intrinsically hard to predict when going up against something much more intelligent than you. And that's the problem.

The defense of forcing background diversity is that it directly influences someone's ability to contribute to the organization.

This isn't really a good characterization of DEI policies. You'd have to replace "background" with something like "superficial" or "demographic." But, in any case, the argument still works when considering "background," as below.

"you need more [women/blacks/etc] because it will add perspectives you haven't considered"

These are what I'd consider strawman/weakman versions of DEI, not the actual defensible portion of DEI. Even DEI proponents don't tend to say that the mere shade of someone's skin is, in itself, something that makes their contribution to the organization better. The argument is that the shade of their skin has affected their life experiences (perhaps you could call this their background - but, again, DEI isn't based on those life experiences, it's based on the superficial characteristics) in such a way as to inevitably influence the way they think, and that the addition of diversity in the way people think is how they contribute better to the organization. This argument has significant leaps of faith that make it fall apart on close inspection, but it's still quite different from saying that someone's skin color directly contributes to diversity of thought, which would be a leap very few people would be willing to make.

Whereas with targeting ideological diversity, someone who has a different ideology, by definition, adds a different perspective. That is a direct targeting of the actual thing that people are considering as being helpful to the organization, i.e. diversity of thought.

So again, no, the very concept of "DEI for conservatives," at least in the context of diversity of thought, is just incoherent. If people were calling for putting conservative quotas in the NBA or something, that might work as a comparison.

No, it is not identical. I explained the significant difference in the above comment. DEI is specifically about adding diversity of things believed to be correlated with diversity of thought while this is an actual instance of directly adding diversity of thought. There's plenty to criticize about adding diversity of thought in this way, but it's categorically different from adding diversity of demographic characteristics under the belief that adding such diversity would increase diversity of thought.

Because I don't think most citations of pundits here are met with this kind of backlash. I perceive Hanania to be singled out as particularly lacking in credibility. My response is not that Hanania is necessarily correct on any issue, but rather that he should not be dismissed for reasons unrelated to his actual positions.

No one's dismissing Hanania, though, and I don't perceive him as being particularly singled out here.

To your objections towards the end, I'm happy to revise any of the specific language, but I read you as suggesting that a person who has consistently advocated for a single position or narrative without changing it is less trustworthy than a person who has changed their position. This seems unintuitive, to me.

Your read of me is wrong. It's that someone who has consistently advocated for a single position or narrative, indifferent to the actual evidence that bears on that position or narrative, is less trustworthy than a person who has changed their position in response to such evidence.

I didn't think that was the point of the dog analogy, but if that were, then indeed, you're right it's a poor analogy for this.

The joke isn't a scenario where the dog plays chess under such unusual circumstances that it doesn't mean the dog is smart.

I don't think it would make sense for a dog to be able to play chess at all without that also meaning that the dog is "smart" in some real sense. Perhaps it doesn't understand the rules of chess or the very concept of a competitive board game, but if it's able to push around the pieces on the board in a way that conforms to the game's rules, in a manner that allows it to defeat humans (who are presumably competent at chess and genuinely attempting to win) some non-trivial percentage of the time, through its own volition and without marionette strings or external commands or something, I would characterize that dog as "smart." Perhaps the dog had an extra smart trainer, but I doubt that even an ASI-level smart trainer could train the smartest real-life dog in the real world to that level.

And imagine that it's 1981 and someone is showing you their brand new ZX81. The exact same thing happens that happens with the dog, down to you saying that the chess program can be beaten nine times out of ten. Should you conclude that actually, ZX81s are really really smart because playing chess at all is impressive? Or should you conclude that even though humans use general intelligence to play chess, the ZX81 instead uses a very nonhuman specialized method, and the ZX81 isn't very smart despite how impressive playing chess is?

This last sentence doesn't make sense to me either. Yes, I would conclude that the ZX81 uses a very nonhuman specialized method, and I'd characterize its "ability" (obviously unlike a dog, it has no agency) to play chess in this way as "smart" in some real, meaningful sense. Obviously it's not any sort of generalized "smartness" that can apply to any other situation. If we were living at a time when a computer that could play chess wasn't even a thing, and someone introduced me to a chess bot that he could defeat only 9 times out of 10, I would find it funny if he downplayed that, as in the dog joke.

If a few years later the ZX81 was replaced with a Commodore 64, and you couldn't beat the Commodore 64 in chess, would you decide that the ZX81 is dumb, but the Commodore 64 is smart?

I'd conclude that the Commodore 64 is "smarter than" the ZX81 (I'm assuming we're using the computer names as shorthand for the software actually running on the hardware, here). Again, not in some sort of generalized sense, but certainly in a real, meaningful sense in the realm of chess playing.

When it comes to actual modern AI, we're, of course, talking primarily about LLMs, which generate text really, really well, so they could be considered "smart" in that one realm. I'm on the fence about, and mostly skeptical of, the idea that LLMs will or can be the basis for an AGI in the future. But I think it's a decent argument that strings of text can be translated into almost any form of applied intelligence, and so by becoming really, really good at putting together strings of text, LLMs could be used as that basis for AGI. I think modern LLMs are clearly nowhere near there, with Claude Plays Pokemon the latest really major example of their failures, from what I understand. We might have to get to a point where the gap between the latest LLM and ChatGPT4.5 is greater than the gap between ChatGPT4.5 and ELIZA before that happens, but I could see it happening.

If there's a good chance you'll never get to all of them, I'd recommend just skipping DmC altogether. Again, not a terrible game, but it's such a huge step down compared to the actual DMC games that it's not even in the same class. And the gameplay is so different that it'll just feel like playing a whole different game rather than an upgrade.

Otoh, DMC1 and 3 have very similar combat systems, but 3's is clearly superior to 1's, so you might want to play 1 before 3, to feel the improvement. 4 and 5 are also upgrades in gameplay compared to 3, but not by nearly as much as from 1 to 3.

And yes, DMC games are far more fun than God of War games. I will forever have bitterness towards David Jaffe for creating GOW, which not only overshadowed DMC and Ninja Gaiden, but also helped to popularize quick-time events in mainstream AAA games. Having flashing icons of the button you need to press above the enemy's head in order to pull off special moves doesn't make it more fun or immersive; it just reminds me that I'm playing a video game, not beating up minotaurs! DMC4 and Ninja Gaiden 2 both implemented similar systems far better without having to have flashing icons, instead integrating them seamlessly into the core gameplay controls.

I thought that what we were interested in was 1 - we want to know the real process so that we can shape or modify it to suit our needs. So I'm confused as to why, it seems to me, some commentators behave as if the thought box tells us anything relevant.

I think all 3 are interesting in different ways, but in any case, I don't perceive commenters as exploring 1. Do you have any examples?

If we were talking about humans, for instance, we might say, "Joe used XYZ Pokemon against ABC Pokemon because he noticed that ABC has weakness to water, and XYZ has a water attack." This might also be what consciously went through Joe's mind before he pressed the buttons to make that happen. All that would be constrained entirely to 2. In order to get to 1, we'd need to discuss the physics of the neurons inside Joe's brain and how they were stimulated by the signals from his retina that were stimulated by the photons coming out of the computer screen which come from the pixels that represent Pokemons XYZ and ABC, etc. For an LLM, the analog would be... something to do with the weights in the model and the algorithms used to predict the next word based on the previous words (I don't know enough about how the models work beneath the hood to get deeper than that).

In both humans and LLMs, 1 would be more precise and accurate in a real sense, and 2 would be mostly ad hoc justifications. But 2 would still be interesting and also useful for predicting behavior.
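
To make the LLM half of 1 slightly more concrete, here's a minimal, purely illustrative sketch of what "predicting the next word from the previous words" mechanically looks like. The vocabulary, embeddings, and projection below are made-up stand-ins for illustration, not how any real model is actually wired.

import numpy as np

# Toy stand-in for the "level 1" story of an LLM: the previous tokens get
# mapped through the model's weights to a score (logit) for every word in
# the vocabulary, and the next word is sampled from the resulting
# probability distribution. Everything here is invented for illustration.
vocab = ["XYZ", "used", "a", "water", "attack", "against", "ABC"]
rng = np.random.default_rng(0)

embedding = rng.normal(size=(len(vocab), 8))   # one vector per token
projection = rng.normal(size=(len(vocab), 8))  # maps context back to vocab scores

def next_token_probs(context_ids):
    """Average the context embeddings, project to logits, softmax to probabilities."""
    context_vec = embedding[context_ids].mean(axis=0)
    logits = projection @ context_vec
    exp = np.exp(logits - logits.max())        # subtract max for numerical stability
    return exp / exp.sum()

context = [0, 1, 2, 3]                         # "XYZ used a water"
probs = next_token_probs(context)
next_id = rng.choice(len(vocab), p=probs)      # sample the next token
print(vocab[next_id], probs.round(3))

The "level 2" story ("it picked the water attack because ABC is weak to water") sits entirely on top of that mechanical loop, in both the human and LLM cases.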

Also, about the affirmative action argument, yes, this is why it's a super questionable solution. Can't you imagine some world though in which the de facto violations of meritocracy are so bad that a de jure violation might actually end up improving the de facto situation? I definitely agree that it's usually not a good idea to use bad means for what you think will be good ends.

One major problem with this situation right now is that, for as much as we can certainly imagine that world, the organizations and individuals that society has relied on to check if our real world is at all similar to that imagined world have so destroyed their credibility that we can't actually trust their claims that they verified that our world is similar to that imagined world. It may be possible to regain that credibility within my lifetime, but I'm skeptical that that will happen, and I'm pretty sure it won't happen in any time frame meaningfully shorter than that.

Which is to say, the very notion that there are good ends to be pursued here is contingent upon something that we have no way of verifying is true. That doesn't mean it's not true, but it does mean that it should be taken about as seriously as people claiming that some Jewish conspiracy is what's making Jews so successful or whatever.

Insofar as this is possible, (I believe Searle disagrees that it is), then the room does speak Chinese because it's just a brain.

I'm not sure how one would argue that it's not possible. Is the contention that there's something ineffable happening in neurons that fundamentally can't be copied via a larger model? That seems isomorphic to a "god of the gaps" argument to me.

I'm familiar with the concepts and metaphor you mention here. Could you outline how that applies to this situation? The Constitution is just a piece of paper, much like Executive Orders by the POTUS are - they only mean things insofar as people behave as if they mean things. The POTUS can ignore the Constitution, and his underlings can ignore the POTUS's EOs, and in either case, they'll face consequences only to the extent that people who have the power to inflict consequences on them choose to exercise this power. Is the contention here that Trump is such a cult of personality that this particular EO wouldn't hold up in court or any Constitutional scrutiny, but Trump's underlings will just follow it anyway? If so, it seems that the danger is in Trump being such a cult of personality, rather than any particular EO he might write.

Small weapon, two small weapons, and one big weapon all tie for coolest to me. Any amount of shields automatically reduces coolness due to the implication that the warrior can't just dodge every attack. However, the small shield, two small shields and one big shield probably tie for 2nd, due to the implication that the warrior doesn't even need a proper offensive weapon to defeat his enemies.

When you think about how many events they provide security to (it's way more than the president/presidential candidates), thousands of events per year, it does put things into perspective.

Not really, no. This is the Secret Service, not Joe's Friendly Private Security and Handymen. They're supposed to be the best of the best, and, IIRC from the hearings, they have an explicit zero-failure standard that they hold themselves to. That's a high standard, and anyone who isn't prepared to meet it should not have been employed by it and certainly not have been leading it.

A determined lone actor is hard to detect until it is too late. It requires constant vigilance.

This would be relevant if this were an instance where the shooter used some clever, difficult-to-detect method to circumvent protections. I wouldn't describe the assassination attempt that way.

But that's qualitatively different from such a containment thread. The posts in such a containment thread would be determined by things like: what type of person would enjoy posting/reading in such a thread, what type of prompts would such people use, what LLMs such people would choose to use, and what text output such people would deem as meeting the threshold of being good enough to share in such a thread. You'd get none of that by simulating a forum via LLM by yourself.

You had to add on a whole lot of details to the chess example, though. What about someone who has only ever played chess in private against bots and continues to do so indefinitely? Do such people exist? How would we know? I certainly know that I play some single player computer games like that, in ways that literally no other human being on Earth knows that I've played that game, which means that I leave behind no evidence that I played these games regardless of social validation. And my stating that I play games like that could serve as a proof-by-construction that I actually played those games out of a desire for social validation, as a way to have something like this in my back pocket to bring up as an example in a social interaction with someone else.

To me, your analysis seems isomorphic to those who claim that literally everything is political, on the basis that, no matter what topic they're given, they're able to use some chain of logic to connect it to some form of politics. If the bar to cross for being "political" is that someone can make a logical chain that connects it to politics, then the term "political" becomes vapid. Likewise, if all it takes for someone playing some game to be "about social validation" is that you can create some logical chain that explains how that person could be influenced by social validation in some indirect way connected to the game, then "being about social validation" becomes vapid.

It is, when taken in context with the rest of the statement.

No, it's still false even when taking into account the context of your statement and the entire comment thread here.

The claim is that creative expression cannot exist since the player interacts with the programming to create the "work." Therefore, the programmer cannot say what the creative expression "is" any more than the coder of tax preparation software can say his program has "artistic meaning" without the user inputting values, or any more than the designer of a car's ignition system can say his diagram is "artistic expression."

But this claim is, again, simply wrong. So what if the player interacts with the programming to create the "work?" The creative expression of the creator of the software is the boundaries that are set on how the player can interact with the programming. As long as the player isn't hacking the game, no matter what choices the player makes while playing the game, the player's choices are within the boundaries that are the creative expression of the game devs.

Well, if by “visceral response” you mean “heuristic.”

I don't. By "visceral response," I mean a sort of automatic, subconscious, emotional response. A heuristic is something else, which you outline below:

Hearing someone choose the word “females” usually says a lot about their worldview.

But... it doesn't. Referring to women as "females" is just accurate, mainstream, correct use of that word. Claiming that only people with a certain type of worldview tend to use the word that way, and that such usage therefore forms a meaningful heuristic for how to react to it, once again appears to be motivated reasoning. I've yet to encounter a shred of evidence that using the word that way has any correlation with the speaker's worldview, or evidence that anyone has even attempted to collect such evidence.

This is in contrast to terms like "male fantasy" or "male privilege," which are well-known terms from a certain specific well-known ideology or cluster of ideologies. It's certainly possible for people to use those phrases in a way that doesn't invoke those ideologies, but the very concept of characterizing individuals as having "privilege" based on their group identity with respect to sex is something that relates to those ideologies.

As best as I can tell, voting for a third party candidate is about as worthless as any other vote in this context. The odds that my one vote is what takes some third party candidate up from 4.99% to 5.00%, or whatever the threshold is, are astronomically low. The odds that my one vote takes the candidate's vote count across some threshold that allows the party to garner greater clout in some meaningful, true way are much higher, since there are many, many such thresholds, but they're still astronomically small.
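
As a rough back-of-the-envelope illustration of why those odds are so small, here's a minimal sketch using a normal approximation to the chance that a single vote is exactly the one that pushes a party over a threshold. The electorate size and expected vote shares are assumptions picked just for the sketch, not real polling numbers.

import math

def pivotal_vote_probability(n_voters, expected_share, threshold=0.05):
    """Normal approximation to the chance that the other voters land exactly
    one vote short of the threshold, so that my single vote is the one that
    pushes the party over the line. All inputs are illustrative assumptions."""
    n = n_voters - 1                                # everyone except me
    k = math.ceil(threshold * n_voters) - 1         # votes needed from them
    mean = n * expected_share
    sd = math.sqrt(n * expected_share * (1 - expected_share))
    z = (k - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2 * math.pi))

# Assumed ~150 million voters; vary how close the party's expected share is to 5%.
for share in (0.0500, 0.0499, 0.0490):
    p = pivotal_vote_probability(150_000_000, share)
    print(f"expected share {share:.2%}: P(my vote is pivotal) ~ {p:.3e}")

Even in the knife-edge case where the party's expected share sits exactly at the threshold, the chance is only on the order of one in several thousand, and it collapses toward zero once the expected share drifts even a tenth of a point away.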

I very much disagree that college students know that they are sheltered and don’t have life experiences.

You're probably correct on this. But it's still confusing to me why. Everyone knows that everyone is missing something due to having limited experiences. Everyone knows that they fall under the category of "everyone" and therefore must be missing something. It doesn't take much research to find out that life in the modern West, even as a lower class person, is extremely sheltered and protected compared to the norm of humanity. College students have disproportionately high access to research material and disproportionately high experience doing research. If they truly want to write a good novel or film script about a setting or characters they have little personal experience with, any moderately intelligent person in that situation should be able to put 2 and 2 together and realize that they need to step out of their bubble and dive into research to learn about lives and circumstances far different from their own.

Which is why I have to conclude that these people don't have motivations to write good fiction.