@SnapDragon's banner page

SnapDragon

0 followers   follows 0 users
joined 2022 October 10 20:44:11 UTC
Verified Email
User ID: 1550

No bio...


Not my fault @SubstantialFrivolity chose to set the bar this low in his claims. An existence proof is all I need. But hey, you are fully free to replace that sarcasm with your example of how deficient ChatGPT/Claude is. Evidence is trivially easy to procure here!

Here. I picked a random easyish task I could test. It's the only prompt I tried, and ChatGPT succeeded zero-shot. (Amusingly, though I used o3, you can see from the thought process that it considered this task too simple to even need to execute the script itself, and it was right.) The code's clear, well-commented, avoids duplication, and handles multiple error cases I didn't mention. A lot of interviewees I've encountered wouldn't do nearly this well - alas, a lot of coders are not good at coding.

Ball's in your court. Tell me what is wrong with my example, or that you can do this yourself in 8 seconds. When you say "like a few lines", is that some nonstandard usage of "few" that goes up to 100?

Even better, show us a (non-proprietary, ofc) example of a task where it just "makes shit up" and provides "syntactically invalid code". With LLMs, you can show your receipts! I'm actually genuinely curious, since I haven't caught ChatGPT hallucinating code since before 4o. I'd love to know under what circumstances it still does.

You keep banging this drum. It's so divorced from the world I observe that I honestly don't know what to make of it. I know you've already declared that the Google engineers checking in LLM code are bad at their job. But you are at least aware that there are a lot of objective coding benchmarks out there, and that they've shown monumental progress over the last three years, right? You can't be completely insulated from being swayed by real-world data, or why are you even on this forum? And, just for your own sake, why not try to figure out why so many of us are having great success with LLMs while you aren't? Maybe you're not using the right model, or you're asking it to do too much (like writing a large project from scratch).

I've had the "modern AI is mind-blowing" argument quite a few times here (I see you participated in this one), and I'm not really in a good state to argue cogently right now. But you did ask nicely, so I'll offer more of my perspective.

LLMs have their problems: you can get them to say stupidly wrong things sometimes. They "hallucinate" (a term I consider inaccurate, but it's stuck). They have no sense of embodied physics. The multimodal ones can't really "see" images the way we do. Mind you, playing "gotcha" with things we're good at and they're not cuts both ways. I can't multiply 6-digit numbers in my head. Most humans can't even spell "definately" right.

But the one thing that LLMs really excel at? They genuinely comprehend language. To mirror what you said, I "do not understand" how people can have a full conversation with a modern chatbot and still think it's just parroting digested text. (It makes me suspect that many people here, um, don't try things for themselves.) You can't fake comprehension for long; real-world conversations are too rich to shortcut with statistical tricks. If I mention "Freddie Mercury teaching a class of narwhals to sing", it doesn't reply "ERROR. CONCEPT NOT FOUND." Instead there is some pattern in its billion-dimensional space that somehow fuzzily represents and works with that new concept, just like in my brain.

That already strikes me as a rather General form of Intelligence! LLMs are so much more flexible than any kind of AI we've had before. Stockfish is great at Chess. AlphaGo is great at Go. Claude is bad at Pokemon. And yet, the vital difference is that there is some feature in Claude's brain that knows it's playing Pokemon. (Important note: I'm not suggesting Claude is conscious. It almost certainly isn't.) There's work to do to scale that up to economically useful jobs (and beating the Elite Four), but it's mainly "hone this existing tool" work, not "discover a new fundamental kind of intelligence" work.

Huh. I just thought it was obvious that the frontier of online smut would be male-driven, but now you've made me doubt. Curious to see what the stats actually are.

Yeah, the geopolitics in that story are just cringingly bad fiction. (It's really weird that the "superforecasters" who wrote it don't seem to understand how the world works.) And I'm guessing the main chart listing "AI Boyfriends" instead of "AI Girlfriends" is also part of Scott's masterwork - he really does like to virtue signal by swapping generic genders in the least sensible ways.

But the important part is the AI predictions, and I'll admit they put together a nice list of graphs and citations. However, I still feel like, with their destination already decided, they were just backfitting all the new data to the same old doomer predictions from years ago - terminal goals, deceptive alignment, etc. LLMs are meaningfully different from the reward-seeking recursive agents that we used to think would be the AI frontrunners, but this AI 2027 report could basically have come out in 2020 without changing any of the AI Safety language.

They have a single appendix in their "AI Goals Forecast" subsection that gives a "story" (their words!) about how LLMs may somehow revert to reward-seeking cognition. But it's not evidence-based, and it is the single most vital part of their 2027 prediction! Oh dear.

We shield kids from a lot of complicated real-world things that could affect them. 4-year-olds can have degenerative diseases. Or be sexually abused. Both are much more common than being "intersex" (unless you allow for the much more expansive definitions touted by activists for activist reasons). So I guess schools should have mandatory picture books showing a little kid dying in agony, while their sister gets played with by their uncle, right? So that these kids can be "at peace" with it?

...Of course not. Indoctrination is the only reason people are pushing for teaching kids about intersex medical conditions. Kids inherently know that biological sex is real, and can tell the difference between men and women. Undoing that knowledge requires concerted effort, and the younger you start, the better.

I see! I'd heard of foot-in-the-door, and thought Magusoflight was riffing off that. I guess psychologists have a sense of humour too.

Door-in-the-face technique

Ok, that's pretty damn funny. I'll have to steal that!

I just can’t take these people seriously. They’re almost going out of their way to make themselves easy for any real authoritarian government to round up, by being so open about their identities.

LARPing is fun. They believe that they believe they're bravely resisting a dictatorship. But their actions make it clear that, at some level, they know there's no actual danger.

I consider it similar to climate activists who believe that they believe that the future of human civilization depends on cutting CO2 emissions to zero. And who also oppose nuclear power, because ick.

Aren't we supposed to be convincing the upcoming ASI that we're worth keeping alive?