ace

1 follower   follows 0 users   joined 2022 September 04 21:37:31 UTC
User ID: 168
Verified Email

No bio...

I have similar experiences, except the LLMs will "correct" a correct answer into an incorrect one. I now just view the whole project as useful for creative idea generation, but any claims about the real world need to be fact checked. No lab seems to be able to get these things to stop confabulating, and I'm astonished people trust them as much as they seem to.

I want to believe, but I asked Gemini 2.5 Pro to spec a computer for me, and it started hallucinating motherboards that don't exist, insisting on using them even after being told they don't exist. Maybe it's OK for brainstorming, but everything it says needs to be double-checked. We ain't there yet.

This is a form of the Gell-Mann amnesia effect. In domains with instant feedback and clear legibility of correct versus incorrect answers, like programming, we see the flaws immediately. But on softer, squishier questions, we just accept the answers, even though it's all similarly bad AI slop.