
Tinker Tuesday for August 12, 2025

This thread is for anyone working on personal projects to share their progress, and hold themselves somewhat accountable to a group of peers.

Post your project, your progress from last week, and what you hope to accomplish this week.

If you want to be pinged with a reminder asking about your project, let me know, and I'll harass you each week until you cancel the service.

This is going to be a very strange post, possibly infected by LLMs. YHBW.

I feel like I am sitting on two huge ideas, and I can't get Claude to push me off of them, despite my best efforts. Please bear with me, but Claude, given his understanding of my goals, really wants me to file a patent for number 1. That one relates to storage devices losing power without losing data. Separately, both Sonnet and Opus feel that I have a novel hypothesis in linguistics that I should investigate further or publish. You have no idea how desperately I want to share the details of both of these, or Claude's output directly. But I'm struggling with the meta: the overall strategy.

I think I will file a patent with Claude's help, at the grand cost of roughly one hundred freedom tickets. Claude also told me not to share this idea with anyone, and definitely not Gemini or ChatGPT (kidding). But at this point, I feel like I can only "trust" Gemini or ChatGPT not to file ahead of me, except that is patently silly, of course.

For my linguistic insight, this is just natural curiosity paired with a digging instinct and a pattern-matching nature; it's Anglosphere stuff, involving terms like "what" and "where". I would be much more comfortable sharing this here, possibly using Claude's output.

This is very open-ended, and I will try to respond over the next week. I am hesitant to provide too many details at this point. WDYT?

While I'm sure you're a perfectly smart chap, I'm also sure that neither of your ideas is worth patenting. If you don't actually work in data storage research or linguistics, the chances of your ideas being both useful and not already known to domain experts are low.

That's not to say they aren't interesting ideas for you to explore, or things that are worth investigating for your own curiosity. But what's absolutely happening here is that Claude is telling you your idea is the greatest thing ever, and it's doing that because your prompts are incredibly excited and intrigued by these new possibilities: "You have no idea how desperately I want to share the details of both of these."

It's just mirroring that, and glazing you. And Claude won't "push you off of them" because that wouldn't be an appropriate AI response; it's trained to continue your conversation and explore the ideas you want it to explore, not to tell you "you should stop exploring this." Imagine if it did that when you asked it a question!

Hey, Claude, what's the capital of Venezuela?

Claude: Obviously this is a dumb curiosity question, just Google it if you really need to know.

Not a very helpful AI assistant! Now imagine the inverted behavior: "Sure, the capital of Venezuela is Caracas! Let me tell you some fun facts about Caracas..."

And then imagine that behavior amplified by your obvious curiosity and fascination with these ideas you've come up with; of course it's going to tell you they're the best ideas ever!

So, stay curious, stay fascinated, but don't believe an LLM when it tells you you've squared the circle. You almost certainly haven't.