Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.
What is this place?
This website is a place for people who want to move past shady thinking and test their ideas in a
court of people who don't all share the same biases. Our goal is to
optimize for light, not heat; this is a group effort, and all commentators are asked to do their part.
The weekly Culture War threads host the most
controversial topics and are the most visible aspect of The Motte. However, many other topics are
appropriate here. We encourage people to post anything related to science, politics, or philosophy;
if in doubt, post!
Check out The Vault for an archive of old quality posts.
You are encouraged to crosspost these elsewhere.
Why are you called The Motte?
A motte is a stone keep on a raised earthwork common in early medieval fortifications. More pertinently,
it's an element in a rhetorical move called a "Motte-and-Bailey",
originally identified by
philosopher Nicholas Shackel. It describes the tendency in discourse for people to move from a controversial
but high value claim to a defensible but less exciting one upon any resistance to the former. He likens
this to the medieval fortification, where a desirable land (the bailey) is abandoned when in danger for
the more easily defended motte. In Shackel's words, "The Motte represents the defensible but undesired
propositions to which one retreats when hard pressed."
On The Motte, always attempt to remain inside your defensible territory, even if you are not being pressed.
New post guidelines
If you're posting something that isn't related to the culture war, we encourage you to post a thread for it.
A submission statement is highly appreciated, but isn't necessary for text posts or links to largely-text posts
such as blogs or news articles; if we're unsure of the value of your post, we might remove it until you add a
submission statement. A submission statement is required for non-text sources (videos, podcasts, images).
Culture war posts go in the culture war thread; all links must either include a submission statement or
significant commentary. Bare links without those will be removed.
If in doubt, please post it!
Rules
- Courtesy
- Content
- Engagement
- When disagreeing with someone, state your objections explicitly.
- Proactively provide evidence in proportion to how partisan and inflammatory your claim might be.
- Accept temporary bans as a time-out, and don't attempt to rejoin the conversation until it's lifted.
- Don't attempt to build consensus or enforce ideological conformity.
- Write like everyone is reading and you want them to be included in the discussion.
- The Wildcard Rule
- The Metarule
Tooting my own horn: on December 1, 2022, I predicted:
That was before GPT-4 was on the scene. I reiterated it three months ago.
And then today I read this nice little headline:
If they can stop the damn thing from hallucinating caselaw and statutes, it might already be there.
But let me admit: if we don't see downward pressure on first-year wages or staffing reductions this year, I missed the meatiest part of the prediction.
There's a counter-argument that AI lawyers will actually stimulate demand for attorneys by making contracts far more complex. I don't buy it, but I see it.
I am extremely skeptical of that claim. Sure, if you examine an LLM on the things humans are usually examined on - things that are hard for humans, like perfectly recalling bits out of huge arrays of information - it will probably do pretty well. But on things humans are never examined on - like common sense, because most humans who get through law school have it, and those who don't would fail out and probably be institutionalized or ejected from society in some other way - LLMs are still terrible.
Just days ago I tried to use LLM advice to configure a scanner on my Mac. It gave me a ton of advice that didn't work (because it kept hallucinating and confusing different Mac models), but then it produced one suggestion that seemed to work. I stupidly followed it. It broke my Mac completely. I decided to take the hair-of-the-dog approach and asked the same GPT how to fix it. After another hour or so of hallucinating and meandering, it managed to make the problem worse. Then it had me try a dozen or so non-working solutions, each one ending with it congratulating me on discovering yet another thing that doesn't work on my Mac - this despite me telling it upfront which Mac it was, and despite it being able to quote the exact source saying the suggestion wouldn't work, though only after repeatedly assuring me it would 100% work for sure. Eventually it started suggesting I delete disk partitions and reinstall the whole OS - while claiming this couldn't hurt my data in any way, everything would be OK - and I decided to call it quits. I tried to fix the problem using my wits alone and a plain old internet search, and managed it in about 15 minutes.
This was a low-risk activity - I had pretty recent backups, and all the important shit is backed up in several places locally and online, so if it had killed my Mac I might have lost some unimportant files and some time re-configuring the system, but it wouldn't have been a catastrophe. Now imagine something like millions of dollars, or decades in jail, or the entire future of a person on the line. Would I trust a machine that claims X exists and solves my problem, only to cheerfully admit a minute later that X never existed and couldn't have solved my problem even if it did? Or would I trust a human who at least understands why that kind of behavior is unacceptable - who, in fact, understands anything, and isn't just a huge can of chopped-up information fragments plus a procedure for retrieving the ones that look like what I want to hear?
Sorry, I can't believe this "as good as a fresh graduate" thing. Maybe I can believe "as good as a fresh graduate on the things we test fresh graduates on, because those things are hard for them and we want to make sure they're good at them" - but that misses the obvious pitfall that things that are very easy for a fresh graduate, or any human, are very hard for the LLM in turn.