Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.
What is this place?
This website is a place for people who want to move past shady thinking and test their ideas in a court of people who don't all share the same biases. Our goal is to optimize for light, not heat; this is a group effort, and all commentators are asked to do their part. The weekly Culture War threads host the most controversial topics and are the most visible aspect of The Motte. However, many other topics are appropriate here. We encourage people to post anything related to science, politics, or philosophy; if in doubt, post!
Check out The Vault for an archive of old quality posts.
You are encouraged to crosspost these elsewhere.
Why are you called The Motte?
A motte is a stone keep on a raised earthwork common in early medieval fortifications. More pertinently, it's an element in a rhetorical move called a "Motte-and-Bailey", originally identified by philosopher Nicholas Shackel. It describes the tendency in discourse for people to move from a controversial but high-value claim to a defensible but less exciting one upon any resistance to the former. He likens this to the medieval fortification, where the desirable land (the bailey) is abandoned when in danger for the more easily defended motte. In Shackel's words, "The Motte represents the defensible but undesired propositions to which one retreats when hard pressed."
On The Motte, always attempt to remain inside your defensible territory, even if you are not being pressed.
New post guidelines
If you're posting something that isn't related to the culture war, we encourage you to post a thread for it. A submission statement is highly appreciated, but isn't necessary for text posts or links to largely-text posts such as blogs or news articles; if we're unsure of the value of your post, we might remove it until you add a submission statement. A submission statement is required for non-text sources (videos, podcasts, images). Culture war posts go in the culture war thread; all links must either include a submission statement or significant commentary. Bare links without those will be removed.
If in doubt, please post it!
Rules
- Courtesy
- Content
- Engagement
- When disagreeing with someone, state your objections explicitly.
- Proactively provide evidence in proportion to how partisan and inflammatory your claim might be.
- Accept temporary bans as a time-out, and don't attempt to rejoin the conversation until it's lifted.
- Don't attempt to build consensus or enforce ideological conformity.
- Write like everyone is reading and you want them to be included in the discussion.
- The Wildcard Rule
- The Metarule
Notes -
Tooting my own horn: on December 1, 2022, I predicted:
This was before GPT-4 was on the scene. I reiterated it 3 months ago.
And then today I read this nice little headline:
If they can stop the damn thing from hallucinating caselaw and statutes, it might already be there.
But let me admit that if we don't see downward pressure on first-year wages or staffing reductions this year, I missed the meatiest part of the prediction.
There's the counter-argument that AI lawyers will actually stimulate demand for attorneys by making contracts way more complex. I don't buy it, but I see it.
Sure, but hasn't that always been the challenge? This feels like it boils down to "if they can fix the problems, it'll be great", which is true but applies to everything.
I mean, yes, but the hallucination problem of putting in wrong cases and statutes is utterly disqualifying in advanced legal writing. Citing to a nonexistent case or statute compromises the entire brief or argument. A decent first-year associate might misinterpret a statute or case, or miss that the case was overturned, but they wouldn't make up cases from whole cloth and build their arguments off those.
For a lot of tasks, you just need to go through and proofread or fix up the places where it filled in basic info that it obviously didn't have.
But citing a case that doesn't exist to build an argument is like asking it to design a bridge and having it get the tensile strength of steel completely wrong, or perhaps make up a type of material that doesn't exist and hallucinate its properties as part of the specifications.
And maybe it does that, I don't know. But there's literally no reason for it to be doing that, either, when there is definitive information easily available for reference. It's information it should never get wrong, in practice.
And it really shouldn't be hard to fix, the caselaw and statutes are already simple to look up. Just teach the thing to use WestLaw.
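The "just look it up" idea can be sketched as a post-hoc verification pass: pull every citation out of the draft and check each one against a real database before the output is accepted. This is a minimal sketch under stated assumptions — the regex is a rough stand-in for real citation parsing, and the lookup is a toy in-memory set standing in for an actual WestLaw or similar query, not any vendor's API:

```python
# Hedged sketch: verify each citation a model emits against a known
# database before accepting the draft. The citation pattern and the
# lookup backend here are both illustrative assumptions.
import re

def extract_citations(text: str) -> list[str]:
    # Very rough pattern for reporter-style citations like "410 U.S. 113".
    return re.findall(r"\b\d{1,4}\s+[A-Z][\w.]*\s+\d{1,4}\b", text)

def unverified_citations(text: str, lookup) -> list[str]:
    """Return citations that the lookup could NOT confirm exist."""
    return [c for c in extract_citations(text) if not lookup(c)]

# Usage with a toy in-memory "database" of real citations:
known = {"410 U.S. 113", "347 U.S. 483"}
draft = "See 410 U.S. 113 and the fabricated 999 F.3d 1234."
print(unverified_citations(draft, known.__contains__))  # ['999 F.3d 1234']
```

The point of the design is that existence-checking doesn't require the model to be honest — it's a deterministic filter bolted on after generation.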
So I do expect them to solve that particular class of hallucinations pretty handily, even if it will still completely fudge its outputs when it doesn't have an easy way to check.
Yeah this is something that gets me about the frequent code-based hallucinations too. The things will make up non-existent APIs when the reference docs are right there. It does seem like it wouldn't be hard to hook up a function that checks "does this actually exist". I assume it must not actually be that simple, or they would've done it by now. But we'll see what they can do in the future.
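For the code case, the "does this actually exist" check really is cheap for the simple situations: resolve each dotted name the model used against the live interpreter. A minimal sketch, assuming Python and module-level names only (function names here are mine, and it deliberately doesn't handle attributes nested inside classes):

```python
# Hedged sketch of an existence check for names in generated Python code.
# Only handles module-level attributes, e.g. "os.path.join"; nested
# attribute chains would need a walk, omitted for brevity.
import importlib

def api_exists(dotted: str) -> bool:
    """True if e.g. 'os.path.join' resolves to a real attribute."""
    module_path, _, attr = dotted.rpartition(".")
    try:
        mod = importlib.import_module(module_path)
    except ImportError:
        return False
    return hasattr(mod, attr)

print(api_exists("os.path.join"))  # True: a real function
print(api_exists("os.path.jion"))  # False: a plausible-looking hallucination
```

Why this apparently isn't wired in by default is a fair question; one plausible answer is that it only catches "name doesn't exist" hallucinations, not wrong signatures or wrong semantics.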
There are some technical parts to how LLMs specifically work that make it a lot harder to police hallucination than to produce a compelling argument, for the same reason that they're bad at multiplication and great at symbolic reference work. A lot of LLMs can already use WestLaw and do a pretty good job of summarizing it... at the cost of it trying to cite a state law I specifically didn't ask about.
It's possible that hallucination will be impossible to fully solve, but either way I expect these machines to become better at presenting compelling arguments faster than I expect them to become good researchers, with all the good and ill that implies. Do lawyers value honesty more than persuasion?
One would think! And yet.
This is my biggest problem with RLHF, aside from my free speech bullshit - due to the way LLMs work, RLHF means hallucination is impossible to solve - it is baked in.