faceh

6 followers   follows 2 users   joined 2022 September 05 04:13:17 UTC

No bio...

User ID: 435

not through any fault of the court but because the attorney responsible isn't motivated to list them for trial until the ducks are all in a row

I mean, I'd just point out that this answers your initial thought:

I don't know that LLMs really could add much since any lawyer would be able to give you a ballpark on likelihood of success and the award range.

Any lawyer can give you the ballpark, but the LLM now makes it 'viable' to file and prosecute a suit as long as it is expected to be barely EV positive.

The cost of getting 'all ducks in a row' should go down substantially.

It could genuinely be fixed (in the short term) by spending a LOT more money on the court system: hire competent judges, clerks, and assistants to process cases in a timely fashion, update systems to modern tech to increase throughput, and marshal the resources necessary to enforce the court's rulings too.

But Courts are inherently a cost center for any government. Indeed, in Florida, the statutory trend is to draft laws that discourage litigation at every turn: requiring extra procedural hoops before filing is permitted, forcing pre-suit negotiations or even mediation, and now restricting the ability to collect attorney's fees.

No government that I know of actively expands its judicial resources to scale with its economy or population.

There are some issues that have to funnel through the courts (Probate, the disposition of a dead person's stuff, being one of them), but beyond that, courts in their function as dispute resolvers can still 'work' by making the process as arduous and unpleasant as necessary for the parties to consider cooperation the strictly superior option.


My REAL suspicion is that AI will get good enough at predicting case outcomes that it will discourage active litigation/encourage quick settlements, as you can go to Claude, Grok, and Gemini, feed them all the facts and evidence, and have them spit out "75% chance of favorable verdict, likely awards ranging from $150,000 to $300,000, and it will probably take 19-24 months to reach trial."

And if the other party finds this credible, the incentive for solving things cooperatively becomes obvious.
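(To spell out the incentive: a prediction like that turns "file or settle?" into simple expected-value arithmetic. A toy sketch below, using the numbers from the hypothetical verdict above; the cost figure is made up for illustration.)

```python
# Toy EV check for "is this suit worth pressing?", using the hypothetical
# model output above: 75% win odds, $150k-$300k award range.
def litigation_ev(p_win: float, award_low: float, award_high: float,
                  cost_to_trial: float) -> float:
    """Naive expected value: win probability times midpoint award, minus costs."""
    return p_win * (award_low + award_high) / 2 - cost_to_trial

# cost_to_trial is invented for illustration; the point is that LLMs
# shrink it, flipping marginal cases to EV-positive and inviting filings.
print(f"${litigation_ev(0.75, 150_000, 300_000, cost_to_trial=60_000):,.0f}")
# -> $108,750
```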

the current amount of legal writing vastly exceeds the amount which will ever be appealed, and probably exceeds by one hundredfold the amount which will ever reach the Supreme Court

Yes, this is why SCOTUS has a ton of informal and formal criteria for selecting which cases are worth their time to hear.

But it seems obvious to me that there has been a hard bottleneck on how quickly litigants can react to new caselaw, and that Courts intentionally avoid making drastic rulings that cause sweeping changes, so any given court decision is going to have gaps in it which the courts will likewise be slow to 'plug.'

I suspect now it's as easy as "read this Appellate decision and find me six possible loopholes or procedural methods to delay its implementation to achieve my client's goals, make sure to check the entire corpus of Law Journal Articles for creative arguments or possible alternative interpretations of existing law. Make no mistakes."

(and I'm leaving aside the issue of JUDGES using LLMs to find and create bases for favorable rulings)

Actually that hits on another thing that's been nagging at me.

I don't think our Justice system is AT ALLLLLLLLLL prepared for handling a deluge of litigation fueled by savvy (key point) attorneys who use LLMs to craft aggressive motions and draft clever briefs in support, and then, if they lose on appeal, find gaps in the decision so they can keep doing the thing they wanted to do, but with a different underlying justification.

The Supreme Court took a year to make one ruling on the Tariff issue. The Administration hops through the gaps left in said decision. If it takes another year for any other case to reach them, the Admin will presumably hop through the gaps in the next decision too.

I will bring up an old suggestion I've made before: Train up 9 LLMs on the writings of the most famous SCOTUS Justices in our history, selecting for some ideological diversity... then let them rule on cases.
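A minimal sketch of how that panel could be wired up; everything here (the persona list, the stand-in ask_justice() call, the majority-vote rule) is my own illustration, not any real system:

```python
import random
from collections import Counter

# Hypothetical panel: each name stands in for an LLM fine-tuned or heavily
# prompted on that Justice's written opinions, picked for ideological spread.
JUSTICES = ["Marshall", "Holmes", "Brandeis", "Jackson", "Warren",
            "Harlan", "Brennan", "Scalia", "Ginsburg"]

def ask_justice(persona: str, case_brief: str) -> str:
    """Stand-in for the real LLM call; returns 'affirm' or 'reverse'.

    A real version would send the brief to the persona's model and parse
    its vote. Here we simulate one so the sketch runs end to end.
    """
    rng = random.Random(persona + case_brief)  # deterministic fake vote
    return rng.choice(["affirm", "reverse"])

def rule_on_case(case_brief: str) -> tuple[str, Counter]:
    """Majority vote across the nine personas (odd panel, so no ties)."""
    votes = Counter(ask_justice(j, case_brief) for j in JUSTICES)
    outcome, _ = votes.most_common(1)[0]
    return outcome, votes

outcome, votes = rule_on_case("Appellant challenges the tariff authority...")
print(outcome, dict(votes))
```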

It would be... silly to assume that these situations weren't gamed out WELL in advance.

It wouldn't even require coordinating with any of the Justices. Just have three backup plans ready to go, open the appropriate box based on what the decision says.

An LLM could write up a viable alternative with <10 minutes of prompting too, once fed the opinion.

Welcome to the future, kids.

Not wrong.

But there was a pretty convincing case against it put forth in here.

Quoth:

Human hands enjoy a massive, durable nanomachinery advantage

I'm taking a gamble that martial arts/self-defense instructors will still be in demand: people will probably prefer an instructor who is human, a human is probably still best suited to demonstrate techniques and movements that another human is expected to learn, and a human will be a strictly superior training partner, too.

Dats Da Joke.

I've noticed that easily half of the County-level Judges I have worked in front of, especially those who have held their seat a long time without getting called up to the Circuit level, are basically glorified clerks for all the legal reasoning they can do. They oversee an assembly line where parties are shuffled along towards a particular outcome, and the Judge just pulls the lever that rubber-stamps the outcome as 'legal.'

There's some selection effect: if you were making bank in private practice, no way you'd accept a Judgeship with so little power. But yeah, letting County Judges use LLMs from the bench could only improve things.

Of course, if you ever ask me to identify which half of the Judges I'm talking about, I'll clam up because those are ALSO the ones most likely to be petty and make my job more miserable.

Hoping for the best (AI makes the practice of law more tolerable/less mentally taxing), preparing for the worst (being forced to swap to a career that requires working with my hands).

Exactly.

I know full well that if I answer the question straightforwardly, that will dictate how the person treats me going forward.

Whereas if you just don't broach the topic with them, then generally you can maintain amicable relations indefinitely. I had a guy who I KNOW (thanks to his Facebook posting) is a hard lefty over to my house about a month ago (a party I hosted), but there was no discomfort because nobody interrogated anyone else on their positions, and I don't have a ton of political paraphernalia adorning my walls and such. This equilibrium is possible to maintain... but also easy to break.

I daresay sometimes we can even get two meta levels up, discussing the ways in which the human tribal tendency exacerbates certain social problems simply by making it impossible for solutions to get discussed or important actions agreed on. It's very useful to sometimes take a BIG step back and acknowledge we're all overdeveloped primates that barely cling to civilization thanks to having souls (from the theological standpoint) or, for those who prefer it, prefrontal cortexes and the capacity for higher-order language.

It's an interesting question that depends at least in part on what my overall objective in talking to this person is. In "preserve the relationship" mode, I usually couch it to be minimally offensive to the asker and to invite them to "agree" with me on something rather than immediately sort me into the 'enemy' basket.

But when I'm feeling spicy I like to say "well I'd love to see the current Federal Government catch fire and burn down entirely" which is entirely honest as to my core feelings but doesn't actually reveal whether I agree with or don't agree with the current administration's actions.

I had a version of this happen VERY recently, where the woman I'm casually interested in asked "are you a Democrat or Republican" and got very insistent that I answer. I was stumped just a bit because... well, why would you just assume those are the only two options on the table?

While the strictly true answer is "I've been unaffiliated since High School and thus am not registered as Republican OR Democrat," I opted to say "I voted for Trump, I voted for DeSantis, and I did a straight Republican ticket in the last two elections." Somehow this wasn't quite good enough, and I guess her REAL goal was to very cleanly identify which tribe I personally identified with. Fair enough. So I then said "I watched the Turning Point halftime show, not the Bad Bunny one." (Not mentioned: "watched" means I sat in a bar that had swapped channels, but I was not particularly interested in the show so I mostly zoned out while it was on.)

We're still talking, though. She's openly Republican so I guess I passed the "not a libtard" smell test.

I don't mind answering the question, but I dislike the vast majority of discussions based around tribal politics (present company excluded) so I will always try to shift the topic to something still 'controversial' but where I can't 100% predict their response ahead of time based on tribal signifiers.

I don't think 14% is a big deal; there's already a great deal of heterogeneity in surgical outcomes across surgeons overall. But the effect does exist.

I'd also be suspicious that this could be an artifact of the older surgeons being handed the tougher cases, or handling older patients such that complications are somewhat more likely to arise.

Similar logic to that study about black babies getting 'worse' outcomes when treated by white doctors... which dissolves when accounting for the fact that the white doctors were getting the toughest cases of any given race.

At any rate, I'm sure there are more direct ways to assess a surgeon's skills from the outside (although apparently just asking for their IQ results is out?), but finding one that's a reliable, hard-to-fake signal is the challenge.

The main thing I appreciate about LLMs is the general fact that I can ask them to detail the source of all their knowledge and they can generally cite and point to it all so I can double-check myself, whereas I'd guess most doctors would scoff if you tried to "undermine their credibility" in such a way.

As an aside, older is not better for doctors. It's a common enough belief, including inside the profession, but plenty of objective studies demonstrate that 30-40 year old clinicians are the best overall. At a certain point, cognitive inflexibility from old age, habit, and not keeping up with the latest updates can't be compensated for from experience alone.

I definitely believe that younger doctors are more up-to-date in best practices and aren't full of old knowledge that has proven ineffective or even harmful.

But if you could hold other factors approximately equal, I'd still bet my life on the guy who's seen 10,000 cases and performed a procedure 8,000 times over someone who is merely younger but has 1/3 the experience.

Lindy rule and all that. If he's been successfully practicing for this long, it's proof positive he's done things right.

and some doctors are just that smart, while having the unfair advantage of richer interaction with a human patient.

Yeah, I suspect that even if LLMs are a full standard deviation of IQ higher than your average doctor, the massive disadvantage of only being able to reason from the data stream that humans have intentionally supplied, rather than going in and physically interacting with the patient's body, will hobble them in many cases. I also wonder if they are willing/able to notice when a patient is probably straight-up lying to them.

And yet, they're finding ways to hook the machine up to real world sensor data which should narrow that advantage in practice.

And as you gestured at in your comment... you can very rapidly get second opinions by consulting other models. So now that brings us to the question of whether combining the opinions of Claude AND Gemini AND ChatGPT would bring us even better results overall.
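Mechanically, that "panel of second opinions" is easy to sketch. Everything below is my own illustration: get_opinion() is a stand-in for each vendor's actual API, and the canned diagnoses exist only so the example runs:

```python
from collections import Counter

MODELS = ["Claude", "Gemini", "ChatGPT"]

def get_opinion(model: str, case_notes: str) -> str:
    """Placeholder: send the same case notes to one model and return its
    single most-likely diagnosis. A real version would call the vendor API."""
    canned = {"Claude": "migraine", "Gemini": "migraine",
              "ChatGPT": "tension headache"}  # fake answers for the demo
    return canned[model]

def second_opinion(case_notes: str) -> str:
    """Ask every model the same question and report the level of agreement."""
    opinions = {m: get_opinion(m, case_notes) for m in MODELS}
    diagnosis, count = Counter(opinions.values()).most_common(1)[0]
    if count == len(MODELS):
        return f"Unanimous: {diagnosis}"
    if count > 1:
        return f"Majority ({count}/{len(MODELS)}): {diagnosis} -- check the dissent"
    return "All three disagree -- time for a human specialist"

print(second_opinion("38yo, recurring unilateral headache, photophobia..."))
# -> Majority (2/3): migraine -- check the dissent
```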

I mean, yeah, if a legislature passes a big, comprehensive new package that revamps entire statutes, there's no readily applicable case law and it's anybody's game to figure out how to interpret it all. An experienced attorney might bring extra gravitas to their argument... I'm not sure they're more likely to get it right (where 'right' means "best complies with all existing precedent and avoids added complexity or contradictions," not "what is the best outcome for the client").

(ie basically comes down to judgement).

But this is my point. If you encounter an edge case that hasn't been seen before, but you have a fully fleshed-out fact pattern and access to the relevant caselaw (identifying which is relevant being the critical skill), why would we expect a specialist attorney to beat an LLM? It's drawing from precisely the same well, and forging new law isn't magic; it's using one's best judgment, balancing out various practical concerns, and trying to create a stable-ish equilibrium... among other things.

What really makes the human's judgment more on point (or, the dreaded word, "reasonable") than a properly prompted LLM's?

I've had the distinct pleasure of drilling down to finicky sections of convoluted statutes and arguing about their application where little precedent exists. I've also had my arguments win on appeal, and enter the corpus of existing caselaw.

ChatGPT was still able to give me insightful 'novel' arguments to make on this topic when I was prepping to argue an MSJ on this particular issue, by pointing out other statutory interactions that bolster the central point. It clearly 'reasons' about the wording, the legislative intent, and the principles of interpretation in a way that isn't random.

Also, have you heard of the new law review article arguing that "Hallucinated Cases are Good Law"? The claim is that even though the AI is creating cases that don't exist out of whole cloth, it does so by correlating legal concepts and principles from across a larger corpus of knowledge, and thus hallucinates what a legal opinion "should" be if it accounted for all precedent and applied legal principles to a given fact pattern.

I find this... somewhat compelling. I don't think I've encountered situations where the AI hallucinated caselaw or statutes that contradicted the actual law... but it sure did like to give me citations that were very favorable to my arguments, phrased in ways that sounded similar to existing law. It's like it can imagine what the court would say if it were to agree with my arguments and rule based on existing precedent.

I dunno. I think I'm about at the point where I might accept the LLM's opinion on 'complex' cases more readily than I would a randomly chosen county judge's opinion.

At this point, I would trust GPT 5.2 Thinking over a non-specialist human doctor operating outside their lane.

Taking this at absolute face value, I wonder if this is at least partially because the specialists will have observed/experienced various 'special cases' that aren't captured by the medical literature and thus aren't necessarily available in the training data.

As I understand it, the best argument for going to an extremely experienced specialist is always the "ah yes, I treated a tough case of recurrent Craniofacial fibrous dysplasia in the Summer of '88, resisted almost all treatment methods until we tried injections of cow mucus and calcium. We can see if your condition is similar" factor. They've seen every edge case and know solutions to problems other doctors don't even know exist.

(I googled that medical term up there just to be clear)

LLMs are getting REALLY good at legal work, since EVERYTHING of importance in the legal world is written down, exhaustively, is usually publicly accessible, and all builds directly on previous work. Thus, drawing connections between concepts and cases, and applying them to fact patterns, should be trivial for an LLM with access to a Westlaw subscription and ALL of the best legal writing in history in its training corpus.

It is hard to imagine a legal specialist with 50 years of experience being able to outperform an LLM that knows all the same caselaw and law review articles and has working knowledge of every single brief ever filed to the Supreme Court.

I would guess a doctor with 50 years of experience (and good enough recall to incorporate all that experience) can still offer important insights in tough cases that would elude an AI (for now).

This is the type of question that would have caused 50 pages of rowdy debate on a 2014 bodybuilding forum.

DO YOU NOT?

And have everyone competing to be the guy that sets the standard?

I feel like Goodhart's law would become a problem real quick LMAO.

This seems ACTUALLY unfair to guys who are growers not showers.

Seems like they should probably standardize something regarding the uniforms here to make it so jump length isn't so directly correlated with junk length.

Westminster Abbey

And yet when I argue that it's not all that big a deal on the individual level if the Mona Lisa were destroyed, I get dogpiled.

Yeah, I imagine paramedics have a clinical but unvarnished view of human fragility. Most of us are probably, in theory, only a few inches and a bad fall away from paraplegia at any given moment when we're not fast asleep.

Seeing what a short fall onto concrete can do to a human, it's sometimes amazing to think that we don't keep the whole world (or ourselves) ensconced in bubble wrap at all times.

If we allowed human advancement for advancement's sake, then our enemies would gain political power.

Ironically, one of the better reasons to get space-based industry going is to try and outrun these Molochian incentives for a while.

My dream is to have a nice little O'Neill Cylinder of my own, tucked inside a nondescript asteroid, powered by fusion, so that I can genuinely just live life in peace, such that there's no major incentive to try and exercise political authority over me and mine.

Unless we think that the drive of the collectivists will not permit them to leave someone alone who could be forced to come into the fold. At which point I'd rather fight them to the death before we get off-planet.

My general response to that is "the market would sort it out" under normal conditions.

We just can't let the existence of human suffering, somewhere, be an excuse to shut down human advancement everywhere.

If we are productive enough to have excess resources lying around after we feed, house, clothe, and entertain ourselves, some of it can probably get thrown at speculative science projects or pure pursuit of knowledge sans profit motive.

Is there demand for it? Probably not that much... but the people that would demand it also happen to be pretty rich.

Some of that also comes down to how you answer the Fermi paradox. If there's a small but nonzero chance of happening across other intelligent life (or the remnants of same), that's a potentially massive payoff, so buying a few lotto tickets 'makes sense' as long as survival isn't compromised (lol Dark Forest Theory).

Deep Space Telescopes in particular seem to be relatively cheap to deploy and have a small but real chance of discovering something really, really cool... even if not immediately valuable.

If we were moving rapidly towards space industrialization, they'd also be useful for finding ripe targets for Von Neumann Probes.

I can count hearing the performance of the poem "Whitey on the Moon" for the first time as one of my ratchet clicks away from leftism.

From 1970, complaining about the moon landing whilst poverty exists.

Just an insane level of scope blindness. "How dare you move the course of human history and the frontiers of exploration forward while I have to pay more for food."

Which ignores that we can walk and chew gum at the same time, but also represents the kind of envious Luddism that threatens to keep us confined to this rock forever.

(And no, this isn't a feature that is limited to the left).

Agreed, but even aiming at the companies manufacturing intermediate parts can work.

(I just have an ETF that holds robotics and automation stocks)

The Chinese dominance is concerning.

Personally I'm bearish on Chinese industry in the medium term, so I'd still prefer holding what few U.S. options exist.

My genuine expectation is that the next explosion/bubble (if AGI isn't cracked circa 2028 as seemingly expected) will be robotics, specifically autonomous robotics.

I expect quadcopter-style autonomous drones, human-form-factor robots, and non-humanoid robots are about to see a surge in usage. Elon announcing that they'd literally shut down Tesla production lines to build more Optimus robots seems like one of those screaming sirens indicating what the future brings: he's positioning himself to dominate the production of physical robots NOW.

And these and other indicators have not percolated into mainstream awareness yet, so we're absolutely still 'early' to the game. And with the looming population crisis, robots are going to be a NECESSARY tech-tree branch to explore.

So what are some companies in that branch of the tech tree that stand to gain from the 'intermediate' phase of the robotics industry?

Do I know specifically which stocks are best to aim for a 5x? Hell no.

Tesla would be a good one... but hard to see it genuinely 5xing in a short period given how inflated its value already seems.


Otherwise, make a few bets on some pharma companies to discover something even cooler than magical weight-loss drugs (likely with AI assistance). The problem there is that FDA approval is slow.