Rov_Scam

3 followers · follows 0 users · joined 2022 September 05 12:51:13 UTC

No bio...

User ID: 554


I have no idea. The odd thing is that contract evaluation is one of the tasks they specifically advertise their AI for. I'm not a contract lawyer, so I'm in no position to comment, though I wouldn't be surprised if the service they're offering does something that lawyers don't actually have to do. One of the things I chuckle about is the claim that AI can draft documents. I'm sure it can, but that's kind of irrelevant. I draft a lot of motions, but I'm not reinventing the wheel every time. Usually I have my secretary find a similar motion, change the case caption, and spend half an hour to an hour editing it to fit the facts of the current case. I don't see how entering those facts into an AI prompt instead would save any time, and I can easily see how it could take more time, since I'd now have to review the entire document in greater detail to understand what I was filing, rather than, say, assume that my secretary hadn't touched the part where I explain the summary judgment standard.

I'm not saying they have too many lawyers. I'm saying that if their products were as good as they claim, they'd be able to make do with fewer of them. They claim 88% of legal tasks can be automated, and legal employees are among the most expensive. What kind of advertising is that? You can use our software to automate your legal work and save! Except we have more lawyers on the payroll than the industry average, and when litigating we hire white-shoe firms whose lawyers are the type who have their secretaries print things out for them. If the technology isn't saving Anthropic any money, why should we believe it will save anyone else money?

You can cite all the reasons why you think Anthropic needs a bigger legal department, and maybe they do, but keep in mind that other companies have their own unique issues that Anthropic doesn't have to deal with. For instance, Anthropic doesn't get sued all that often. I represent a subsidiary of a global machinery company based in Japan that got sued a dozen times last month. For one thing. In one jurisdiction. They're getting sued somewhere, for something, multiple times per day. The US arm of the parent company, whom you've certainly heard of, has five people in its in-house legal department. To be fair to Anthropic, once a company starts getting sued constantly it usually hires national coordinating counsel to manage the litigation, but it still has to prepare assignments to local counsel, accept service, and do all the other boring things that come with the territory, as well as monitor the litigation and grant settlement authority.

Anyway, of the six openings they're advertising, two deal with vendor contracts, one with datacenter construction, one with customer contracts, one with international compliance, and one with "frontier" issues, i.e., problems that don't exist yet and don't have clear answers. M&A and lobbying are the kinds of things that get contracted out and that the in-house team doesn't do much hands-on work on. It's more that outside counsel would occasionally meet with or provide reports to a senior member of the legal team, with maybe a junior member supervising the work, but it's not something anyone is doing full time.

I understand what you're saying, but I've actually looked at the job openings, and they're nothing like that. Of the six openings, exactly one, Frontier Counsel, deals with unusual, cutting-edge issues. The rest are just boring stuff like contracts and datacenter construction. And this position appears to be new; the Deputy Counsel announced the opening on her LinkedIn three weeks ago, and it may or may not be filled yet, so it's unclear whether anyone is even dedicated to this full-time at present.

The problem I have is that they don't act like they believe AGI is imminent. They say they do because they have to; if they didn't, people would stop giving them money. Just take the legal industry: Anthropic released a report earlier this year claiming that 88% of all legal tasks could be automated by AI, though only a small percentage of those tasks were actually being automated by Anthropic's customers. Meanwhile, they're telling students at a top law school that they should learn to splice cable or something because first-year associate jobs will be automated away. Aside from the confidentiality concerns of Anthropic monitoring law firm AI use, and the fact that first-year associates have been useless for as long as they've existed, Anthropic's own hiring practices do not suggest that 88% of legal work can be automated away by AI.

I can't find reliable totals for how many lawyers Anthropic employs, but they hired 24 last summer, and I'm sure they had some on the payroll prior to that. A gander at their website also shows several open positions, though these all have different titles and multiple offices listed, so it might be more of a constantly-hiring situation. I can't find reliable estimates of their total employee count either; I've seen everything from 2,500 to 4,500. If they currently have 30 lawyers working for them and 3,000 total employees, that's one lawyer for every 100 employees. That is, to put it mildly, an insane ratio. For comparison, Walmart has 155 in-house attorneys for 2.1 million total employees. FedEx has 60 in-house attorneys for 370,000 US employees. Tech companies have higher ratios, but not that high; Apple and Google are in the 1-per-200–300 range. These numbers are estimates, of course, and I'm not trying to argue that Anthropic doesn't need all these lawyers or that they're hiring more than necessary. My point is that AI doesn't seem to have reduced their reliance on in-house attorneys compared to other companies, and this at a company that should be, and supposedly is, having its attorneys make extensive use of its AI tools.
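If you want to sanity-check the arithmetic, here's a quick back-of-the-envelope sketch. The Anthropic figures (30 lawyers, 3,000 employees) are my own rough assumptions from above, not reported numbers; the Walmart and FedEx figures are the ones cited:

```python
# Rough lawyer-to-employee ratios from the figures cited above.
# The Anthropic numbers are assumptions, not reported figures.
companies = {
    "Anthropic (assumed)": (30, 3_000),
    "Walmart": (155, 2_100_000),
    "FedEx (US)": (60, 370_000),
}

for name, (lawyers, employees) in companies.items():
    # Integer division: roughly how many employees per in-house lawyer.
    print(f"{name}: 1 in-house lawyer per {employees // lawyers:,} employees")
```

That works out to roughly 1 per 100 for Anthropic versus 1 per ~13,500 for Walmart and 1 per ~6,200 for FedEx, which is the gap I'm pointing at.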

The other thing is that when you look at these job openings, they all have extensive experience requirements. The lowest I saw was 3 years of experience, and a few required 10 to 12 years. This is common for in-house positions. There were also a bunch of oddly specific experience requirements, which are often more in the "nice to have" category than anything else. The one requirement common to all the positions, and obviously non-negotiable, is that the candidate have an active license in at least one state. Now, I am licensed in three states and meet absolutely none of the other requirements, though I do have 10 to 12 years of experience, just in wholly unrelated fields. Something tells me that if I were to apply for one of these jobs and somehow got an interview, telling the hiring team that I had mad AI skillz that would allow me to complete 88% of my work and get up to speed on the remaining 12% quickly would not impress them. Then again, being a true believer was one of the requirements, so who knows.

Would you, personally, be in favor of a ground invasion involving 400,000–500,000 US troops? How many Americans killed in action do you think we should be willing to accept? 5,000? 30,000? 50,000?

Ha!

Is Trump's invasion un-American and irresponsible? I'm not sure what it's supposed to accomplish. If he wants to remove the Islamic Republic and ensure that it never gets nukes, he isn't going to do that by lobbing missiles at it. He needs to put together an invasion force of about half a million troops to occupy the cities and find and permanently destroy all of the nuclear sites, and make a firm commitment that they will not leave until the mission is accomplished, even if it takes decades. Of course, he won't do that, because it would be incredibly unpopular, but his current stance amounts to some sort of permanent dicking around, and his own intelligence community tells him as much. If he stops the war and resigns, then Vance or whoever might be able to do a sufficient amount of groveling to avoid the worst of the repercussions.

I understand the cringe at @FiveHourMarathon likening it to religion, but there is something apocalyptic about the idea. Not in the sense that it's world-ending, but in the sense that something vaguely amazing is supposed to happen that will change humanity, etc. How are we supposed to know when we've hit AGI? Sam Altman or whoever saying so isn't going to move the needle much, since it will just be perceived as a cynical marketing ploy. If it hits some benchmark, that's great, but I'm sure that by some benchmark we already had AGI in 2023. Besides, these benchmarks are all industry inventions anyway.

Of course, no one in the industry would ever say that we've reached AGI, because that would instantly shut off the money spigot and expose them all as frauds, even if they are true believers. As soon as they describe a product as AGI, expectations would skyrocket, since this is their supposed end goal; but when the sun goes up, sun goes down, moon goes up, moon goes down, and a month later they're still stuck with a 3% conversion rate, a trillion dollars in debt, and a product that the tech gurus all agree is slightly better than the last iteration, it's over. At that point, no one has any reason to give AI companies any more money.

So if it does happen, it has to happen in a big, noticeable way that nobody can ignore. It also has to be an unalloyed good approaching luxury gay space communism, because if it's anything else, Altman et al. are fucked as well. I honestly don't understand the glee with which AI promoters predict that 50% of all "knowledge jobs" will disappear within a year. Hell, the Chief Legal Officer of Anthropic went to Stanford Law School earlier this year and basically told the students that they should all drop out. Do they not understand basic economics? Do they not understand that 50% of the highest-paid workers getting laid off within a year would create an economic disaster the likes of which we've never seen? Do they not understand that this would ripple into non-knowledge work, as cratering demand combined with a labor glut would eliminate jobs and depress the salaries of the jobs that remained? Do they not realize that many of the enterprise clients they depend on to pay full freight for this product will be out of business? Do they not realize that everyone they owe money to will also be in a tight spot and will expect to be paid in full? Do they not realize that the AI companies themselves are likely to go bankrupt in such a scenario? It has to be a messianic vision, because it can't be anything else.