
faul_sname

Fuck around once, find out once. Do it again, now it's science.

1 follower   follows 3 users  
joined 2022 September 06 20:44:12 UTC
Verified Email

User ID: 884

I think the key to getting good results is figuring out how to get a verifiable success/failure signal back into the LLM's inputs. If you've got an on-premise application and therefore no access to logs and such from the customer, I expect the place you'll see the most value is a prompt which is approximately "given [vague bug report from the user], come up with a few informed hypotheses for what it could be by looking at the codebase, and then, for each hypothesis (and optionally "and also my pet hypothesis of XYZ" if you have one), iteratively create a script which would reproduce the bug on this local instance of the stack if the hypothesis were correct [details of local instance]".

As an added bonus, the code to repro a bug is hard to generate but easy to verify, and generally nothing is being built on top of it so if the LLM chooses bad or weird abstractions it doesn't really matter.
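For concreteness, here's a rough sketch of that loop. Everything in it is a stand-in: "llm" is a hypothetical prompt-in, completion-out callable, run_repro is whatever "run this against the local instance" means for your stack, and the exit-code-0-means-reproduced convention is just one I made up for the sketch.

import subprocess
import tempfile

def run_repro(script_text):
    # Run a candidate repro script against the local stack. Convention here:
    # exit code 0 means "bug reproduced", nonzero means it did not.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(script_text)
        path = f.name
    return subprocess.run(["python", path], capture_output=True, text=True)

def try_to_repro(bug_report, llm, max_hypotheses=3, attempts_per_hypothesis=3):
    # llm is a hypothetical callable: prompt string in, completion string out.
    hypotheses = llm(
        f"Bug report:\n{bug_report}\n"
        "Look at the codebase and list a few plausible root-cause hypotheses, one per line."
    ).splitlines()[:max_hypotheses]

    for hypothesis in hypotheses:
        feedback = ""
        for _ in range(attempts_per_hypothesis):
            script = llm(
                f"Hypothesis: {hypothesis}\n{feedback}\n"
                "Write a standalone script that exits 0 if and only if it reproduces "
                "the bug against the local instance of the stack."
            )
            result = run_repro(script)
            if result.returncode == 0:
                return hypothesis, script  # hard to generate, easy to verify
            feedback = "Previous attempt did not reproduce the bug:\n" + result.stdout + result.stderr
    return None

The point of the structure is just that the verification step (did the script exit 0?) is mechanical, so the failure signal gets fed back into the next prompt instead of relying on the model to grade itself.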

If you're represented by a lawyer, why are you asking an LLM questions about your case. That's the point of having a lawyer! If the lawyer can't explain things sufficiently, then you simply don't have a good lawyer.

I expect many people do not have a good lawyer, by this standard. Or can't afford a good lawyer, for the number of hours of the lawyer's time they'd need to actually understand what's going on with their case and what that means in practical terms for them.

Any email sent from the attorney's account is going to be encrypted, and they should ensure that any email the client sends will be encrypted as well.

To your point, this is probably the real answer. If the logs of the chat don't exist, they're not going to show up in court.

As far as I can tell

  1. Google didn't actually get sanctioned for that
  2. The only documents Google actually had to provide were the ones where the attorney did not say literally anything at all in the thread
  3. And only about 80% of those, even

I agree with your expectation that judges would not like that product very much.

Well yes. I expect this is mostly bottlenecked on lawyers, and on reflection I mostly expect that it'd be a white label "AI boosted chat with lawyer" product that law firms could offer rather than "the AI lawyer company". Maybe combined with inbound lead generation in the form of a directory of firms that offer the service.

New case law just dropped[^1]: a guy was charged with a $300M securities fraud. Before his arrest he used Claude (the consumer product) to research his own legal situation. He then handed the outputs to his defense counsel and claimed attorney-client privilege. The prosecutor said "no, that's not how this works, that's not how any of this works", and the judge agreed[^2]. That means that as of this decision, precedent says that if you go to chatgpt dot com and say "hey chatgpt, give me some legal advice", that's not covered under attorney-client privilege.

On the one hand, duh. On the other hand, it really feels like there should be a way to use LLMs as part of the process of scalably getting legal advice from an actual attorney while retaining attorney-client privilege.

I expect there's an enormous market for "chat with an AI in a way that preserves attorney-client privilege", and as far as I can tell it doesn't exist.

It was also interesting to read the specific reasoning given for why attorney-client privilege was not in play:

The AI-generated documents fail each element of the attorney-client privilege. They are not communications between the defendant and an attorney. They were not made for the purpose of obtaining legal advice. And they are not confidential. Each deficiency independently defeats the defendant's privilege claim.

I notice that none of these reasons are "conversations with AI are never covered by attorney-client privilege." They're all mechanical reasons why this particular way of using an AI doesn't qualify. Specifically:

  1. Claude is not an attorney, therefore Claude is not your attorney, therefore these were not communications between a defendant and their attorney.
  2. Sending a document to your lawyer after you create it does not retroactively change the purpose that the document was created for.
  3. Anthropic's consumer TOS says they can train on your conversations and disclose them to governmental authorities, and so the communications are not confidential.[^3]

The prosecutor also argues that feeding what your attorney told you into your personal Claude instance waives attorney-client privilege on those communications too. If a court were to agree with that theory, it would mean that asking your LLM of choice "explain to me what my lawyer is saying" is not protected by default under attorney-client privilege. That would be a really scary precedent.[^4]

Anyway, I expect there's a significant market for "ask legal questions to an LLM in a way that is covered by attorney-client privilege", so the obvious questions I had at this point were:

  1. Is there an existing company that already does this, and are they public / are they looking to hire a mediocre software developer?
  2. If not, what would it take to build one?

For question 1, I think the answer is "no" - a cursory google search[^5] mostly shows SEO spam from

  • Harvey, which as far as I can tell from their landing page is tools for lawyers (main value prop seems to be making discovery less painful)
  • Spellbook AI (something about contracts?)
  • GC AI, which... I read their landing page, and I'm still not sure what they actually do. They advertise "knowledge base capabilities" of "Organize", "Context", "Share", and "Exact", and reading that page left me with no more actual idea of their business model than before I went there.
  • Legora has a product named "Portal" which describes itself as "A collaborative platform that lets firms securely share work, exchange documents, and collaborate with clients in a seamless, branded experience." but seems to be just a splash screen funneling you to the "book a demo" button.

So then the question is "why doesn't this exist" - it seems like it should be buildable. Engineering-wise it is pretty trivial. It's not quite "vibe code it in a weekend" level, but it's not much beyond that either.

After some back-and-forth with Claude, I am under the impression that the binding constraints are

  1. The chat needs to be started by the attorney, rather than the client: under the Kovel doctrine[^6], privilege extends to non-lawyer experts only when the attorney engages them
  2. The agreement with the LLM provider needs to commit to zero training and no voluntary disclosure to authorities (pretty much all the major LLM providers offer this to enterprise customers AFAICT)
  3. It needs some way of ensuring that the chats are only used for the purposes of getting legal guidance on the privileged matter

None of these seem insurmountable to me. I'm picturing a workflow like

  1. Client signs up for an account
  2. Client is presented with a list of available lawyers, with specialties
  3. Client chooses one
  4. That lawyer gets a ping, can choose to accept for an initial consultation about a matter
  5. Lawyer has a button which opens a privileged group chat context between them, the LLM, and the client about that matter
  6. Lawyer clicks said button.
  7. In the created chat, the client can ask the LLM to explain legal terminology, help organize facts or documents the lawyer requested, or clarify what the lawyer said in plain language, or do anything else a paralegal or translator could do under the attorney's direction.
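On the engineering side, the core of step 5 is just scoping: a chat context that can only be opened by the attorney, with a system prompt pinned to the matter. A minimal sketch of what I mean (all the names here, Matter, PrivilegedChat, open_privileged_chat, are made up, and nothing about this says a court would actually respect it):

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Matter:
    matter_id: str
    attorney_id: str
    client_id: str
    description: str

@dataclass
class PrivilegedChat:
    matter: Matter
    opened_by: str  # must be the attorney on the matter (constraint 1)
    opened_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def open_privileged_chat(matter, requesting_user_id):
    # Constraint 1: only the retained attorney can open the privileged context.
    if requesting_user_id != matter.attorney_id:
        raise PermissionError("Only the retained attorney can open a privileged chat")
    return PrivilegedChat(matter=matter, opened_by=requesting_user_id)

def system_prompt(chat):
    # Constraint 3: pin the assistant to this one matter, acting under the
    # attorney's direction in a paralegal/translator role.
    return (
        "You are assisting an attorney and their client on a single legal matter: "
        f"{chat.matter.description}. Only explain terminology, organize facts and "
        "documents the attorney requested, and clarify the attorney's advice in "
        "plain language. Decline anything unrelated to this matter."
    )

Constraint 2 lives in the contract with the LLM provider rather than in code, which is part of why I think the hard part of this product is legal and go-to-market rather than engineering.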

Anyone with a legal background want to chime in about whether this is a thing which could exist? (cc @faceh in particular, my mental model of you has both interest and expertise in this topic)


[^1]: [United States v. Heppner, No. 25 Cr. 503 (JSR) (S.D.N.Y. Feb. 6, 2026)](https://storage.courtlistener.com/recap/gov.uscourts.nysd.652138/gov.uscourts.nysd.652138.22.0.pdf). The motion is well-written and worth reading in full.
[^2]: Ruled from the bench on Feb 10, 2026: "I'm not seeing remotely any basis for any claim of attorney-client privilege." No written opinion yet.
[^3]: This argument feels flimsy, since attorneys send privileged communications through Gmail every day, and Google can and regularly does access email content server-side for reasons other than directly complying with a subpoena (e.g. for spam detection). It could be that the bit in Anthropic's TOS which says that they may train on or voluntarily disclose your chat contents to government authorities is load-bearing, which might mean that Claude could only be used for this product under the commercial terms, which don't allow training on or voluntary disclosure of customer data. I'm not sure how much weight this particular leg even carried, since Rakoff's bench ruling seems to have leaned harder on "Claude isn't your attorney."
[^4]: A cursory search didn't tell me whether the judge specifically endorsed this theory in the bench ruling. So I don't know if it is a very scary precedent, or just would be a really scary precedent.
[^5]: This may be a skill issue - I am not confident that my search would have uncovered anything even if it existed, because every search term I tried was drowned in SEO spam.

... now you're just threatening me with a good time. Can we advocate that the states ignore the FDA too while we're here?

This sets up some pretty fucked incentives. Setting up fucked incentives has historically not gone well.

And the guy behind ClawdBot / MoltBook (or whatever it's called now) has openly discussed how his own deployment of ClawdBot was thinking and executing ahead of him.

I will point out that MoltBook had exposed its entire production database for both reads and writes to anyone who had an API key (paywalled link, hn discussion).

And this is fairly representative of my experience with AI code on substantial new projects as well. In the process of building something, whether it's something new or something legacy, the builder will need to make thousands of tiny decisions. For a human builder, the quality of those decisions will generally be quite tightly correlated with how difficult it is for a different human to make a good decision there, and so, for the most part, if you see signs of high-thoughtfulness polish in a few different parts of a human-built application, that usually means that the human builder put at least some thought into all the parts of that application. Not so for "AI agents" though. One part might have a genuinely novel data structure which is a perfect fit for the needs of the project, and then another part might ship all your API keys to the client, or build a SQL query through string concatenation, or drop and recreate tables any time a schema migration needs to happen.
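To make the string-concatenation failure mode concrete, here's a toy sketch (table, column names, and the use of sqlite are all just for illustration) of the difference between splicing user input into the SQL text and binding it as a parameter:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")

def lookup_email_concat(user_supplied_id):
    # String concatenation: whatever the caller passes becomes part of the SQL
    # text itself, e.g. "1 OR 1=1" returns every row.
    return conn.execute(
        "SELECT email FROM users WHERE id = " + user_supplied_id
    ).fetchall()

def lookup_email_parameterized(user_supplied_id):
    # Bind parameter: the driver treats the input strictly as a value.
    return conn.execute(
        "SELECT email FROM users WHERE id = ?", (user_supplied_id,)
    ).fetchall()

The concatenated version works fine in every demo and is one crafted input away from disaster, which is exactly the kind of thing that polish elsewhere in the codebase won't warn you about.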

That's not to say the "AI coding agent" tools are useless. I use them every day, and mostly on a janky legacy codebase at that. They're excellent for most tasks where success is difficult or time-consuming to achieve but easy to evaluate - and that's quite a lot of tasks. e.g.

  • Make an easy-to-understand regression test for a tricky bug: "User reports bug, expected behavior X, observed behavior Y. Here's the timestamped list of endpoints the user hit, all associated logs, and a local environment to play around in. Generate a hypothesis for what happened, then write a regression test which reproduces the bug by hitting the necessary subset of those endpoints in the correct order with plausible payloads. Iterate until you have reproduced the bug or falsified your hypothesis. If your hypothesis was falsified, generate a new hypothesis and try again up to 5 times. If your test successfully reproduces the bug, rewrite it with a focus on pedagogy - at each non-obvious step of setup, explain what that step of setup is doing and why it's necessary, and for each group of logically-connected assertions, group them together into an evocatively-named assert() method."
  • Take a SQL query which returns a piece of information about one user by id and rewrite it to performantly return that information for all users in a list (see the sketch after this list)
  • Review pull requests to identify which areas would really benefit from tests and don't currently have them
  • Review pull requests to identify obvious bugs
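For the SQL rewrite case above, the shape of the transformation is pretty mechanical, which is exactly why it's a good LLM task. A sketch, with a made-up orders table and sqlite standing in for whatever database you actually use:

import sqlite3

conn = sqlite3.connect("app.db")  # assumed database with an orders(user_id, ...) table

def order_count_for_user(user_id):
    # Original shape: one round trip per user.
    row = conn.execute(
        "SELECT COUNT(*) FROM orders WHERE user_id = ?", (user_id,)
    ).fetchone()
    return row[0]

def order_counts_for_users(user_ids):
    # Rewritten shape: one query for the whole list. Only the placeholders are
    # built with string formatting; the values still go through bind parameters.
    placeholders = ",".join("?" for _ in user_ids)
    rows = conn.execute(
        "SELECT user_id, COUNT(*) FROM orders "
        f"WHERE user_id IN ({placeholders}) GROUP BY user_id",
        list(user_ids),
    ).fetchall()
    counts = {uid: 0 for uid in user_ids}  # users with no rows stay at 0
    counts.update(dict(rows))
    return counts

It's also easy to verify: run both versions over the same list of users and diff the outputs.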

I think about this a lot, but I also catch myself thinking about how easy it must have been in the 90s to find alpha in X, and then realize that with the knowledge I have now it would be easy, but that obtaining that not-yet-common knowledge would have been much harder in the 90s. I'm sure that there's similar alpha available today if you know where to look for it, but if it was easy to find, it wouldn't be alpha.

Even Disney World has MBA'd itself into a place I would no longer remotely describe as the "happiest place on earth".

I actually went to Disneyland with my wife and daughter a couple months ago, and I was shocked by how much it wasn't MBA'd. The tickets were cheaper (inflation-adjusted) than they were when I was a kid, the food was decently good and not horribly expensive (~$20 / meal for decent bbq with big enough portions that we only needed one full meal plus a few snacks during our entire time from park open to park close), there weren't really any of the rigged carnival games that are optimized to make it seem like you just barely missed the big prize and should just try One More Time that you see in other amusement parks, and the lines didn't shove ads in your face (again, unlike other amusement parks). Possibly I just went in with sufficiently low expectations that I was pleasantly surprised.

Plus Trump's team is already running into random vigilante judges in farflung circuit courts attempting to adjust whatever they pass.

I think this is a symptom of the legislative branch refusing to legislate, leading to a power vacuum which both the executive and judicial branches try to fill.

Take the statement "I think ICE was in the right during the recent shooting, because <reason>".

Take that <reason> and plug it into the statement "I think we should go down the list of registered Democratic voters and send hit squads to their houses to kill everyone present, because <reason>".

Does the sentence still make sense?

Example A: <reason>="declining to enforce laws if bystander-activists actively make the situation more dangerous sets up terrible incentives". The "kill the dems" statement makes no sense if you put this reason in, thus it is not an example of the sort of thing DiscourseMagnus was talking about.

Example B: <reason>="the dems had it coming for ruining our country". The "kill the dems" statement does make sense if you put this reason in, thus it is an example of the sort of thing DiscourseMagnus was talking about.

TBH on here I don't see much of example B. On xitter I do, but the discourse here on the motte has been refreshingly free of that for the most part. I do agree with DiscourseMagnus that example B is bad and the sort of thing I want to see less of, but I don't agree with his implication that it's the sort of thing I see a lot of here.

This really seems like a case where you should petition your elected representatives to change the laws. If our legislators actually started legislating that would help a lot with the current power struggles between the judicial and executive branches, and maybe having their constituents getting on their case for failing to legislate would help with that.

Midterm elections are in 9 months. One way to lose is by declining to try, but another way to lose is deciding to try really hard, fucking everything up badly in a highly legible way, and being booted out of your position.

It's about a 30 min walk / 10 min uber from rockridge bart, so pretty doable. There's a sequences reading group every Tuesday at 6:30 pm at lightcone if you want to get the full bayrat experience (cc @falling-star).

The most sobering part? It’s domestic. Funded, trained (somewhere), and directed by people who live in the same country they’re trying to paralyze law enforcement in

Pangram says 100% AI generated. Make what assessments you will about the reliability of the author and how likely it is that they're actually a former Special Forces Warrant Officer.

But the rest of the rhyme is correct.

Thirty days have September, April, May, and December. All the rest have thirty one, save February, which is "fun".

I think "I can write assembly code better than the compiler" is usually true if and only if we are "unfair" to the compiler. As such,

As you pointed out, the assembly you've written does not match the C code, and would not be correct for the compiler to produce.

Yep. It would not be correct, in the general case, for the compiler to produce this code. As the human programmer, you have more context, and are able to determine that it is correct to produce this code in this particular place though. I do agree with you that "write code that compiles to the fast assembly" is probably the right way here, but often you don't even realize that the optimization is necessary until you benchmark, and reading the assembly will tell you what shape of optimization you need. What you do about that will vary. The correct answer is rarely "write assembly", but that's usually not because you couldn't write assembly that served your needs better than the compiler's asm, but instead because the maintenance burden of your own asm is large.

Anyway, that was kind of a toy example, but it did rhyme with a real case I've run into IRL, where compiling idiomatic code led to assembly which was suboptimal in this way (even with -O3 -march=native -ffast-math).

It also rhymes with things I've observed about LLM-assisted coding: even now, most of the ways LLMs fail in everyday situations are due more to them lacking context and affordances than to them being less capable than a human with the same information and affordances. An LLM might be given a codebase and a ticket describing a change and have to make educated guesses about how the code is called in practice, while a human given the same ticket might go "oh, I need more information, let me go look at the logs to see what order these calls happen in prod" before touching code. The LLM might even have tools to pull those logs, but not know when to use them (I do see this quite a bit too).

Do you have a source on this being the justification the US is using?

I like that analogy. However, there's one point that applies here, and that I think will also apply to LLM-generated code: at no point did it become impossible for an assembly programmer to improve the output generated by an optimizing compiler.

Even today, finding places where your optimizing compiler failed to produce optimal code is often pretty straightforward[1]. The issue is that it's easy to have the compiler write all of the assembly in your project, and it's easy from a build perspective to have the compiler write none of the assembly in your project, but having the compiler write most but not all of the assembly in your project is hard. You have many choices for what to do if you spot an optimization the compiler missed, and all of them are bad:

  1. Hope there's a pragma or compiler flag. If one exists, great! Add it and pray that your codebase doesn't change such that your pragma now hurts perf.
  2. Inline assembly. Now you're maintaining two mental models: the C semantics the rest of your code assumes, and the register/memory state your asm block manipulates. The compiler can't optimize across inline asm boundaries. Lots of other pitfalls as well - using inline asm feels to me like a knife except the handle has been replaced by a second blade so you can have twice as much knife per knife.
  3. Factor the hot path into a separate .s file, write an ABI-compliant assembly function and link it in. It works fine, but it's an awful lot of effort, and your cross-platform testing story also is a bit sadder.
  4. Patch the compiler's output: not a real option, but it's informative to think about why it's not a real option. The issue is that you'd have to redo the optimization on every build. Figuring out how to repeatably perform specific transforms on code that retain behavior but improve performance is hard. So hard, in fact, that we have a name for the sort of programs that can do it. Which brings us to
  5. Improve the compiler itself. The "correct" solution, in some sense[2] — make everyone benefit from your insight. Writing the transform is kinda hard though. Figuring out when to apply the transform, and when not to, is harder. Proving that your transform will never cause other programs to start behaving incorrectly is harder still.
  6. Shrug and move on. The compiler's output is 14x slower than what you could write, but it's fast enough for your use case. You have other work to do.

I think most of these strategies have fairly direct analogues with a codebase that an LLM agent generates from a natural language spec, actually, and that the pitfalls are also analogous. Specifically:

  1. Tweak your prompt or your spec.
  2. Write a snippet of code to accomplish some concrete subtask, and tell the LLM to use the code you wrote.
  3. Extract some subset of functionality to a library that you lovingly craft yourself, tell the LLM to use that library.
  4. Edit the code the LLM wrote, with the knowledge that it's just going to repeat the same bad pattern the next time it sees the same situation (unless you also tweak the prompt/spec to avoid that)
  5. I don't know what the analogue is here. Better scaffolding? Better LLM?
  6. Shrug and move on.

I do think there's a decent chance that some combination of 1 and 4 will work for LLM-generated code in a way that wasn't really viable for assembly, but that might just be cope.


Footnotes


^[1]: For a slightly contrived concrete example that rhymes with stuff that occurs in the wild, let's say you do something along the lines of "half-fill a hash table with entries, then iterate through the same keys in the same order summing the values in the hash table", like so.

// Throw 5M entries into a hashmap of size 10M
HashMap *h = malloc(sizeof(*h));
h->keys = calloc(10000000, sizeof(int));
h->values = calloc(10000000, sizeof(double));
for (int k = 0; k < 5000000; k++) {
    hashmap_set(h, k, randn(0, 1));
}

// ... later, when we know the keys we care about are 0..4999999
double sum = 0.0;
for (int k = 0; k < 5000000; k++) {
    sum += hashmap_get(h, k);
}
printf("sum=%.6f\n", sum);

Your optimizing compiler will spit out something along the lines of

...
# ... stuff ...
                                        # key pos = hash(key) % capacity
.L29:                                   # linear probe loop to find idx of our key
    cmpl    %eax, %esi
    je      .L28
    leaq    1(%rcx), %rcx
    movl    (%r8,%rcx,4), %eax
    cmpl    $-1, %eax
    jne     .L29
.L28:
    vaddsd  (%r11,%rcx,8), %xmm0, %xmm0  # sum += values[idx]
# ... stuff ...

This is the best your compiler can do, since the ordering of floating point operations can matter. However, you the programmer might have some knowledge your compiler lacks, like "actually the backing array is zero-initialized, half-full, and we're going to be reading every value in it and summing". So you can replace the "optimized" code with something like "Go through the entire backing array in memory order and add all values".

# ... stuff ...
.L31:
    vaddsd  (%rdi), %xmm0, %xmm0
    vaddsd  8(%rdi), %xmm0, %xmm0
    vaddsd  16(%rdi), %xmm0, %xmm0
    vaddsd  24(%rdi), %xmm0, %xmm0
    addq    $32, %rdi
    cmpq    %rdi, %rax
    jne     .L31
# ... stuff ...

I observe a ~14x speedup with the hand-rolled assembly here. And yet, in real life, I would basically never hand-roll assembly here.

^[2]: Whenever someone says something is true "in some sense", that means that thing is false.

At least one person on this forum is associated with people in the tpot/post-rat scene on twitter.

Specifically I'm calling "first full paragraph was written by OP, rest was written by LLM told to continue the thought"

Happens all the time with newer players playing at a casino. They come in, sit down all fast and loose, every veteran at the table can see it a mile away. What happens on every hand? If the new guy leads a bet ... fold, fold, fold, fold, fold. You just wait them out. Eventually, they get bored (quite quickly, actually) because there is "no action at this table!"

If you're at a table that looks like this, and you want to make money, you're at the wrong table. Honestly "they (plural!) get bored" should have already told you that there was more than one skilled player at the table, and therefore that it was a bad table.

Why would bytedance fake it, or why would some specific employees of bytedance fake it? The former is a hard question (bytedance the company does not need or benefit from prestige), but the latter is much easier (individual employees absolutely do).