
Friday Fun Thread for April 24, 2026

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), and it is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


So, I had an interesting problem at work that revealed something fascinating, I think.

I have to beat around the bush some, so bear with me. We're using a popular framework for our database layer. We went to do things to this database that the database is theoretically capable of, but the framework doesn't support. Sad face. All the web searches and associated AI-formulated answers confirm: it's not possible to do said thing in said framework.

Except it is. The framework is open source. You can just read the source code. Turns out you can ask for the handle to the underlying interop pointer, and it'll just give it to you. You don't even have to do weird fucky things like dig around in private data space. It's a public API call to just get the interop pointer. The driver it's calling is open source too, and you can just call the function you want on the interop pointer it gives you, and it just works. It's fine. If it's confusing, the test cases in the driver's GitHub repo even show you exactly how to do it, multiple ways. Reading unit tests is awesome for stuff like that. This is the furthest thing from impossible. It's practically spelled out for you with examples if you just read the fucking code.
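The post deliberately doesn't name the framework or driver, so here's a toy sketch of the pattern in Python: a hypothetical mini-wrapper (`TinyORM`, invented for illustration, standing in for the real framework) that only supports plain queries, but publicly exposes its underlying driver connection, letting you call a driver feature the wrapper itself never surfaced.

```python
import sqlite3

# Hypothetical mini-"framework": a thin wrapper that only supports
# running plain SQL -- a stand-in for the unnamed real framework.
class TinyORM:
    def __init__(self, path=":memory:"):
        self._conn = sqlite3.connect(path)

    def query(self, sql, params=()):
        return self._conn.execute(sql, params).fetchall()

    # The escape hatch: a public accessor for the underlying driver
    # connection, analogous to the framework's interop pointer.
    def raw_connection(self):
        return self._conn

db = TinyORM()

# A feature the wrapper doesn't support: registering custom SQL
# functions. Drop down to the driver handle and do it directly.
raw = db.raw_connection()
raw.create_function("shout", 1, lambda s: s.upper())

# The wrapper happily runs queries that use the driver-level feature.
rows = db.query("SELECT shout('hello')")  # [('HELLO',)]
```

The names here are all invented; the point is only the shape of the trick: public accessor, raw handle, one driver call, done.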

So, why does AI all think it's impossible? Because as of 3 years ago, this functionality wasn't exposed by the driver. So all the Stack Exchange questions about this correctly stated that, as of 3+ years ago, this was impossible. LLMs got trained on Stack Exchange (supposedly), and now Stack Exchange is a dead site. The LLMs (supposedly) killed off the source of knowledge they were being trained on, and now they can't learn that a few years later this task is not just possible, but trivially easy in, like, 6 lines of code. Totally within the remit of the typical "how do I do thing" programming question.

why does AI all think it's impossible?

My immediate answer would be: because AI does not think. It just rearranges known data. And known data says these things are not done, just as you noted. Moreover, somewhere in the RLHF phase they probably beat the tendency to seek unapproved shortcuts out of it, otherwise it'd advise you to rob a bank when asked how to get money easily. So it'd be trained to pretend that things that are not allowed do not exist. So I am not surprised - and I have been in this situation many times, btw.

One of the reasons why I am not yet worried about being replaced by LLMs. Sure, they can generate code now. But generating code is the boringest part of the work. Figuring out which code to generate is the trick. Once you figure out what needs to be done, I am just fine letting the LLM arrange the bits properly.