Friday Fun Thread for April 10, 2026

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


After running for tens of minutes and spitting out a bunch of technical data about the API, it gave me this message:

This is hilarious, but a useful illustration of how people come to think of LLMs as useless. Claude in an agentic harness (e.g. Claude Code) easily researches new APIs and figures out how to get information out of them. For example, I was trying to get historical heat index information: CC presented several possible data sources, I asked if it could use a different one, and it looked at it and figured out how to call the API and extract the relevant info.

I agree, but it goes to the heart of my fundamental disagreement with the way AI is presented to the public. If Claude Code does that, that's great, but I wouldn't think I needed a coding LLM to look up basic statistical data from government sources. So when someone like me, who wants to use it for other things that seem to be in its wheelhouse, tries it and gets crap for a result, we get pissed off. Believe me, this is only one of the LLM-assisted fails I've experienced in the past month. So I get the inevitable response of "Well, if you were using the frontier deluxe model that costs $200 a month..." at which point I cut you off and say "No. This software hasn't given me any indication that it's worth $20/month, let alone $200." It's like a mirage, where what I'm looking for is always off in the distance but I never seem to get there.

We're now at a point where companies in perhaps the only industry in history that's worth a trillion dollars despite not being profitable at all have to use all that compute power to subsidize nonsense from the trivial (AI girlfriends) to the actively harmful (cheating on term papers), because they've relied on a business model where they grow rapidly by creating a hype cycle that lets them raise eye-watering sums from venture capital to develop an expensive product with limited commercial use.

In a rational world, OpenAI would have remained a research nonprofit that let universities and the government use its models for free until they had developed to the point that there was a viable commercial use for them other than creating glorified chatbots. And when that point came, the hype cycle would hopefully be muted enough that companies wouldn't implement them unless they were seeing real returns. Instead they've created this world where they've spent more money than they could ever hope to earn creating products that don't make money, with pathetic monetization rates, because they've gotten into the habit of offering them to the general public for free.

And they keep creating more bullshit to justify it, like "inference is profitable." Really? Because when I hear that, I hear "If we ignore all of our expenses except one category, the company makes money." It's like justifying pouring money into a failing retail outlet by pointing out that you sell every item for more than you paid the supplier for it. "We're profitable if you only look at COGS!" And even that isn't entirely the truth, since a large percentage of this inference revenue comes from other AI startups, like Perplexity, that are themselves unprofitable hype machines propped up by venture capital.

I apologize for the rant, but if you want me to believe in this technology that fails to do everything I ask of it, things it could theoretically do faster than I can myself, you can't keep telling me it's only because I'm not paying enough money. Because I'm sure that when Oeuvre or whatever they call the next Claude model comes out, costing ten times as much to train and five times as much to run, I'll be told that Opus or CC or whatever couldn't handle it, but if I only paid the price of admission all my problems would be solved.
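The "profitable if you only look at COGS" complaint is just income-statement arithmetic, and it can be made concrete with a toy example (all figures below are hypothetical, not anyone's actual numbers):

```python
# Toy income statement illustrating the "inference is profitable" framing:
# gross profit (revenue minus COGS) can be positive while the company
# still loses money once all other expenses are counted.
# All numbers are made up for illustration.
revenue = 4_000_000_000   # hypothetical annual inference revenue
cogs    = 2_500_000_000   # hypothetical compute/serving costs (the one category counted)
opex    = 3_500_000_000   # hypothetical R&D, training runs, salaries, sales, etc.

gross_profit = revenue - cogs         # the "inference is profitable" number
net_income   = revenue - cogs - opex  # the number that actually matters

print(gross_profit)  # positive: "profitable if you only look at COGS"
print(net_income)    # negative: the company loses money overall
```

With these made-up figures, gross profit is +$1.5B while net income is -$2B, which is exactly the gap the rant is pointing at.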

Edit: I ran the query again and it did try to code something to get access to the API, but was unsuccessful. It also failed to recognize that a lot of this data, if not all of it, doesn't require access to the API and is available in PDF documents available on third-party websites.

To be clear, you can use Claude Code with the $20-a-month plan. You can even use something like opencode with a cheaper model paid by the token on OpenRouter, or run a local model like Gemma for ~free once you have the hardware.

Despite the name, it isn't really coding-specific. I presume "Claude cowork" is largely the same as Claude Code but with a name that doesn't scare the hos.

I do recognize the frustration that Claude can't look up the API details through the web interface. Perhaps there are security considerations and they didn't want to have the model call arbitrary endpoints.

As far as profitability goes, we shall only know when these companies IPO or go bust, but Anthropic's revenue growth is massive.

Their revenue growth is massive to the point of suspicion, especially since they've admitted that they're using unaudited internal numbers that don't follow GAAP. Why wouldn't they be using GAAP? The only explanation is that the GAAP numbers are pretty crappy. In fact, we know they're crappy, because in court filings Anthropic stated it made $5 billion in total GAAP revenue between 2023 and the end of last year. The huge numbers you see are annualized projections that aren't representative of any actual revenue.

The huge numbers you see are annualized projections that aren't representative of any actual revenue.

Of course they are representative of some actual revenue: it's an annualized extrapolation of the past month of revenue. I grant that this doesn't mean they made that $14B in the past year (but no one is claiming they did), yet it's not a random number, as you seem to be suggesting. Huge growth in monthly income is real growth.
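The disagreement here is over two different arithmetic operations on the same data: an annualized run rate multiplies the latest month by 12, while trailing revenue sums what was actually booked. A minimal sketch with hypothetical monthly figures (not Anthropic's real numbers) shows how far apart they can be for a fast-growing company:

```python
# Hypothetical monthly revenue in $M for a fast-growing company,
# most recent month last. Figures are purely illustrative.
monthly = [200, 260, 340, 440, 570, 740, 960, 1_150]

run_rate = monthly[-1] * 12  # "annualized run rate": latest month x 12
trailing = sum(monthly)      # revenue actually recognized over these months

print(run_rate)  # the big headline-style number
print(trailing)  # the much smaller amount actually booked
```

With these made-up figures the run rate comes out to $13.8B while only $4.66B was actually recognized, so both sides of the argument can be right: the extrapolation reflects real recent revenue, and it also isn't money that was ever earned in a year.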