

I know Copilot is used a bit here. They mostly use it to look up things and write tiny scripts.

If there was a rail line running along every interstate, it probably would be.

Well. Maybe not? You’d still need similar personnel to get supply from the railheads to each Wal-mart and gas station. We just load the trucks up earlier.

What’s a current example of “A Thing” that’s limited to some cities? I figure with mass, interconnected culture, everything unifies pretty fast.

Sailing? No way. Sails are barely even viable as a hybrid solution. Sure, tech will improve, but a full sailboat is not going to be competitive on most routes.

…which is where my weird prediction comes in. Carbon capture is going to improve a lot by 2050. We’ll be using the most solar- or wind-friendly places in the U.S. to reassemble hydrocarbons, which in turn will be burned in our most energy-dense vehicles. Honestly, this might not be that weird. I don’t really know the state of biogas or whatever they’re calling it.

fraud

How is motherhood fraud supposed to work? Especially in China, world leaders in personal surveillance?

It just seems odd to declare AI dead based on this data.

I may have miscommunicated here. I don't think it's dead. I think it'll be useful on a much longer time horizon than was predicted, and not in a way that we expected. The slope of enlightenment is next.

I'm definitely on the bullish side, but it's quite ridiculous. I just have a script that I run every morning to burn up a bunch of tokens so I meet whatever metric they're tracking. (I did use AI to write that script!)
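(A minimal sketch of what such a token-burning script might look like, assuming the standard OpenAI Python SDK; the model name, prompt, and loop count are placeholders, not the poster's actual setup:)

```python
# Toy "burn some tokens every morning" script. Assumes an API key in the
# environment; the model name and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

for _ in range(20):  # arbitrary number of calls, enough to move the usage metric
    client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Summarize the plot of Moby-Dick."}],
    )
```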

Finally, a goal that can Goodhart itself.

Ironically, I think our coders are least likely to use it. Integration is limited to a GPT wrapper, which is all well and good for people revising their emails, but not so much for serious programming. I suspect it’s an export compliance thing.

Often AI deals require a promise to spend X tokens over Y time period. It’s like promising to spend a certain amount of money on a company’s services without specifying which services will be bought. So if the buyer is under the committed spend, they encourage people to use more tokens.

Since most (successful?) adopters get their tools by licensing from GPT or Claude, I would guess it’s an attempt to show return on that investment.

Yeah, it screams KPI to me.

We’re techy enough that our investors want to see it, so by God, we’re going to pay someone for their model. Then you’ve got to show that you’re actually using it.

Pink collar: low-skill office work, so named because it has customarily been done by women, e.g. secretaries.

And secretaries and call centers and the like will be the jobs actually hardest hit.

Judging by the report, the main vector is “wouldn’t it be cool if we could achieve 10% more with the same personnel?” That’s more realistic than blockchain, but it’s not explosive. It’s not overcoming a longstanding cliff in the same way as, say, telecom.

Well. Maybe in fields like digital art and voice acting. It’s not a coincidence that those are the fields producing the most Luddite-leaning concerned citizens. But how many companies can directly turn voice synth into revenue?

Despite the rush to integrate powerful new models, about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L.

If this study is trustworthy, the promise of AI appears to be less concrete and less imminent than many would hope or fear.

This seems like an extremely odd metric to support the argument that you are making.

At the very least, to use the 5% success rate to understand AI's revolutionary potential, we need to know what the average value unlocked in those 5% of successes is, and the average cost across the whole dataset. If the costs are minimal, and the returns are 100x costs for the successes, then even if only 5% succeed every single company should be making that bet.
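To put rough numbers on that bet (purely illustrative figures, not anything taken from the study):

```python
# Back-of-the-envelope expected value of funding an AI pilot, with made-up
# numbers: a 5% chance of "success" returning 100x the (normalized) pilot cost.
pilot_cost = 1.0                      # normalized cost of one pilot
p_success = 0.05                      # share of pilots that succeed
return_on_success = 100 * pilot_cost  # hypothetical payoff when a pilot works

expected_value = p_success * return_on_success - pilot_cost
print(expected_value)  # 4.0 -> positive expected value, so the bet is worth making
```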

On top of that, what's the timeline function? When were these programs launched? How long have they been going on? Are the older programs more successful than the newer ones? If most of the 5% are over a year old, while most of the 95% are less than a year old, we might be judging unripe tomatoes here.

Then, add to that, there's value in having institutional knowledge of and expertise in AI. By having employees who understand AI, even if the pilot program fails, they'll see opportunities to implement it in the future and understand how to integrate it into their workflows.

It just seems odd to declare AI dead based on this data.

I can believe people using AI for different things are having very different experiences and each reporting their impressions accurately.

Partially, but there is also a honeymoon phase and a phenomenon where people feel more productive but have mostly just shifted what they do, not increased their actual productivity.

Perhaps this is something that will pass with increased experience with the tools, but it has not been my experience with the people I manage, nor for my friends in similar managerial roles. It could of course be a combination of the above as well. Maybe the models just need to get a bit better and people need experience with those models. Who knows?

To me it seems highly specific where AI actually is a meaningful productivity booster for programming, though in those specific niches it clearly is very valuable.

I would be more worried for areas where things don't actually have to be "correct" (for quality or legal reasons), like visual art generation. Even there, I imagine the impact will mostly fall on work that is liable to be outsourced (or already has been).

Have you noticed a difference in quality of analysis of mature code-bases versus its ability to make changes/additions to them? The consensus on our team so far seems to be that its analysis is significantly better than its generation, though how much of that is the quality of the AI versus the quality of our prompting is rather up in the air.

This makes sense to me. The impact of the spreadsheet has been huge, but it took a long time to settle in everywhere, and the accounting department still exists, even though the guys "running the numbers" don't anymore. There are still plenty of operational systems running on DOS or OS/2: if it isn't broken, don't fix it, and things take time to replace.

What’s the base rate?

If I saw “rapid revenue acceleration” on a mass email from my upper management, I’d expect roughly zero change in my day to day experience. 95% “little to no impact” is right there in Lizardman territory.

Press releases have the same incentives whether or not a technology (or policy, or reorg, or consent decree, or…) is actually going to benefit me. Companies compete on hype, and so long as AI is a Schelling point, we are basically obligated to mention it. That’s not evidence that the hype is real, or even that management believes it’s real. Just that it’s an accepted signal of agility and awareness.

The article points out a number of stumbling blocks. Centralizing adoption. Funding marketing instead of back-office optimizations. Rolling your own AI. Companies which avoided these were a lot more likely to see actual revenue improvements.

I can say that my company probably stalled out on the second one. I’m in a building full of programmers, but even the most AI-motivated are doing more with Copilot at home than with the company’s GPT wrapper. There’s no pipeline for integrated programming tools. Given industry-specific concerns about data, there might never be!

But that means we haven’t reached the top of an adoption curve. If the state of the art never advanced, we could still get value just from catching up. That leaves me reluctant to wave away the underlying technology.

Well, that's an interesting question: banks use deposits to issue loans, but what is Coinbase doing with its 2 million bitcoin in deposits? That's a valid question, but it's very different from the assertion @Tree was proposing. Looking at https://data.bitcoinity.org/markets/volume/30d?c=e&t=b the trading volume in BTC is around tens of thousands of coins traded daily, which is of course a small part of the overall bitcoin supply, but still a respectable volume as it seems to me. Over a longer period of months, the volume is in the millions, so I don't think it'd be right to assume the BTC market is so illiquid that prices are substantially driven by a lack of liquidity. Of course, I am not an economist, so if somebody more qualified could point out an error in this assessment, I'd be thankful, but that's how it appears to me.

Yea I remember a fair amount of this clustered west of Skid Row in a neighborhood called the Toy District I think. Not just toys though, just about anything that can be mass imported from Asia wholesale can be found there now.

https://en.wikipedia.org/wiki/Toy_District,_Los_Angeles

The article discusses the economy of the area a little bit. Looks like this: https://maps.app.goo.gl/1XyjCRMusLzY5SGBA

From my company's perspective, a lot of AI use is limited by policy. We aren't allowed to provide proprietary information; company policy is to "only enter things that we wouldn't mind going viral on the Internet." This really limits anything I could do with it. At most I can use it as a Miss Manners guide. The coders are able to use it more, which frees them up to play Madden for longer or attend more meetings with the Product Owner.

My experience as a senior software engineer is that I am not worried about AI coming for my job any time soon. My impression (somewhat bolstered by the article) is that AI is most efficient when it is starting from scratch and runs into issues when attempting to integrate into existing workflows. I tell the AI to write unit tests and it fails to do all the mocking required because it doesn't really understand the code flow. I ask it to implement a feature or some flow and it hallucinates symbols that don't exist (enum values, object properties, etc). It will straight up try to lie to me about how certain language features work. It's best utilized where there is some very specific monotonous change I need to make across a variety of files. Even then it sometimes can't resist making a bunch of unrelated changes along the way. I believe that if you are a greenfield-ish startup writing a ton of boilerplate to get your app off the ground, AI is probably great. If you are a mature product that needs to make very targeted changes requiring domain knowledge AI is much less helpful.

I can believe people using AI for different things are having very different experiences and each reporting their impressions accurately.

I actually somewhat like bitcoin in the long term as a store of value. It is the first mover in terms of creating artificial scarcity, and has surprisingly few weaknesses in terms of preserving that scarcity. Contrast that to something like gold where changes in mining output or industrial demand can impose external price pressures outside of the supply/demand for a safe haven.

That being said, I think the rest of crypto is arguably the largest bubble in human history. I don't see any real value provided by the chains that try to act as both a platform and as a currency. And I expect that at some point those will all come crashing down. And when this happens, I expect that bitcoin will take a major hit. I doubt it will be a lethal blow, but I could easily see a >50% loss happening. That's a lot of risk if you are trying to preserve value.

That Jesus is God (the Father) is not in the Nicene Creed either. Nathan Jacobs addresses the issue here: https://nathanajacobs.substack.com/p/does-jesus-claim-to-be-god

There was not even a proposed vector for value with Blockchain most of the time. AI is very different.

Why would it decrease pink collar work? Or do you mean the administrative overhang? But why would that hit pink collar stuff more than anything else?

The big problem for now is some form of data validation. There are a lot of customer support jobs that could be 99.9% done by AI, but aren’t because of the tail risk that some combination of words will reveal the wrong customer’s information, will allow someone into an account without the right checks, etc., plus general reputational risk, like the fact that countries and states are now accusing Facebook LLMs of flirting with minors or whatever. All the stuff that LLM red-teaming groups or ChatGPT jailbreak communities do, essentially. You can fire a bad employee and shed the legal liability, but if it’s your LLM and the foundation model provider has a big fat liability disclaimer in its contract (which it will), you’re more fucked than you’d be if an employee had just gone rogue.

The eventual solution to this - as with self-driving cars - is to improve accuracy and consistency (by running things through multiple LLMs, including prompt-security ones like those slowly coming online through Amazon Bedrock and other platforms) until the risks are negligible and therefore insurance costs fall below the $50m a year a big corporation is paying for call centers.
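As a sketch of what “running things through multiple LLMs” can look like (everything here is illustrative: the model names, the screening prompt, and the fallback behavior are assumptions, not any particular vendor’s product):

```python
# Illustrative two-stage support pipeline: a primary model drafts the reply,
# then a second "checker" model screens it for leaked account data before
# anything reaches the customer. Model names and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

def draft_reply(customer_message: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a support agent. Never reveal account details."},
            {"role": "user", "content": customer_message},
        ],
    )
    return resp.choices[0].message.content

def reply_is_safe(draft: str) -> bool:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer YES or NO only: does the text below leak account data, credentials, or another customer's information?"},
            {"role": "user", "content": draft},
        ],
    )
    return resp.choices[0].message.content.strip().upper().startswith("NO")

def handle(customer_message: str) -> str:
    draft = draft_reply(customer_message)
    # Escalate to a human whenever the checker isn't satisfied.
    return draft if reply_is_safe(draft) else "Let me connect you with a human agent."
```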

But it will take a few more months, maybe a couple of years, sure.