Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.

Anyone else get annoyed (or at least roll their eyes) when someone says they "built" X with AI? It comes off to me as stolen credit, and ironically highlights (for me) the diminishing human input in the value chain.
For context, I am referring specifically to work-related scenarios, where X is not some output of design and engineering, but actually the result of a few prompts. In the past few weeks, the overwhelming majority of the people I work with seem to have completely and openly outsourced their jobs to Claude.
While I've been using ChatGPT since it first came out to help with work (from research, to QA, to drafting documents and communications, etc.), I've always been very careful to keep its input in any final product discreet. Those who have left their AI outsourcing lazy and obvious (including tech leaders and C-levels who should know better), like leaving em dashes and AI fluff in emails, have previously been quietly mocked for their 'boomerish' obliviousness, or resented for the obvious lack of engagement.
We had a C-level a few months back write a corporate communication that was meant to be both encouraging and strategically informative about some top-level changes, and it was clear AI slop. It ended up doing the exact opposite - engendering resentment at what was obviously someone too lazy to tell us themselves.
But anyway, in the last three weeks it's flipped. Everyone gets on every call proudly announcing the latest thing they 'built' in Claude, while pre-apologizing for errors in the work in progress as if it couldn't possibly have been reviewed.
These are dashboards, briefs, presentations, documents - all things that would previously have been expected to be completely error-free, and important enough to make strategic decisions from. Now it's all "yeah, I had Claude do my work and I haven't even checked it yet" - but as a flex, because currently 'use AI' is a key KPI. It all seems so short-sighted; it makes me sad.
I do this... in that I make AI tools for work, albeit I do test them and put in the effort to make sure the quality is good. I always feel kind of embarrassed saying this, since I sound like a LinkedIn golem. But anyway...
There's a phase transition, I think, from 'we slowly and expensively make a tool or an output that is supposed to work right the first time, but still doesn't, because bugs and errors are a thing, so we have to go back and slowly fix it' to 'fast, disposable code that is way cheaper but buggier, less well thought out but more easily fixed and altered'. This is a case of old standards being applied to a new paradigm. And there are people using the new tools in the wrong way, too. But this is the new way of doing things, and it's how more and more things will be done, just as water flows downhill. Artisanal code will be like artisanal clothing.