Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.

Anyone else get annoyed (or at least roll your eyes), when someone says they "built" X with AI? It comes off to me as stolen credit, and ironically highlights (for me) the diminishing human input in the value chain.
For context, I am referring specifically to work-related scenarios, where X is not some output of design and engineering, but actually the result of a few prompts. In the past few weeks, the overwhelming majority of the people I work with seem to have completely and openly outsourced their jobs to Claude.
While I've been using ChatGPT since it first came out to help with work (from research, to QA, to drafting documents and communications, etc.), I've always been very careful to keep its input in any final product discreet. Those who have left their AI outsourcing lazy and obvious (including tech leaders and C-levels who should know better), like leaving em dashes and AI fluff in emails, have previously been quietly mocked for their 'boomerish' obliviousness, or resented for the obvious lack of engagement.
We had a C-level a few months back write a corporate communication that was meant to be both encouraging and strategically informative about some top-level changes, and it was clear AI slop. It ended up achieving the exact opposite: engendering resentment at someone who was obviously too lazy to tell us themselves.
But anyway, in the last three weeks it's flipped. Everyone gets onto every call proudly announcing the latest thing they 'built' in Claude, while pre-apologizing for errors in the work in progress, as if it couldn't possibly have been reviewed first.
These are dashboards, briefs, presentations, and documents: all things that would previously have been expected to be completely error-free, and important enough to make strategic decisions from. Now it's all "Yeah, I had Claude do my work and I haven't even checked it yet," but said as a flex, because currently 'Use AI' is a key KPI. It all seems so short-sighted, and it makes me sad.
Not at all. It's a new tool, they used the tool to do a thing.
Do you get annoyed when someone says they "built" a PC? They didn't make a single part, they probably don't even know what a "PCB" is! They just slotted it together like Lego.
Do you get annoyed when someone says they "took" a picture even though they 1) have never developed a negative in their life and 2) all phone cameras these days run a shitload of post-processing on the photo after it's taken? They barely did anything!
As they should be! Laziness is bad.
This person is clearly not a good leader, AI or not.
This is pathetic, one of the best parts of AI is that it will relentlessly hunt for errors. It'll happily read the same report 100 times looking for typos and never get bored.
This is retarded and bad
Building a PC is a technical task apparently beyond the median human being. What IS worthy of annoyance would be someone talking about how they built a PC by buying an Alienware gaming laptop.
The same is true for creating half-decent representational art, producing a (wrong!) hello-world-level C++ or Rust program, writing a coherent and consistent 20k-word story with a meaningful theme, or reading a 100-page narrative document and recalling mid-grade detail. I think that comparison leaves too much on the table.