
iprayiam3

3 followers   follows 0 users
joined 2023 March 16 23:58:39 UTC

User ID: 2267


Last week there were a few ‘performance piece’ top posts that used AIslop to demonstrate a Goodhart’s-law-adjacent point: that ‘effort posting’ is too simplistic a concept.

My reading was that they were an uncoordinated but aligned demonstration that AI can produce the facsimile of a top post while entirely missing the point of a discussion board by and for people.

Though unstated, I am quite confident that the implied intention of both was to make transparent, through the resulting dichotomy, the need for a return of the Bare Links Repository (not to be confused with unrelated calls for an unprecedented Bear/Lynx Repository).

(Aside: the original BLR was likely only retired out of jealousy at its success by a sore mod team, and because of sock-puppeted smear campaigns by one Julius Branson.)

Of course nobody wants uneffortful top posts, but a BLR is something entirely different.

Without hashing through its obvious differences from AIslop, tl;dr: they are fundamentally mirror cases of ‘low effort’, where the former is earnest in its low effort and points outward, while the latter is disguised and points inward.

The fact is, the BLR in its return would bring necessary life to this forum and counteract the slow momentum erosion the site has suffered since losing Reddit’s network effect, all while not compromising the rules.

The average LLM is more trustworthy than the average Twitter or Reddit commentator, though for now I would hope the Motte does better.

Again, my primary objection is not with the 'quality' of the AI output.

Hey, I asked ChatGPT to do a vibes check on your comment. It pointed out these objections, which look sensible to me. Why ought I disregard them?

In other words, hey, can you talk to ChatGPT for me?

I do agree that it’s not how it’s conventionally used, but I think it’s better. Slop as a judgment of writing quality is ‘slop of the gaps’ as LLMs improve. But the fundamental issue, that nobody cares about your prompt engineering, will remain.

AI as a writing and editing tool is one thing (I still think it’s a double-edged sword that leans negative, but that genie can’t be put back in the bottle, so there's no use debating it). What makes AIslop, imo, is not the quality of the AI output but the motion of:

“I asked AI x and here’s what it said…”

Where the human has contributed nothing more than the prompt, and the substance of the piece is what some LLM had to say about the prompt.

It’s slopped because it’s just been ladled out into your bowl without much more effort.

It’s not about the content; in fact, that’s a red herring. It’s the ‘prompt’. What is being criticized is the implication that there’s something interesting or even contributory about having typed in a particular prompt and seen what comes out. Everyone can do that for themselves.

This kind of shit is all over Twitter. “I asked grok…” is the most tediously vacuous and self-indulgent post possible.

Suppose I'm risking being late and waterlogged for a very demanding interview, nearly guaranteeing I won't get the job, a job which will save many lives if done well, and which I am especially qualified to do right.

You've added in the factor of saving multiple lives instead of one life (at the cost of a nice suit), which is saying something different from the original. The original means to point out the moral obviousness of saving the child at very little real cost.

Yes, I know; my point was in agreement with yours. That's why I said the original is an 'un'-trolley problem. My point in describing some additional opportunity cost was exactly to illustrate that opportunity cost ruins the thought experiment.

And that's why it has very little to say about foreign aid or most other real world charitable activities that are abstracted from time and place. Because outside of immediate and present opportunities (like saving a drowning child right in front of you), opportunity cost does have to be considered.

And as you've agreed, it becomes different from the thought experiment, and thus the thought experiment is no longer relevant.

And at the end of the day, this is the problem -- I haven't spent enough time reading the literature responding to it, so hopefully this critique is already well documented -- this is an un-trolley problem. It's designed so that there's absolutely no opportunity cost, but it's then used to imply that the opportunity costs of other scenarios are handwavable.

If I'm walking by a pond where there's a drowning child, then in all likelihood rescuing that child is the most valuable thing I can do in that moment, and the ruin of a $1k suit that I'm already wearing is a sunk cost.

But this doesn't extend to prove that for some future fungible time and money there's a single best thing to do, and thus a moral imperative to have it done.

As soon as we add any actual opportunity cost to saving that child or ruining the suit, the parsimony of the aesop falls apart. Suppose I'm risking being late and waterlogged for a very demanding interview, nearly guaranteeing I won't get the job, a job which will save many lives if done well, and which I am especially qualified to do right.

At that moment, it just becomes a regular trolley problem, with a little bit of forecasting mixed in, and there's nothing really to glean from it.

If alternatively we take the most superficial lesson from the problem: We should help others when we are able, at a cost to ourselves, even when we aren't physically near them. Then sure! It's a great reminder. And it has just about nothing to say about government spending on foreign aid.

Yes but that’s why we had a bare links repository.

The volume of effort posts has been diminishing anyway.

Bare links and AIslop are routes toward the similar ends you described, but it’s not the outcome alone that makes them bad. It’s that AIslop is an inferior low-effort entry point into a topic, for the reasons I described.

Now, ideally we would have nothing but effortful and timely top posts, sure. But my point is that in the event that someone wants to juice the conversation without the effort post, the bare link is a far superior, more earnest, and less empty way to do so.

That said, of course bare links as top posts are bad, roughly on par (well…) with AIslop topposting. But nobody is advocating for that. The people are asking for the repository back.

If we want an experiment, let’s have the BLR and an AISlopTopShop that is exactly the same, but for AI posts. Let’s see which produces more fruit, while keeping the rest of the CWR thread clean.

In that (perhaps quite likely) eventuality then forums and social media as a concept are dead. AIs talking to AIs while people nod and curate them basically destroys the platonic purpose of social media.

This is like if you brought photographs to a painting club and claimed that they expressed what you wish you could paint, better than you could paint it yourself. Can you see how that might satisfy an itch you personally have but be thoroughly uninteresting to the painters there to paint?

Yes, the existence of photographs and digital tools has fundamentally transformed art, and even traditional methods can’t really exist outside of conversation with them to some extent. Yes, AI has changed the nature of written discourse.

But no it’s not a good reason to dump AI slop and say ‘discuss…’

I am sure that, now having been convinced, you will join me and the rest of the rising chorus to return the Bare Links Repository to the Motte.

And this is why daesch and self made human are wrong to want AI slop here. The purpose of a human forum is subverted when top posts are AI generated text walls.

I say we bring back the bare links repository as a palate cleanser to this new trend. It’s the opposite of ‘I asked ChatGPT and here’s what it said, copied and pasted’.

It is brief where AIslop is verbose. It doesn’t dress itself up as original thought or even a point of view. It doesn’t claim to be effortful. Most of all, it points outward instead of inward: toward an actual external idea, rather than reposting an ephemeral private chat.

Leaving behind the BLR was the greatest mistake of the Motte, nay of the rat sphere (standing among other mistakes like trans murder cults and founding an entire movement on fanfiction of kids’ books), and it is time we correct this blunder.

If this post gets 20 upvotes, the mods will have no choice but to retvrn to the glory of the BLR.

I find its response adequate. It is presented without any editing.

Copy-pasting AI content is low-effort nonsense. “I asked [AI] model…” is equivalent to “I had a dream where…” in terms of being an interesting thing to talk about.

But here's the problem. Before we even get to an AI that can displace all blue-collar work, let's magically assume away, even for a super narrow slice of time, whatever reason that won't be the case. Such an AI will also collapse a great deal of the SaaS software business, which is itself extremely economically disruptive. Once an AI is generally that capable, a lot of differentiated software becomes useless. Already I see many folks trying to sell me lipstick on top of the same 3 AI models. There's quickly decreasing utility in the UX if it's just a pass-through to an agent.

No, I don't think we're at generic helper bot yet.

There is no world in which the rich let everyone starve because it would lead to an extreme collapse in demand and a deflationary spiral that would quickly bankrupt them.

and

will cost negligible amounts to feed, house and clothe first-world publics

and

But capitalism, or at least this current form of it, is going to end

seem at odds. The end of capitalism is also the end of needing consoomers to keep the rich rich. Even if the cost of keeping them alive materially is driven down to zero, they are still going to compete for land, political voice, and social hierarchy, without providing anything back. It might be a peaceful transition, but as long as we are earthbound, it seems this scenario would make more than a few million humans completely negative value. Transhumanism would further make the billions of humans nothing more than space wasters.

Sure, my point is that in my corner of the world I’m already seeing signs of breakdown. The bubble is extremely fragile because it’s self-consuming. I am stuck being asked to sign contracts and make decisions on software that I have no confidence will remain viable, or a front-runner, by the time we get fully implemented.

The rate of change is already too fast and unpredictable to make business decisions on anymore. The landscape’s moving faster than sales cycles.

Well, this circles back to my point. If we got to that, then we'd have such a fast and powerful AI that software isn't even on our minds. But even as we get closer to that, the entire SaaS ecosystem will start to collapse. If fundamental functionality, from UX to API communication, relies on an LLM unbounded by underlying code bases, niche software vendors won't have anything to differentiate themselves.

I think we are already starting to see some collapses in the space I work in, where each vendor is competing to be nothing but a ChatGPT terminal with some lipstick.

I'm being somewhat hyperbolic, typing this as I sit through yet another agent presentation. Basically every app is coalescing around an agent that automates everything within itself and/or across other tools it integrates with.

It's a mad rush to overturn UI into LLM chat flows, constantly chased by general browser agents that can do the same.

My greatest fear at the moment is that we will reach a stage where people start designing UI for AI agents rather than humans, at which point computers become truly incomprehensible.

This is the Star Wars Future. Where computers become so incomprehensible people have basically unplugged, and we have to have special robots who talk to them for us. Not the worst possible eventuality.

AIAIAIAI

AI is going to maybe doom the world, but first it's going to doom the SaaS industry. We've all seen every company bumrush to build Generative AI into their tools whether it's useful or not.

But I am now dealing with the next wave. Everyone is pushing their AI agent. The thing is, there's an arms race between dedicated agents and generic agents. Browser agents can't yet make specialized in-tool agents redundant, but it doesn't look like a particularly long roadmap.

I think building / selling an AI agent offering is a fundamentally losing proposition. Having been thrown several different demos over the last few weeks, and managing a team that will be buying some of these tools, my biggest takeaway is that none of these companies can be trusted for a long-term partnership.

These folks are building sandcastles on the beach while the tide comes in. It's not just a bubble; the pace of change has already outrun buying and implementation cycles.

The age of SaaS solutions is going to cannibalize itself inside of the next 12 months, even if AI stalls today.

Is AI going to kill us all?! (Asking for a friend)

It’s fair to call out that the media and the left have poisoned the well of useful conversation and nuance with their hysterics. But at the same time, I’m fucking tired of this fully generalizable hand wave.

This is not even a response; it relies on a series of logical leaps that are entirely lazy deflections.

First, it requires jumping from the truism that political corruption is inevitable on the whole to the unfounded conclusion that in any given circumstance it is therefore inevitable.

It’s no different from identity politics that jumps from the tenable claim that racism exists to the ridiculous conclusion that any given scenario must have racism hiding in it, thus justifying any reaction.

It also requires treating all corruption as binary. Then, voilà, with a side of whataboutism, any concern about blatant corruption gets dismissed as aesthetics or naivety, or even argued to be actually virtuous, since it’s above board and therefore some kind of subversive transparency.

At the end of the day, this schtick is played out. It’s just the opposite side of the coin from TDS, and just as brain-rotted and empty as rhetoric.

I think in some cases it’s why the internet has become the hangout of choice.

What follows is speculative, but this feels to me like completely backwards causality. The internet didn't become a hangout because of the decline of third spaces; it's the opposite. People go to third spaces for things like:

  • It's a Schelling point for engagement
  • To get stimulation
  • To get information

The internet is much more efficient at all of these things. Unfortunately, efficient doesn't equal better at scale, and a bunch of 2nd- and 3rd-order benefits have been lost, to the point that the system of community is worse off for it in many respects.

But you can't return people to libraries by getting rid of the homeless. You have to get rid of high-speed wireless internet. The homeless are in these places because communities abandoned them, not the other way around.

We started the Hobbit, but my son got kind of scared when we got to the goblins (he's 6).

We've been working through the Chronicles of Narnia now. Treasure Island is in the queue.

So probably 2 hrs/day. Am I… underestimating people's daily time commitments?

I have four kids, all very young, so yeah, 2 hours a day for hobbies is not realistic for me. Although, if you're willing to count children's books, I read several hundred last year. I read with my kids almost every night. The older ones get chapter books, and the younger ones Dr. Seuss and such.

What happened is kind of a sad story. Kulak, you see, unsuccessfully attempted the hock… and the rest is as it is.

To this point, the two Kulak posts I most remember from the Reddit days are one where he had a foaming diatribe against sales as evil and soul-sucking, with a humorous unawareness that sales as a field was more than cold calling. He had worked one soul-sucking sales job and blindly extrapolated a nonsensical point out of his narrow angst.

The other was an incoherent, unworkable point of view about living on boats. Something like a suggestion that everyone should.

Either way, the guy has always been extremely narrow in his point of view and very bad at scaling or extrapolation, which is why defaulting to fedposting is his sweet spot.