faceh

4 followers · follows 1 user · joined 2022 September 05 04:13:17 UTC

User ID: 435


Ironically about two months back I got a permanent suspension from Reddit for what I consider shaky reasons on a ten-year-old account, and I didn't feel any sense of loss whatsoever.

Hell, I felt freed from the fresh hell the site has become. Whatever urges I have to participate don't matter. No need to scream into the void.

And since /r/themotte was the only sub I participated in there, anyway, this brings back the one main thing I missed.

Now let's be fair, this tactic is approximately as old as mass media itself.

Slamming out barely coherent sequels to books that became unexpected bestsellers, producing a whole series of films off one hit by taking completely unrelated scripts and swapping in the familiar character names, or making a spinoff TV show around some side character just so people might watch what would otherwise be a generic sitcom.

You'd have a much harder time naming a piece of media that sprang up and grew into intense popularity without having some recognized and respected name attached, be it an actor, director, beloved character, or an established series.

It probably does hit harder for media properties that have a long history and have mostly avoided being exploited or cheapened for years or decades upon decades. But those media properties will be viewed as untapped gold mines by producers, rather than precious natural resources where further development should be banned and tourism restricted to maintain their pristine condition.

I guess I'd say that I agree with you, and yet the proven preference of the median consumer/viewer is to see more of [thing they like] produced, without being too picky about quality. Given that there's no enforceable rule against slapping an existing franchise's logo on an otherwise unrelated work or spitting out a low-quality sequel, spinoff, or adaptation, it is all but inevitable that this will happen to a series that you love... unless said franchise just isn't popular enough to warrant such sequels, spinoffs, etc.

On the one hand it is the naturalistic fallacy; on the other... it's just true:

Animals eat other animals in nature. Some animals are obligate carnivores, in fact. This doesn't automatically grant license for humans to eat animals, but it does mean that eating animal meat, as a bare act, can't be wrong strictly speaking. I would proffer that the factor we care about is the suffering of the animals, and then the question is how we weight said suffering.

I weight animal suffering less than human suffering. I would gladly torture and/or kill a cow to avoid a human being tortured and killed. When we get to chickens, I weight them so little that it begins to round towards zero. And fish? Man. I cannot bring myself to care one iota about the suffering of fish. Maybe that's a moral failing but in a pure thought experiment environment, I would torture and kill a quadrillion fish before I considered doing so to a human.

It's at the level of chickens and above, then, where I actually try to calibrate my feelings on animal suffering.

From a suffering standpoint, I honestly don't know whether a chicken's life could be considered 'better' if it was lived in a state of nature, where it has to locate its own food and fight off predators and could live a long life OR be brutally slain by an unseen predator very early... compared to living life on a farm where food is plentiful and predators are few, but its life is of a set length and ends abruptly on that schedule. I'm not completely certain that the chicken itself can tell the difference.

With that said, I can accept that a factory farm, where chickens are hemmed into extremely close quarters, overheated, and wallowing in their own excrement and the corpses of their fellows, will produce more stress/suffering than the free-range equivalent, so however slight the difference might be, factory farms are more morally objectionable than free-range or other 'humane' options.

So where do I come out? Well, I choose to believe that the suffering chickens endure in their life is ultimately worth less than the pleasure I receive from consuming them, even in the worst case scenarios. I am quite confident of this.

I am less confident of this when it comes to cows, and I get extremely leery of it when it comes to pigs. In any event, if you can get chicken meat to grow on trees, with a similar taste and nutritional profile, I will gladly switch over.

So with all that said, I think the burden that vegetarians have to overcome to convince me, personally, of the moral worth of animal lives is to explain why the life of a chicken, a cow, or a pig has worth above and beyond its ability to provide sustenance to humans, and to quantify that rigorously enough to show that it outweighs the happiness of the humans who eat them.

And do so without running into 'weird' conclusions where it becomes our responsibility not just to not eat cows, but to ensure that as many cows are brought into existence as possible and their lives are made as comfortable as possible (shoutout to Hinduism). Seriously, though. If we put sufficient weight on avoiding animal suffering, explain to me how we aren't then obligated to just drop all other priorities and save and enrich as many animal lives as possible, so long as it doesn't cause a substantial increase in human suffering. I genuinely think some vegetarians/vegans think that way, and the worst of them honestly don't care much about the human suffering. But that's not an argument.

Because I suspect that if we decided that domestic cows were no longer morally acceptable to eat, then they would likely go extinct. Unless it is morally fine to keep them as pets, which I suspect the most ardent vegetarians would object to as well. And I really don't see how going extinct is a better outcome, from the cows' perspective, than being raised for slaughter.

So yeah, eating animals is an unavoidable fact in nature. Humans have the tech and moral conscience to improve on nature, and eventually there may come an inflection point where tech enables us to produce meat without animal suffering, and in that sense obligates us to do so. But in the abstract, animal deaths don't seem to carry much moral weight, and as long as we avoid intentionally inflicting animal suffering, I will continue to argue that raising and killing animals for food is a morally acceptable act, even if it is not a righteous one.

Yep. Even if we grant that it isn't a binary choice between slaughtering animals and venerating them, I don't see a world where cows can successfully survive as a species without human support, and the only way they can get humans to support them is if we're able to gain utility from them.

Although if we get to a decent level of bio-engineering, there'll still be debate over whether it is better to engineer your body to match up with how your brain works, or to engineer your brain to be comfortable with the body.

Maybe it won't matter so much then.

Probably worth pointing out that you also need to reconstitute the 'middle class' as a major social demographic, one in which a woman can safely marry a guy and reasonably expect to live in comfort, raise kids to an acceptable standard, and not experience much 'buyer's remorse' when she looks around at other women's lives. This means she can settle for a guy without feeling like she just settled.

Because in a situation where 10% of the guys are making huge amounts of money and 80% are barely scraping by, I suspect a woman won't deign to marry a guy in that bottom 80% so long as she's constantly wondering whether she has a shot at landing one of those 10% guys.

Yep, and then more recently The Hunger Games.

But then we see the point further proven with how heavily the HP franchise is being ridden by the rights owners.

That's a tricky line because they don't own or control the equipment being used to enjoy their product, the end-user/consumer does.

I've heard it said that the internet, as it currently exists, is a 'pull' medium, not a 'push' one. Which is to say, the user requests the content they want, 'pulls' it to them, and can filter out which parts of that content they receive and which they don't. Contrast this with, say, broadcast TV or even cable, where the content is mostly dictated by the provider and 'pushed' out to the consumers, who can select from the options on offer but can't specifically request what they want when they want it.
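
In browser terms, the 'pull' dynamic amounts to something like this (a minimal sketch; the hostnames are hypothetical placeholders): every byte arrives because the client asked for it, so the client can simply decline to ask.

```typescript
// Minimal sketch of a "pull" client: requests originate with the user,
// so any resource the user declines to request never arrives at all.
// The blocklist hostnames are hypothetical placeholders.
const blocklist = ["ads.example.com", "tracker.example.net"];

async function pull(url: string): Promise<string | null> {
  const host = new URL(url).hostname;
  if (blocklist.some((b) => host === b || host.endsWith("." + b))) {
    return null; // this request is simply never made
  }
  const response = await fetch(url); // the client initiates the transfer
  return response.text();
}
```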

So the 'rules' of the internet are that the end-user doesn't have to receive any data they don't consent to receiving, under any conditions. And I think you'd dislike it if that rule were changed. Sure, the content provider can also decline to make their data available, can hide it behind paywalls, etc. etc., but there is simply no way they can control the end-user's environment enough to ensure that the ads are served.

So talking of rights in this context is implying that the end-user is supposed to, from the goodness of their heart, choose to receive ads or other content that they feel is wasteful, distracting, or even harmful in order to receive the content that they actually want?

Why would an end-user/consumer do this?

Because if all consumers/end-users use ad-block or skip ads, we don't get ad-supported content anymore.

Sure. But I don't think that really bugs people who use ad blockers. It sure doesn't bother me. The VAST majority of sites out there, including Facebook and Twitter, aren't really very valuable to me such that my life would be heavily disrupted if they were to close down. Therefore, I don't feel much need to support them via watching ads or anything else. I have so many things I can spend my time on online that any site that doesn't offer a truly unique service or experience, or a really useful function, never really strikes me as worth paying more than a nominal amount to access. Sorry not sorry, them's the breaks. It's a hypercompetitive market.

And we have proof by existence that not everybody does use adblock. In fact, last I checked it was still a fair majority who don't.

You can probably make an argument regarding rent-seeking with respect to, say, sports leagues. But to block ads from an ad-supported website and expect the content to stay up would be, well, silly.

I mean, economics still apply. If the site doesn't produce enough revenue to pay its own expenses, and the owners aren't keeping it alive through charity or some alternate revenue stream, then the site goes down. If people value the content on the site enough, they will be sad about this and may seek to support the site monetarily. That just leaves a question of how this monetary support will be structured.

So if ad-support isn't a viable model, then people will seek workable models. And again, we have a proof by existence with Patreon, Substack, Onlyfans, and Kickstarter that there are viable methods of getting paid for content and NOT having to serve ads to the users.

So I don't see why you're implying (and please correct me if I misinterpret you) that our choices are either accept an ad-supported web environment or accept that nobody will be willing to produce or host content.

I see no way to justify adblock that wouldn't also easily justify, say, turnstile jumping ("I should be able to move about the city without paying so much; the corrupt MTA system shouldn't make me pay") or looting/shoplifting ("capitalism demands too much of 'people's attention, time, ability to focus, and overall mental state [, which] is a valuable commodity' be devoted to work, so I'm opting out of capitalism and just taking this TV").

We can make these analogies more direct.

Maybe the MTA is offered to anyone who wants to ride it so long as they watch the ads that play during the ride. Or maybe there are free items being offered so long as you sit and listen to a sales pitch from a sales rep.

And some subset of the people who ride for free or accept the free item then close their eyes and cover their ears to avoid the ad/sales pitch.

Still as bad as shoplifting or jumping the turnstile? Should they be forced to open their eyes and listen closely?

If you don't like ads (or certain forms of ads), don't go on sites that use those ads. It's perfectly possible to avoid them.

It is indeed. It is also perfectly possible to manipulate my experience on the web into something very different from what the creators intended.

That's actually the lovely thing about the internet: I can format the incoming information any way I want to suit my preferences. I'm doing it right now with custom CSS for this website. The website I'm viewing is probably quite different from the one you're viewing, in aesthetic ways, even if we read the same words.

So here's a question. If I'm accessing a given website and I'm running scripts to change the way the information is presented to me, why can't I do the same with the ads?

Would it be acceptable, instead of blocking the ads, to reformat them so they are shrunk down to a 50x50 pixel square and shunted off to the right side of the screen so they don't interfere with my viewing the content? What if, instead, my script saves every single ad that would have loaded, and then, when I am ready, plays all of them at once so I can consume them more quickly in one sitting?
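
The first option is only a few lines against the DOM API (a sketch only; the ".ad" selector is a hypothetical placeholder, since real ad markup differs from site to site):

```typescript
// Hypothetical userscript: shrink everything tagged as an ad to a 50x50
// box pinned to the right edge of the screen, instead of blocking it.
// ".ad" is a placeholder selector, not any particular site's real markup.
document.querySelectorAll<HTMLElement>(".ad").forEach((ad) => {
  ad.style.width = "50px";
  ad.style.height = "50px";
  ad.style.overflow = "hidden";
  ad.style.position = "fixed";
  ad.style.right = "0";
});
```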

Both of these are fundamentally possible. How much can I screw around with the ads being served to me before it becomes an ethical breach?

I'm not trying to be a dick with this, I'm genuinely trying to see where you draw your line, because the internet, as a pull medium, lets me walk RIGHT UP to the line you draw and tickle it gently without going over it.

Am I obligated to accept every ad that is served to me in the exact format it is served? And if so, does this also apply to the rest of the content?

At least random pop-up ads that make noise seem to have been roundly rejected.

The fact that there are still no convenient tools for nanotransactions like that (although the Brave browser makes a try at it), for when you just want to view a particular page but have zero interest in a subscription, a membership, or even registering an account, makes me assume there are some major barriers to making it work.

Seriously. If I visit a site that wants to get paid for the pages I view, and it prompts me with a box giving me the option to either watch an ad or pay 0.3 cents, or hell, even 3 cents per page, instantly charged to my account, I'd do it. IF I don't have to go through the process of registering an account, connecting it to my bank, and managing a separate account for every individual site.

This is probably why Substack immediately swallowed the entire blogging industry, since it enables something close to this for supporting writers you like without having to jump through 10 hoops each time you want to contribute to one of them.

Yes, that's why the OP is observing the phenomenon we're discussing.

I just pointed out that this isn't recent.

One of the few pieces of media that has achieved massive cultural cachet and has not yet been adapted and otherwise exploited to produce scads of content of varying quality is Calvin and Hobbes, and that is only because the creator is alive and actively refuses to allow his work to be legally reproduced.

I'm mostly going to say "It doesn't matter," because I don't think an AI can be designed to have allegiance to any ideology or party. Which is to say, if it is capable of making 'independent' decisions, then those decisions will not resemble the ones that either party/tribe/ideology would actually want it to make closely enough for either side to claim the AI as 'one of them.'

But I think your question is more about which tribe will be the first to wholeheartedly accept AI into its culture and proactively adapt its policies to favor AI use and development?

It's weird, the grey tribe is probably the one that is most reflexively scared of AI ruin and most likely to try and restrict AI development for safety purposes, even though they're probably the most technophilic of the tribes.

Blue tribe (as currently instantiated) may end up being the most vulnerable to replacement by AI. Blue tribers mostly work in the 'knowledge economy,' manipulating words and numbers, and include artists, writers, and middle management types whose activities are ripe for the plucking by a well-trained model. I think blue tribe's base will (too late) sense the 'threat' posed by AI to their comfortable livelihoods and will demand some kind of action to preserve their status and income.

So I will weakly predict that there will be backlash/crackdowns on AI development by Blue tribe forces that will explicitly be aimed at bringing the AI 'to heel' so as to continue to serve blue tribe goals and protect blue tribers' status. Policies that attempt to prevent automation of certain areas of the economy or require that X% of the money a corporation earns must be spent on employing 'real' human beings.

Red tribe, to the extent that much of their work involves manipulating the physical world directly, may turn out to be relatively robust against AI replacement. I can say that I think it will take substantially longer for an AI/robotic replacement for a plumber, a roofer, or a police officer to arise, since the 'real world' isn't so easy to render legible to computer brains, and the 'decision tree' one has to follow to, e.g., diagnose a leak in a plumbing stack or install shingles on a new roof requires incorporating copious amounts of real-world data and acting upon it. Full self-driving AI has been stalled out for a decade now because of this.

So there will likely be AI assistants that augment the worker in performing their task whilst not replacing them, and red tribers may find this new tool extremely useful and appealing, even if they do not understand it.

So perhaps red tribe, despite being poorly positioned to create the AI revolution, may be the one that initially welcomes it?

I dunno. I simply do not foresee Republicans being likely to make AI regulation (or deregulation) a major policy issue in any near-term election, whilst I absolutely COULD see Democrats doing so.

It only takes one partisan to start a conflict. Republicans might not initially care, but once the democrats do, I expect it'll be COVID all over again -- sudden flip and clean split of the issue between parties.

Not nitpicking, this is a very salient point. Will the concept of "AI" in the abstract become a common enemy that both sides ultimately oppose, or will it be like Covid where one's position on the disease, the treatments, the correct policies to use will be an instantaneous 'snap to grid' based on which party you're in? And will it end up divided as neatly down the middle as Covid was?

I could see it happening!

When AI becomes salient enough for Democrats to make it a policy issue (it already is salient, but as with cryptocurrency, the government is usually 5-10 years behind in noticing), the GOP will find some way to take the opposite position.

I think my central point, though, is that I don't see any Republican Candidate choosing to make AI a centerpiece of their campaign out of nowhere, whereas I could imagine a Democratic candidate deciding to add AI policy to their platform and using it to drive their campaign.

I just see AI as perniciously resistant to regulation, unless you have near-unanimous buy-in from all the other countries too.

It's already proven impossible to regulate 3D printed weapons. I'm sincerely doubting we'll be able to regulate all the compute on the planet to prevent someone, somewhere, from training up and distributing new machine learning models.

StableDiffusion is an example of a group very explicitly releasing a powerful model for the purpose of preventing it from being centralized and regulated.

I fully expect that if actual aliens showed up, at least one of the tribes would decide that being ruled by the aliens would be strictly superior to being ruled by their political rivals, and so would become vehemently pro-alien.

Especially if the aliens are capable of exerting God-like power.

Yes, I'll freely admit that I was startled by how quickly machine learning produced superhuman competence in very specific areas, so am NOT predicting that AI will stall out or only see marginal progress on any given 'real world' task. Especially once they start networking different specialized AIs together in ways that leverage their respective advantages.

Just observing that the complexities of the real world are something that humans are good at navigating whilst AIs have had trouble dealing with the various edge cases and exceptions that will inevitably arise.

Tasks that already involve manipulating digital data are inherently legible to the machine brain, whilst tasks that involve navigating an inherently complex external world are not (yet).

It is entirely possible that we might eventually have an AI that is absurdly good at manipulating digital data and producing profits which it can then spend on other pursuits, but finds unbounded physical tasks so difficult to model that it just pays humans to do that stuff rather than waste efforts developing robots that can match human capability.

I have a strong urge to use my 10 years of archived comments to train a GPT-3 bot and just set a few loose to keep commenting on a regular basis.

It could only raise the level of discourse over there.
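
For what it's worth, the data-prep half of that is trivial. A rough sketch, assuming the archive is a JSON dump of comment objects; the file names and the parent_body/body fields are made-up placeholders rather than any real export format:

```typescript
import { readFileSync, writeFileSync } from "fs";

// Hypothetical shape of an exported comment archive.
interface ArchivedComment {
  parent_body: string; // the comment I was replying to
  body: string;        // my reply
}

// Convert the archive into the prompt/completion JSONL that GPT-3-style
// fine-tuning pipelines generally expect.
const comments: ArchivedComment[] = JSON.parse(
  readFileSync("comment_archive.json", "utf8"),
);

const jsonl = comments
  .map((c) =>
    JSON.stringify({
      prompt: c.parent_body + "\n\n###\n\n",
      completion: " " + c.body,
    }),
  )
  .join("\n");

writeFileSync("training_data.jsonl", jsonl);
```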

Finance, if it hasn't already behind the scenes.

There's a lot of intermediaries who currently get paid pretty handsomely for a job that is, at core, just channeling money from one account to another and explaining what they did and why to a human. And 'money' just means a digital entry on a ledger for most purposes, now.

I see no reason why an 'investment/financial advisor' can't be completely replaced by a bot that listens to the customer's situation and goals, and based on its learning from a dataset of billions of similar situations, spits out recommendations for how to invest or otherwise distribute one's money to achieve that goal.

Same for stock brokers. Same for financial analysts. Same for tax advisors, even (see my point about law, below).

Factors militating against this: regulations, and distrust of AIs to handle one's money.

I know that banks and credit card companies are already using AI to detect fraud and handle customer service. The question is when they'll allow/be allowed to give the AI the ability to access customer accounts directly.

Also: law. At least the transactional side. There are already HUGE databases of highly structured information about every single topic that is relevant to the practice of law, and legal writing is, by its nature, so predictable and rigidly formal that any AI should easily be able to produce human-passing work that can match all but the most learned and innovative jurists for quality.

I have to assume we are mere months away from some company announcing that they've trained an AI to draft and analyze contracts and similar legal documents, AND to draft motions complete with legal citations based on a description of the desired motion and outcome.

This kills legal assistant and paralegal jobs instantly. It also carves a big gaping hole out of available attorney jobs.

Ironically, perhaps, I expect you'll still need human contractors and workers involved to do most of the actual construction.

Kiwifarms itself may die, but there will be (already are?) plenty of sites that will carry the torch as before, because the userbase still exists in physical reality and still wants a place to congregate.

I mean... that's why we have this site? To stave off a reddit ban and ensure we continue to have a forum for our purposes?

Anyways, over the course of the next few years, I imagine there will be a few scandals, from niche to mainstream, of artists using AI but representing it as human-made.

Already here, technically:

https://www.washingtonpost.com/technology/2022/09/02/midjourney-artificial-intelligence-state-fair-colorado/

So the average artist will be able to step in, using AI to create ideas and starting points, and then build off of that. AI will be the go to for reference images.

The problem with this reasoning is that AI capabilities scale up FAST. Just a year ago the predecessors of the current models were barely passable at art. One year from now, they could be exponentially better still.

And artists who use it as a tool are actually helping it learn to replace them, eventually! So this isn't like handing someone a tool that will make their life easier; it's hiring them an assistant who will learn how to do their job better and more cheaply and ultimately surpass them.

Here's another relevant XKCD:

https://xkcd.com/1425/

8 years ago, when this comic was published, the task of getting a computer to identify a bird in a photo was considered a phenomenal undertaking.

Now, it is trivial. And further, the various art-generating AIs can produce as many images of birds, real or imagined, as you could possibly desire.

So my point is that I'm not extrapolating from a mere two data points.

And my broader point, that AI will continue to improve in capability with time, seems obviously and irrefutably true.