The lack of response looks extremely bad when we consider how much aid has been poured into Ukraine and Palestine, AND tens of thousands of refugees have been pulled out of other countries' disaster areas (such as Haiti's) and housed on U.S. soil.
They should have C-130s airdropping supplies already. As it stands, Kamala hasn't even sent a tweet.
There should already be promises to put a couple billion or so dollars into rebuilding (i.e. what they claim they'll do for Ukraine once the war ends).
If the U.S. government can't even muster up the same kind of resolve and resources to rescue U.S. Citizens on U.S. soil due to a natural disaster, then unironically, they do not deserve to rule, full stop.
This is why it's such a horrible idea to remove all the slack from the system to spend on relative frivolities. When the need arises to spend your reserves due to an actual unexpected disaster, you don't have the change to spare.
No, I get that.
It's just that every epicycle they have to add makes it less credible to me.
It is one thing to point to some guy who inherited wealth built on the backs of actual slaves or exploitation, and say that maybe he doesn't deserve everything he has.
Quite another to point at somebody who just happened to be born into a civilization that was built in part on the back of slaves and through exploitation of weaker neighbors, and claim that just because his ancestors bled, died, and labored to build a nation so nice that everybody wants to move there he doesn't get to be proud of himself... and he also should feel guilt for all the people that were exploited to build the nation (which includes his ancestors, mind!).
I've said it elsewhere, the lesson of politics since about 2010 is "identity politics and racial grievances are a great way to get others to do what you want and give you their stuff."
Of course the end state of this is leftists revolting against nature. It always is. Some nations were bequeathed huge stores of natural bounty, some were not, and this determined their future courses to some huge degree. The only way to correct for this is to move that natural bounty around until every place on earth can obtain some kind of parity.
As stated, it'd be really nice if there were a sound case for why this won't change in the near future.
The jump to where we are was sudden and surprising, the next one could be as well.
To sum it up, to train superhuman performance you need superhumanly good data.
It isn't clear we need superhumanly good data. Humans can make novel discoveries if they have a sufficiently good understanding of existing data and sufficiently good mental horsepower to use that data, i.e. extrapolate from their set of 'training data' and accurately test those extrapolations to discover new, useful data.
It seems like we just need to get an AI to approximately Von Neumann level and if it starts making good contributions to various fields at that point we can have it solve problems that hold up AI development. We're seeing hints of this now with Alphafold 3 and AlphaProteo.
Right now, the one thing that appears to be a hard hurdle for AIs is navigating real-world environments, where there is far more chaos and the variables don't interact with each other linearly.
It can be difficult to see a true innovation coming when every single company starts slapping "AI Powered!" as a feature on their products, but I think the case that AI will make surprising leaps in the next few years is stronger than the case that it will inexplicably stagnate.
Other countries didn't succeed in becoming first world nations because Canada/America/the West's success is based on their exploitation. Simple.
Doesn't really work when you can see how Japan recovered from nukes and occupation, or Singapore vaulting to first-world status and becoming a beacon of civilization, with little apparent exploitation of other nations.
Works even less when you notice that places like Rhodesia and South America were pretty much first-world or close second-world countries right up until the Western influence withdrew.
Adding to the confusion, only the guilt is transmitted forward through time. For some reason, none of the credit for building a first world country follows.
The same people saying "You must feel bad for the horrible things your ancestors did" will not even skip a beat before saying "you can't feel pride for the great things your ancestors achieved." So conveniently you can't assume any credit for creating a successful nation, but you get to feel blame for what happened to any minorities or natives who suffered during its creation, just in case you thought those two factors might balance out the ledger.
I am utterly unclear as to the mechanism that allows blame to propagate forward through time and generations but doesn't allow credit and pride to propagate as well.
It'd make me feel better if someone could muster a rebuttal that explained with specificity why further improvements aren't going to be sufficient to breach the "smarter than human" barrier.
There's an existence proof in the sense that human intelligence exists, and if they can figure out how to combine hardware improvements, algorithm improvements, and possibly better data to get to human level, even if the power demands are absurd, that's a real turning point.
A lot of smart people and smart orgs are throwing mountains of money at the tech. In what ways are they wrong?
Yes, if the entirety of your 'twist' on genre conventions and tropes is that the evil forces are actually 'good' or justified, without taking that anywhere interesting in the story, you're probably being lazy.
I dunno, I've read the case for hitting AGI on a short timeline just based on foreseeable advances and I find it... credible.
And if we go back 10 years, most people would NOT have expected machine learning to have made as many swift jumps as it has. Hard to overstate how 'surprising' it was that we got LLMs that work as well as they do.
And so I'm not ruling out future 'surprises.'
That said, Sam Altman would be one of the people most in the know, and if he himself isn't acting like we're about to hit the singularity, well, I notice I am confused.
I personally struggle to trust people I consider untethered: MBA types, lawyers turned CEOs, politicians. Top 0.1-percentile autists must excel. In the absence of a grounding domain, they start demonstrating excellence in accumulating Power. Power for power's sake. Sam is a perfect archetype.
You know, I feel almost exactly the same way. I just have a seemingly inborn 'disgust' reaction to those persons who have fought up to the top of some social hierarchy while NOT having some grounded, external reason for doing so! Childless, godless, rootless, uncanny-valley avatars of pure egoism. "Struggle to trust" makes it sound like a bad thing, though. I think it's probably, on some level, a survival instinct: trusting these types will get you used up and discarded as part of their machinations, and not trusting them is the correct default position. Don't fight it!
I bought a house in a neighborhood without an HOA because I don't want to have to fight off the little petty tyrants/sociopaths who will inevitably devote absurd amounts of their time and resources to occupying a seat of power that lets them harangue people over having grass 1/2 inch too tall or the wrong color trim on their house.
That's just an example of how much I want to avoid these types.
Only recently have I noticed that either my ability to spot these people is keen enough that I can consistently clock them inside of one <30-minute interaction, or I've deluded myself into thinking I can detect them and I'm actually surrounded by them.
One of the 'tells' I think I pick up on is that these types of people don't "have fun." I don't mean they don't have hobbies or do things that are 'fun.' I mean they don't have fun. The hobbies are merely there to expand and enable their social group; they don't slavishly follow any sports teams, they don't watch any schlocky TV series, and they probably don't do recreational drugs (not counting, e.g., Adderall or other 'performance enhancers'), although they could probably hold a conversation on such topics if the situation required it.
(Side note, this is why I was vaguely suspicious of SBF back when he was getting puff pieces written prior to the FTX crash. A dude who has that much money and yet lives an ascetic lifestyle? Well, he's gotta be motivated by something!)
In social settings they're always present, schmoozing, facilitating, and bolstering their status... but you notice they never suggest activities for the group to engage in or expend effort bolstering other group members' status.
Because, I assume, they are there solely to leverage the social network to get something else that they want. And if it's not 'fun,' if it's not 'money,' and it isn't even 'sex' or 'admiration and praise'... then yeah, power for its own sake is probably their objective.
SO. What does Sam Altman do for fun?
I don't know the guy, but I did notice that he achieved his position at OpenAI not because of any particular expertise in the field or any clear devotion to advancing AI tech itself... but mostly by maneuvering his funds around so that he could hop into the CEO spot without much resistance. Yes, he was a founder, but why would he take a specific interest in THAT company, of all of them, to turn it into his own little fiefdom?
I think he correctly spotted the position at OpenAI as the best bet for being at the center of a rising power base as the AI race kicked off. Had things developed differently he might have hopped to one of the various other companies he has investments in instead.
Finagling his way back into the position of power after the Nonprofit board tried to pull the plug was a sign of something.
I admit, then, that I'm confused why he would push to convert to a for-profit structure and to collect $10 billion if he's not inherently motivated by money.
My theory of him might be wrong or under-informed... or he just plans to use that money to leverage his next moves. That would fit with the accusation that OpenAI is running out of impressive tricks and LLMs are going to fail to live up to the hype, so he needs to prepare to skedaddle. It DOESN'T fit my model of a man who believes he is going to be at ground zero when the silicon Godhead is birthed; if he really believes that superintelligence is somewhat imminent, he should be willing to give up ridiculous sums of money to ensure he's present at that moment.
Anyhow, to bring this to a head: yeah. Him not having children, him being utterly rootless, him having no obvious investment in humanity's continued survival (unlike Elon): I don't think he has much skin in the game that would allow 'us' to hold him accountable if he did something truly disastrous or utterly anti-civilizational. Who is in any position to rein him in? What consequences dangle over his head if he misbehaves? How much power SHOULD we trust him with when his apparent impulses are to remove impediments to his authority? The corporate structure of OpenAI was supposed to be the check... and that is going away. One would think it should be replaced with something that has a decent chance at ensuring good behavior.
The added irony is that the election of Obama was sold at least in part as the final nail in the "the U.S. is racist" coffin, by accepting a black president over another stodgy white guy.
Like the symbolic importance was there, even if we grant that not all racism would evaporate and in fact certain racists would be inflamed by his election.
The lesson that instead seems to have been imparted is "IDENTITY POLITICS ARE EFFECTIVE!" and Obama himself ended up fanning racial animosity. I had such a turning point at the "Cool Clock, Ahmed" moment, where he intentionally brought attention to a trumped-up racial incident on the side of the grifters.
We sure seemed (to me) ready to move 'past' deep racial grievance as a nation circa 2010, but I fear that it has instead turned into a spectacular method of forcing others to do what you want, so sociopaths will of course leverage this as much as they can.
Makes it sound like a bit of a cross between Borat and Bowling for Columbine.
While ultimately I think it isn't going to move the front of the Culture War forward because calling the left out on hypocrisy and lack of principles doesn't inflict much material damage, at least it shows the right how to fight.
If we assume full magitech then that seems like a viable solution.
But I've also read the book Blindsight, which posits the existence of a totally nonsentient (in the sense it has no self-awareness or internal dialogue) but superintelligent entity that simply evolved from the random permutations of the universe and its intelligence is literally just an 'emergent' result of its physical structure, and in a sense is inseparable from that structure.
That is to say the "mind/body" distinction pretty much doesn't exist for this thing in any sense. You can't just do 'brain surgery' to change its mind without potentially killing its body. And it is VERY hard to kill.
The book goes so far as to suggest that sentient beings are likely a tiny minority of intelligent life in the universe, as sentience is costly in terms of energy/computation, and mostly unneeded for survival, if you otherwise possess high intelligence.
This starts to blur the line between "natural force that doesn't care about your utility function" and "alien utility functions." I'm sure you could write up a theoretical 'cure' for this sort of thing, but imagine if it already had spread to and occupied the majority of the galaxy and was capable of undoing any cures you came up with.
If I were to imagine a major threat in the Culture universe, maybe posit a species/society that reached some level of near-equivalence with Culture tech, then decided to use their power to rewire themselves to remove their own sentience and make their own intellects a distributed, 'immutable' aspect of their physical structure so you cannot just hack their brain open to make changes. i.e. they make themselves as resistant to brainwashing/brain surgery as possible.
And now add in the parasitic angle: they intentionally work to make any other species/societies they encounter 'nonsentient,' without changing any other aspects of their minds. Just lop off the parts of the brain that generates sentience, because from their perspective, sentience is 'evil' or 'inefficient' and thus removing it is just a quick little surgery that no rational person would refuse.
Actually I realize this is basically just describing the Borg.
So yeah, maybe imagine if a society created "Minds" on par with those of the Culture, but these minds were basically running on Borg logic and were steadfastly devoted to 'peacefully' removing sentience from the universe by spreading their nonsentience through whatever means they can devise. Basically a hyperintelligent P-Zombie horde.
Indeed, that kind of matches with my thought above, about a society that shares the Culture's social mores except for one: "Do whatever you want at any time, but don't be self-aware while you do it!"
I am not certain the Culture wins a direct confrontation if the nonsentient civilization is equivitech and the Culture is fighting to preserve sentience. If Blindsight's logic is right, then the sheer added efficiency of nonsentience means they will be better at fighting, because they don't waste cycles reflecting on what they do; they just act on instinct at all times. I am positing that the Culture won't be able to buy them off or convince them to stand down.
Or if you want to amp up the challenge even more, accept Blindsight's logic that sentience is rare, and imagine that the Culture realizes that 90% of space around them is inhabited by these sorts of civilizations.
Indeed, now that I think about it, Banks' most optimistic assumption in writing his novels isn't so much that we'd manage to pull off friendly AI... it's that the other alien civs out there would, whether they're sadistic, friendly, or straight up hostile to everyone, at least be sentient, and thus one can deal with them through negotiation and social influence.
This is the case with the Gzilt in The Hydrogen Sonata, who actually almost joined the Culture as founding members but stayed out because they see themselves as a chosen people, their holy text being surprisingly scientifically accurate. In the Culture universe this could mean all sorts of things, including sponsorship by Sublimed (functionally godlike) entities.
Haven't read that one yet, but you just bumped my interest in reading it by like 30%.
and they insist on their ships running their own emulated minds sped up.
And I already like them a bit more than the Culture! The "chosen people" thing would rub me the wrong way but if your religious text actually seems to be bestowed upon you by a higher being, and holds up to scrutiny for eons, I'd have strong feelings about it too.
They truly are special, in a way many don't want to lose. The Culture's attempt to weaken their caste system releases awful tendencies kept in check and leads to absolute disaster.
Guess I'll have to get to that one ASAP too.
It seems to me that you just have a chip on your shoulder about the guy.
I have a chip on my shoulder about a few guys who regularly make subtly fallacious arguments in favor of a position they support but who will never actually defend those arguments when pressed by someone with subject-area knowledge. As in, I've had to watch time and time again when somebody points out the error in the logic or brings in their own, seemingly superior data and these people will ignore it entirely and/or shift to a slightly different position.
Intellectual cowardice is an ongoing pet peeve of mine, all the more so from parties who make their living on their analysis of reality. They seem to fill a niche a step above "blowhard cable news pundit," for people who want their priors confirmed but also want to think they're not being fed a line of biased tripe like the proles.
Again, this became blatant when Noah suddenly came to the realization back in October that YES, the Left houses a LARGE amount of antisemitism, something Conservatives/righties have been pointing out for a really long time. He's happy to believe that the right tolerates antisemites, and to condemn them for it, because that fits his preferred conclusions. Simple.
I went back to his feed again and now I had to read the gem "Nuclear is a niche product." Which is 'true' in the broad sense (it is niche because nobody is allowed to build it!), but then he declares solar the superior product, and when somebody reasonably calls him out on this he won't even deign to respond.
"Nuclear is a niche product! Solar is our best bet!"
"Wait, by any fair definition Solar is in fact very niche and will remain that way for years to come, what are you basing that on?"
Clearly he's just engagement baiting at this point, but again, this is all decreasing the quality of discourse, and somehow this guy makes his living by the quality of his input to discussions. Compare that to Zeihan's take on the same issue, which means we can actually compare their accuracy down the road.
I've been going through some of Zeihan's predictions and he's a typical doomer, with all the bad predictions that come along with that. E.g. he made a very strong prediction that China would collapse in 10 years, and I don't think we need to wait another 6 years to see if this comes true.
On the other hand, I've yet to hear a good counterargument that demographic collapse won't inevitably lead to certain nations experiencing massive internal chaos.
China currently depends on imports for its basic energy and food needs, and the primary value they have to trade is a massive labor pool and, more recently, massive pool of consumers. And they admit that this pool is about to shrink sharply because the country's TFR has been well below replacement for decades now.
The cascade of effects seems straightforward:
- Shrinking population leads to fewer laborers AND consumers of end products.
- This decreases their ability to provide value to the global market.
- Which in turn decreases their ability to afford imports for energy and food.
- Which immediately threatens internal stability, as they will have to divert a lot more labor back to their agricultural sector, i.e. deindustrialize.
- Which further decreases the labor pool available to provide value to the international markets, further hurting their economy.
- A bunch of people who got used to rising living standards suddenly see living standards crater.
- Unrest.
Which of these steps is wrong, or where can the CCP intervene to avert the end state?
He's been pointing out that the U.S. efforts to secure the seas for private vessels are likely to decrease, and with the Houthis continuing to interfere with shipping in the Red Sea there's already plenty of signs that this is accurate.
I see the Reddit comment, and yeah, he definitely blew the 'two year' prediction, and I'd not expect anything like that to occur even in the next two years, but I actually agree that we're in 'witching hour' times, where fat-tailed impacts can occur on short time scales.
Indeed, the main reason I'm not a full doomer myself is that I'm seeing two possible futures, one where AI and automation lives up to the hype and manages to usher in a new industrial revolution, and one where things spiral out of control before we get true benefits from AI and we end up in a deep global recession. I honestly couldn't tell you which is more likely, but I don't see a likely future where we kind of just muddle along on the current path without something giving way.
Noah seems to try to put the optimist spin on things but doesn't seem to actually want to engage with the doomer's case in earnest. He mostly seems to say "line has been going up in the past, I believe line will go up in future." Which is fine... but not useful.
I mean, being honest, that's my ideal for a libertarian society. Maximum openness (which also means allowing people to form consensual 'closed' sub-societies) but also maximal 'deterrence' against outside interference.
Live and let live, until they don't let you live, then you end their life (if needed).
I think the issue that comes up, on the one hand, is that a society of maximal openness usually ends up viewing outsiders as having the same value as insiders, because that's how you've organized the entirety of your society.
Like, how do you have a coherent definition of 'us' and 'them' when the whole ideology that you've built your society on is intended to remove that distinction entirely? You don't use such distinctions within your society, but how do you strictly define the boundary beyond which you do NOT extend the same courtesy?
And likewise, the problem of pre-emptive violence. When you can see with near-certainty that an outside force is going to attack (Russians massing troops on your border for a 'training exercise,' for example) and yet if you take action before the danger manifests you're sort of breaking your own rules. And if you can justify a pre-emptive strike, you can probably justify any other intervention, like targeted assassination and 'regime change.'
And now you're back to basically being Neocons.
They lie and blackmail Gurgeh into doing this mission for them, even though it is clearly doing Gurgeh considerable psychic harm.
Yes, ALTHOUGH I recall that Gurgeh was basically falling into a listless depression because playing games with no stakes was no longer satisfying, so in a certain sense, the mission was simply giving him what he wanted, and the Minds could be all but certain that he wouldn't be harmed.
As to whether they tricked him about how bad the society was or the narrator was reliable, I grant that's a clear self-aware critique of the Culture. Still, I imagine Banks' own ethics would conclude something like "you can judge a society by how it treats its worst-off members," where the Culture has nothing resembling poverty, whereas if we assume that the portrayal of Azad was accurate as to the existence of castes who were tortured at the elites' whim, then that alone justifies some sort of intervention.
Whether a full on coup and revolution was ethically defensible, I guess I'll leave that aside.
Moreover, the Culture's action against Azad is indisputably a case of unprovoked aggression. Azad have done absolutely nothing to the Culture. Azad aren't even able to do anything to the Culture. The Culture move in to destroy Azad purely because they find Azad's existence to be offensive to their enlightened liberal sensibilities.
You know, I think I've also gathered as subtext (or maybe it was specifically stated at one point) that some of the Minds get 'bored' with merely managing the Culture's society and economy and will challenge themselves by nudging other societies into joining the Culture simply to alleviate that boredom, and perhaps on extremely rare occasions they miscalculate and trigger real war. Which they KNOW they can win, but then the challenge is making it a 'just' war. Which leads into:
Consider Phlebas ended with its primary Culture character putting herself into cold storage until the Culture can mathematically 'prove' the war was justified, at which point she commits suicide as a kind of protest.
I was actually uncertain whether that particular portion of the book was Banks critiquing the Culture/minds for prosecuting a war despite being well aware of the costs it would incur, OR he was actually making a small jab at bleeding-heart liberals who want to enact change in the world but can't stand getting their hands even a bit bloody.
"You guys won the war and saved the day, but then couldn't stomach the actions it took to win unless your conscience could be mollified? Grow up."
yet something about it, something difficult to define but nonetheless there, feels wrong. That itching sense of wrongness is the point, it seems to me. Even if we struggle to define it, something here isn't right.
Agreed. And the answer for myself that I settled on is that humans in the Culture have no volition. They can't make anything meaningful happen that the minds aren't already planning. As we see, the minds will nudge or outright deceive humans towards a larger end goal. Humans aren't able or allowed to truly decide on the end goal. Yes, the Minds will put things to a democratic vote and 'abide' by the outcome, but the outcome itself is never in doubt.
So that feels like a subtle horror story to me. Humans are locked in a nature preserve and will never know if this was something they wanted or it was decided for them. That the walls are basically invisible and the guards are entirely benevolent doesn't change that.
From a storytelling perspective, you HAVE to make the Culture get a bit dodgy, or else you can't really derive conflict from a world of such abundance that is ideologically committed to nonviolence.
Yes, I haven't read all the books through to hear all of Banks' own self-aware critiques of the Culture's self-aggrandized superiority.
But the books I've read tend to paint the societies opposed to the Culture as complete nightmares, where any reasonable person, given the choice between the Culture and, say, the Idirans or Azad (or the Affront), would easily choose the Culture unless they were guaranteed to be in the upper echelons of the other societies.
I've wondered if there was a story that has the Culture encounter a rival power that matches their social mores in almost all but ONE critical way, and they abjectly refuse to compromise on that one difference for reasons that they cannot explain (and may not even know) but that is such a central, load-bearing aspect of their civilization that they simply cannot join the Culture if doing so would endanger that factor at all.
Banks certainly adds tidbits that make it pretty clear that the Culture is not literally perfect in every way. Sometimes there's even some hypocrisy and unnecessary suffering that results from it.
But it does still strike me as the final boss of "everyone would be able to just get along if we could talk things out" mindset.
Good post.
The absolute apotheosis of these kinds of fictional examples has to be Iain Banks' "Culture" series. The Culture, being a post-scarcity society that is run by nigh-omniscient AI, approaches every single potential conflict with outsiders with the idea that any rational society would inevitably prefer to join the Culture, and all it should take to convince them is to show off how perfect life is when you remove all hierarchies and social restrictions and accept the post-singularity as your lord and savior.
And when they encounter outsiders who resist, normally it's just a matter of identifying which of the leaders are 'irrationally' opposed to joining the Culture, and supplanting them through various means. In short, the Culture has mathematically proven that the only reason someone would resist the Culture is that they're 'mistaken' in some way, and once you correct them, the conflict evaporates.
Or so that's my take on the philosophical underpinnings of the books.
I think that there's something to be said for writing your antagonists with serious nuance, or even taking a character that was described as 'pure evil,' having them act in line with that description, but then getting into an explanation for why they are the way they are, and perhaps even writing your story so as to make them subtly heroic.
It can be a demonstration of skilled writing to flip the audience's emotional valence towards a character without technically changing anything about their basic traits and characterization. Perhaps not the most skilled or best example, but Snape from Harry Potter is one that every Millennial will think of.
Disney, for example, has gone back and created origin stories for two of their outright evil villains, Cruella de Vil and Maleficent, and from what I gather (I haven't watched the films) they do manage to 'humanize' them and even maybe vindicate them?
I would say that making a character ontologically evil as a simple fact of your fictional world is a bit lazy and can work for the story but becomes unsatisfying if it really does seem like the conflict wouldn't exist but for them being evil. That is, there are obvious routes that the parties could take that would leave everyone better off but these are ignored or refused by the villain without explanation so the story can happen.
Side note, I also think this is why "revenge" stories are so popular. When one party has been wronged in an irreparable way, it makes perfect sense that the only thing they could want, their sole motivation, is to inflict harm on the one who wronged them. And that's a motivation that can work for both heroes and villains! Although you can also write in 'mistakes' to explain why the harm occurred at all, or give the offending party some solid justification for why they did it.
I also think that writing with the assumption that even the most heinous and gleefully malevolent beings are really just mind-controlled or misinformed or perpetuating a cycle of abuse, or can otherwise be 'persuaded' of the error of their ways, is pretty lazy. You inherently lower the stakes, since there is always an 'out': the protagonist just has to find the correct words or a particular piece of information that brings the villain around and defuses the situation, without forcing a final confrontation and, you know, making the protag actually risk his life to save the day.
One thing I liked about the early seasons of Sherlock (RIGHT before it goes off the rails) is Moriarty literally just wants to fuck with Sherlock and will go to his grave to achieve it. There was never any outcome where Moriarty was convinced into joining the side of the angels, and if there was, it was because he wanted to be and presumably had some other plan involved.
I like my bad guys to have agency, to be aware, on some level, that they're hurting others and making the world worse, but choosing to do that anyway and being intelligent about how they do it!
I think I myself am a bit of a 'hybrid' theorist. That is, I mostly believe that most conflicts could be resolved by talking it out, recognizing which 'mistakes' each side has made, identifying a more peaceful option that benefits both parties, and avoiding the costs of a drawn out fight. Even if neither party changes their mind, they can probably find a way to peacefully co-exist rather than fight an existential battle that can end up killing both of them.
But... we live in a world of scarcity, and people can have utility functions that diverge enough that they can't easily be resolved without a LOT of effort. Sometimes, there are not enough seats on the lifeboat, everyone has strong reasons to want to live, and there is objectively not enough time to debate and discuss things such that one of the parties could be persuaded to sacrifice themselves. And thus things default to good old fashioned violence.
I believe that there are natural forces out there that don't care about your utility function. A tsunami can't be talked out of carrying your home and family away. There are creatures (mostly the parasitic kind) whose whole existence and reproductive cycle is based on making some other creature's life miserable. There are likely alien utility functions that value things that, if not quite the opposite of what you value, are so orthogonal that even learning of their existence might make you significantly worse off!
And perhaps most importantly, I believe there is a 'sanity water line' for humans, and only those above the line are truly capable of recognizing when a mistake has likely occurred, and that taking some time to discuss the matter will probably lead to a better outcome than immediately fighting. For those below that line, such negotiations and discussions probably won't bear fruit, and conflict may inevitably result.
And let's be clear, even those above the line can drop down below it under the right conditions or when confronting a particular sort of issue, and thus there is no real guarantee that a conflict can be averted if the otherwise rational participants are sufficiently aggrieved.
Now, all this is just to say, my general approach to people I seem to vehemently disagree with is "Assume mistake (either mine or theirs) until the conflict appears inevitable, then CONFLICT THE SHIT OUT OF THEM."
I suspect that the 'rational' calculus that leads to situations like Israel-Palestine is both parties determining that, under any foreseeable conditions, conflict is unavoidable in the long run, and knowing that the other party believes this too, so neither can allow the other to gain an irretrievable upper hand. Even if they try to signal willingness to discuss mistakes, the core disagreement is unlikely to be resolved before the conflict arrives, so each side operates on the assumption that conflict is coming.
At that point, I think the main debate is not 'conflict vs. mistake,' but literally whether one should accelerate the conflict and get it over with or try to delay it as long as possible and hope for a miraculous intervention.
Sort of depends on what you're comfortable with ethics-wise.
For instance, I didn't go into Personal Injury law even though it promises to be lucrative because the entire area feels scuzzy and designed to take advantage of people at their most vulnerable.
But I found an area that pays well enough (if you put in the work) and doesn't require me to check my principles at the door.
I'd suggest that if you're excluding all other priorities and your ethics are lax but not completely discarded, sales is the line of work that will end up providing the earning potential. If you're really good at it, you can move up to selling larger and larger ticket items, and the commissions you receive will grow proportionally. Once you're at the level of, say, selling yachts to multimillionaires, and you've mastered the craft, I'd suggest that is likely the lowest effort-to-money ratio of any career.
One of my favorite self-imposed challenges in CK II was to roleplay a Norse lord who got banished to Northern Africa, and whose sole goal was to restore his lineage to the throne of Norway.
It's nearly impossible to make decent progress with your starting character, so you end up marrying into other North African houses, and the hybrid kids that result are not exactly likely to make a convincing case to the Norwegians that they are actually related.
I've never actually pulled this one off because the layers of machinations that are required to get yourself in position to actually invade the North lead to all kinds of distracting shenanigans.
It's interesting because we're entering a period where you can use a computer to find effectively optimal moves in a given scenario. Stockfish does this for chess, and I'd wager that for any given computer game, machine learning could produce an engine that beats 99% of human players given the same input/output signals.
So if you want to give your players a crutch in game, just simplify the mechanics down to "let the computer suggest three mostly optimal moves, and let the player select from among them." Leave the actual mechanics of the game under the hood and invisible to the player, let the AI figure out how those mechanics play out, and then give the player the 'choice' that will actually move the state of play along.
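That "advisor" pattern is simple to sketch. Here's a minimal, hypothetical version using a toy game (Nim: take 1-3 stones from a pile, whoever takes the last stone wins) as a stand-in for whatever mechanics are actually under the hood; the player only ever sees the ranked suggestions, not the evaluation:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def value(pile):
    """Return +1 if the player to move can force a win from this pile, else -1."""
    if pile == 0:
        return -1  # previous player took the last stone; player to move has lost
    # We win if any legal move leaves the opponent in a losing position.
    return max(-value(pile - take) for take in (1, 2, 3) if take <= pile)

def top_moves(pile, n=3):
    """The 'AI advisor': score every legal move, return the n best, best first."""
    scored = [(-value(pile - take), take) for take in (1, 2, 3) if take <= pile]
    scored.sort(reverse=True)
    return [take for score, take in scored[:n]]
```

The player picks from `top_moves(current_pile)` and never has to understand why taking 1 from a pile of 5 is the winning line; the engine's model of the mechanics does that work for them.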
In this scenario, the player who takes time to learn the mechanics, fiddles around under the hood, and decides to make decisions without the AI advisor is almost certainly at a disadvantage; there's no way they can discover a better move that the AI missed.
But is the player who is at least trying to develop mastery of the game having more fun?
MAYBE!
Sub optimal play is fine. Sometimes perfectly optimized play is just the enemy of fun.
If I'm not in an actual competitive environment, but rather am goofing off with friends or doing a good ol' comp stomp, or when I literally just want to enjoy myself and not sweat my ass off, then I try to optimize for FUN.
FUN IS SUPPOSED TO BE THE GOAL, if pride or money or some other incentive isn't on the line. Not sure why you'd be 'optimizing' your play without accounting for the "Am I having fun" variable!
So yeah, once a game has become so well understood that 'optimal' builds, strats, items, and such are everywhere, it loses almost all appeal to me because it squeezes out the room for experimentation and the 'game' is now just about following a set strategy with as little deviation as possible. I'd argue that when it is reduced to a contest of who can execute the proper script more accurately/quickly, it ceases to be very game-like, where the challenge comes from the unpredictable elements.
I blame it to a large degree on Elo ratings making skill levels more legible. Now if you're NOT using the optimal strats, but instead playing around, everyone can see your ranking and make judgments about you.
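And that legibility comes cheap, because the standard Elo update is just two lines (this is the textbook formula, not anything specific to one game's ranking system):

```python
def expected(r_a, r_b):
    """Probability that player A beats player B under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, score_a, k=32):
    """A's new rating after one game (score_a: 1 = win, 0.5 = draw, 0 = loss)."""
    return r_a + k * (score_a - expected(r_a, r_b))
```

Every ranked match you play while messing around with off-meta builds feeds this update just the same as a tryhard game does, which is exactly why the rating punishes experimentation.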
I don't know if we have a similarly objective framework for identifying how much 'fun' a person is having in a game.
One of my great joys playing old school games was TRYING to force the game into weird edge cases or find a completely unique path to victory by trying less popular strategies and using the mechanics in otherwise sub-optimal ways that could still combine in such a way as to lead to a good outcome. Or setting little sub-goals or handicaps for myself so I have to actually get creative rather than just follow the optimal strats that I've memorized.
I think good game design should make it possible to use largely ignored mechanics or combine weak items in such a way that, with a certain amount of risk, you can 'surprise' a more skilled opponent who was following an established strategy but literally never encountered the scenario you've created and thus either adapts quickly or loses.
Of course said player will immediately adopt that strategy if it replicates, and soon it just becomes the meta. And that takes the fun away again.
And if I'm not having fun first and foremost, unless something else is on the line, I'm just not going to spend time on it.
Bingo.
Trump's version of the same lie is something like "I've been told that the Teamsters like me a lot. They say over a million of them support me, can you believe that? Enthusiasm like you've never seen, the Teamsters are going to vote for Trump in massive numbers, just unbelievable numbers!"
Here's that cyberpunk future you ordered.
Seriously though "e-bike load-balancing grifter" is a job description right out of Snow Crash.
More than likely it's some ex-programmer for the company who wrote or worked on the algo and just let someone else have it.
I kind of hate it in the same way I really despise hackers/exploiters in online multiplayer games. Yes yes, very clever, you're technically staying within the confines of the rules as defined by the computer code, but any other player can tell you that isn't how they intended to play the game and it ruins the point for them, can you spare any thought for that?
Sure, maybe the game dev/Lyft can update the code and fix things to be less hackable. But in the meantime you're making everything subtly (or not so subtly) worse for everyone.
Grumble grumble low trust society grumble grumble
ON THE OTHER HAND. I'm also not a fan of gamification intended to save a company money by offloading labor to users by using incentives that explicitly aim to change their behavior patterns. At least this one pays out actual money rather than amorphous reward points or 'achievements' that have no intrinsic value.
Ultimately it is impossible to make any system that is even slightly complex 'perfect.' There are always weird edge cases, and always tons of people motivated to find and exploit those edge cases until the weakness is patched. Either you foster a level of social trust high enough that people will intentionally not exploit these loopholes (and indeed, will be white-hats and report them on sight!) OR you can have a society that is wealthy enough that these niche 'parasites' aren't worth addressing.
Me, I would never even consider this kind of approach to making money (unless I was truly desperate) because there is absolutely nothing about it that is fulfilling to me, and I'd be very acutely aware that I'm basically imposing an externality on other users of the bikes.
But I understand and mostly accept that there are people who get a lot of 'fulfillment' out of finding out ways to exploit systems and 'get one over' on the powers that be and for them the mere knowledge that they're getting away with an unintended boon is probably enough motivation to do it. They like this better than being a sucker with a 9-5.
And they have a role in society too. It doesn't do to have your entire society simply ignore weaknesses in their critical systems because everyone is too polite and honest to comment on them, and thus vulnerabilities can persist until a catastrophe emerges.