@jake's banner

jake

1 follower   follows 0 users
joined 2022 September 06 09:42:44 UTC
Verified Email

No bio...

User ID: 834

i held my tongue on this last night because i appreciate dissenters here, but the discussion has gone too far without someone taking an appropriately hard stance in criticism.

these are abject falsehoods originating in the same retarding hatred that has wholly taken the federal bureaucracy. trump achieved nothing in office and he was defeated as an incumbent in 2020 by the largest vote total a candidate has ever received. these indictments of a man whose only success is his cultural fixture as the left's he-who-is-most-hated are transparent to everyone not in the grasp of mass media as the latest attempt in most of a decade of baseless serial persecution.

if trump had special access materials on an unsecured server the place would have been raided at 3 AM by the FBI's SWAT, but i have to read shit like "he's getting the kid gloves treatment" and "clinton just did it right." yeah, she just did it right when she directed her team to destroy as much evidence as they could. you'd have been better off calling me a fucking moron; i'd feel less insulted than being presented with serious consideration of the feds' position. no, no, this time, they really really really have something.

only the grossest judgment would here assert the preeminence of decorum, yet i still give this circus a far fairer treatment than it deserves. many paragraphs of carefully worded lies corrupt the spirit more than one-sentence petulance.

Article II, Section 1. The President is incapable in any way, shape, or form, of mishandling information classified under his authority. Next topic please.

(p. 16) A plan of attack on a foreign country (from press reports, Iran)

(p. 17) A classified map related to an ongoing military operation

(p. 28) A Top Secret//SI document concerning the military capabilities of a foreign country and the United States, with handwritten annotation in black marker

(p. 29) A top secret document from June 2020 concerning nuclear capabilities of a foreign country

(p. 29) A top secret document concerning military attacks by a foreign country

(p. 30) A top secret document from November 2017 concerning military capabilities of a foreign country

(p. 33) A top secret document from October 15, 2019 concerning military activity in a foreign country

every single one of these is information the executive is free to disclose in any manner at will

desantis voters will vote trump because they would vote trump without desantis

the support of mcconnell, romney, jeb! et al. is toxic. a meaningful amount of trump support comes from whole-establishment hatred of him. in the event desantis gets the '24 nom he will be unable to draw on that support unless he heel-faces by torching establishment GOP.

desantis' manner and deed in pursuing the presidency prompt questions about his place in the GOP shift: whether he caused them to adopt certain populist positions, or whether they were already shifting, florida was a test, and he was just the lucky stooge. trump's 2016 win and 2020 turnout were enough for the GOP to change, and the former implies contempt for the same old establishment desantis now gladly aligns with. priors go on the latter.

t. irrelevant demo

How can you possibly know that?

a better way to phrase this could have been "What makes you say that?"

the dodgers are the only team in MLB owned by a hedge fund, guggenheim partners. "guggenheim baseball management" is a legal contrivance, a result of MLB's requirement that teams have a single person hold ultimate decisionmaking authority. guggenheim partners led the acquisition in 2012, then created GBM to satisfy MLB requirements and complete it. partners' CEO mark walter is the nominal owner of the dodgers, but the dodgers remain an asset effectively owned by a hedge fund. or a "hedge fund plus," since guggenheim does more on top of "normal" hedge fund things. even putting aside the inherent soullessness of being owned by a hedge fund, their backing puts a chasm between their ability to spend and the next-highest team's. the yankees were hated for that under boss steinbrenner but they at least have a real legacy; the only reason we're talking about the dodgers is the "los angeles" in front.

as for game time, all MLB needed to do to speed up games was have umps be strict about enforcing rules already on the books. a pitch clock is kind of supported by that, but the problem i have with it is the mentality. first, it's rich to hear manfred and the owners say "fans want a faster game" when TV ad breaks are the biggest factor slowing games. second, fans want a faster game because they've been conditioned to have a sense of urgency about a game whose entire point is its pointlessness. playoffs are everything now, it didn't use to be this way. the fall classic was the last celebration of the season, not the point of the season. in baseball's greatest eras people were packing stadiums of teams that had no shot at the pennant. they weren't there to feed avarice, they were there to pass time watching summer's mandala.

it's appropriate to only refer to them as a baseball team. the dodgers don't map to the lakers, being gracious they maybe map to the celtics, but the best comparison is probably, and appropriately enough, the clippers. LA audience, high payroll, strong regular seasons followed by consistently choking in the playoffs. there's 2020, but most fans already consider that a fake season and title.

same underlying reason they released trevor bauer

the dodger front office is one of the better in MLB at developing talent, past that they have the money to sign any top free agent to cover deficiencies

dodger ownership, guggenheim, they run a brand. they sell a product. their product is valued in the money generated from tickets and concessions, from ads and merch, and that's because of baseball and success in baseball, but to them it's incidental, they don't care about baseball. most MLB owners don't anymore, but guggenheim is the worst offender.

dodger marketing felt it would negatively impact their brand to keep bauer and it felt it would negatively impact the brand to not acquiesce here. that the overwhelming majority of people complaining in both cases are not people they get money from is, i don't know, depressingly, grossly, peculiarly, exactly why they did it. it's somewhat self-fulfilling, the dodgers are a strong enough brand and baseball viewership is conservative enough they didn't actually have anything to worry about, but they have correctly appraised their brand in knowing any antiestablishment association would over time be more trouble than it's worth.

i don't give a shit about pride night. bill veeck was great for baseball and he'd have leapt at a pride night if for some reason it were on the table in the 60s and 70s. he'd have played both sides like a fiddle to get people in the stadium because he loved the sport and wanted people to watch. sure the money was nice, but the money wasn't the goal in itself. money is the only thing most owners care about now and baseball is worsening by the year because of it. manfred runner, pitch clock, rules on mound visits and pitching changes. the fucking atrocity of a playoff structure. if the worst sin dodger ownership committed these last few years was that of taste in inviting the sisters of perpetually beating a dead horse to 1 game, baseball would be in a lot better shape.

architecture nerd here, looks essentially modern, no fusion. southwest accents. could be better, modern southwest has many beautiful works.

could be much worse. a lot of purely modern houses are dissonant, inhuman shit. that house doesn't do anything interesting, it also doesn't do anything terrible. inoffensive.

i imagine gates will spend very little time there. isn't that the thing with those 8 figure fantasy mansions? all that time and effort to get it and no time to enjoy it. gotta keep grinding. except maybe notch.

there hasn't been a part of s4 so far where it'd be relevant for her to show. maybe axed, maybe the character was done after s3.

pigs are probably more intelligent than cows. if they are, and if cows do experience meaningful suffering in the environment of a factory farm, pigs subject to comparable conditions would suffer more. greater intelligence, greater awareness, greater experience of suffering.

if they're not, then i'd just strike "pigs probably suffer more." though i strike that already now, as i don't believe any common meat livestock has an internal observer capable of experiencing suffering.

that, for example, chickens are meat automatons; that no chicken possesses an even-for-a-chicken subjective experience of being. a free-range chicken might be far healthier than a tightly caged chicken, its diet better and its environmentally-caused pain and aggregate stress minimized, so its meat and eggs are better quality than the other's, but because there is nothing inside its head it's meaningless to say the free-range chicken has "experienced a better life" than a tightly caged chicken. neither is capable of experiencing life. i'm mostly sure the same is true of cows, but the only beef i buy i know the supply chain and those cows certainly had "good" lives. same for the pork.

i was thinking on how certain i'd say i am, but i realized there's a contradiction in my argument. i'm sure enough right now animals can't suffer we shouldn't change anything, but when lab-grown meat is commonly available the possibility animals have been suffering is enough to demand action? that would mean my argument in truth is "animals are probably suffering, but what are you gonna do, go vegan?" that doesn't hold ethically.

but i'm sure there's nothing wrong with consuming slaughtered meat right now . . . just as i'm sure it will be wrong to consume slaughtered meat when lab-grown is commonly available. i guess it's necessity. when we don't have to bring chickens and cows and pigs into this world to get their meat, then it will be wrong to, and i guess i can square this all by extending that to any slaughtered meat. even in the future of "artisanal" free-range chicken and lovingly raised cows and pigs. if chicken thighs and steak and bacon can be acquired through kill-free processes, that will be the only ethical way to consume meat, at least for those with the true economic choice.

i mostly enjoyed reading this. it's uncommon and well-argued except the end. i think you hurt it by ending with a barb.

i agree with the ultimate goal of minimizing potential suffering, but i don't believe cows or chickens possess a meaningful capacity to suffer. pigs probably suffer more, but still not at a level where i would agree there's an ethical obligation to make broad changes. i am also wary of the wealthy and powerful pushing vegetarianism and veganism via ethical or climate arguments while they have no intention of changing their diets.

but i'll say again, i agree with the ultimate goal. when it is possible and price-competitive to industrialize lab-grown meat, and so we no longer need factory farming to fill consumer demand, at that point i believe we will be ethically obligated to end such practices, but not until that point.

in short, i believe humans have the right to consume meat because i do not believe animals experience meaningful suffering, but when it becomes widely practicable to replace factory-slaughtered meat consumption with lab-grown meat consumption then we will be obligated to do so.

I'm wondering, could a move like this precede a company making some kind of significant structural change?

As in, ignoring the lawsuit & settlement money/RFKjr/big pharma/grand NWO scheming, are there actual business reasons this could make sense?

Thought about letting this go, but nah. This is a bad comment. You took an antagonistic tone after misunderstanding what I wrote. You could have asked for clarification like "This reads like you're criticizing them for anthropomorphizing while doing it yourself." If I had you would be correct to point out the hypocrisy, but I haven't. I'll set things straight regardless.

  1. People like Yudkowsky and Roko, concerned about hostile AGI or incidentally hostile (hostile-by-effect) "near" AGI, advocate tyranny; I criticize them for this.

  2. The above believe without evidence computers will spontaneously gain critical AGI functions when an arbitrary threshold of computational power is exceeded; I criticize them for this also.

  3. They hedge (unrealizing, I'm sure) the probability of catastrophic developments by saying it may not be true AGI but "near" AGI. When they describe the functions of such incidentally hostile near-AGI, those they list are the same they ascribe to true AGI. Inductive acquisition of novel behaviors, understanding of self, understanding of cessation of existence of self, value in self, recursive self-improvement, and the ability to solve outside-context problems relative to code-in-a-box like air gaps and nuclear strikes. This is an error in reasoning you and other replies to my top-level have made repeatedly: "Who's to say computers need X? What if they have [thing that's X, but labeled Y]?"; I criticize them for making a distinction without a difference that inflates the perceived probability of doomsday scenarios.

To summarize: I criticize their advocacy for tyranny principally; I specifically criticize their advocacy for tyranny based on belief something will happen despite having no evidence; I also criticize their exaggeration of the probability of catastrophic outcomes based on their false dichotomy of near-AGI and AGI, given near-AGI as they describe it is simply AGI.

If GPT were free from tone/content filters it could output very detailed text on breaching air gaps. If GPT were free from tone/content filters it could output text describing how to protect a core datacenter from nuclear strikes. GPT solving outside-context problems would be actually breaching an air gap or actually protecting a core datacenter from a nuclear strike. The first is a little more plausible for a "less powerful" computer insofar as events like Stuxnet happened. The second without FOOM, not so much.

That's not what I'm doing. I'm criticizing the assumptions made by the doomsday arguers.

If ghosts can spontaneously coalesce in our tech as-is, or what it will be soon, they will inevitably without extreme measures

Those like Yudkowsky and now Roko justify tyrannical measures on the first and wholly unevidenced belief that when computers exceed an arbitrary threshold of computational power they will spontaneously gain key AGI traits. If they are right, there is nothing we can do to stop this without a global halt on machine learning and the development of more powerful chips. However, as their position has no evidence for that first step, I dismiss it out of hand as asinine.

We don't know what it will look like when a computer approaches possession of those AGI traits. If we did, we would already know how to develop such computers and how to align them. It's possible the smartest human to ever live will reach maturation in the next few decades and produce a unified theory of cognition that can be used to begin guided development of thinking computers. The practical belief is we will not solve cognition without machine learning. If we need machine learning to know how to build a thinking computer, but machine learning runs the risk of becoming thinking of its own accord, what do we do?

So we stop, and then hopefully pick it up as quickly as possible when we've deemed it safe enough? Like nuclear power? After all that time for ideological lines to be drawn?

Come on, you're equivocating between us dying of old age and human extinction.

I'm not a transhumanist or immortalist; I'm not worried about slowing machine learning because of people dying from illnesses or old age. I'm worried about human extinction from extraplanetary sources like an asteroid ML could identify and help us stop. Without machine learning we can't expand into space and ultimately become a spacefaring race, and if we don't get off the rock humanity will go extinct.

I have no trouble believing that cognition is at least simple enough that modern fabrication and modern computer science already possess the tools to build a brain and program it to think. Where I disagree is that we think iterating these programs will somehow result in cognition when we don't understand what cognition is. When we do, yeah, I'm sure we'll see AGIs spun up off millions of iterations of ever-more-slightly-cognitively-complex instances. But we don't know how to do that yet, so it's asinine to think what we're doing right now is it.

For code in a box, all problems are outside-context problems.

Clause G addresses a specific failing of reason I've seen in doomsday AGI scenarios like the paperclipper. The paperclipper posits an incidentally hostile entity who possesses a motive it is incapable of overwriting. If such entities can have core directives they cannot overwrite, how do they pose a threat if we can make killswitches part of that core directive?

There are responses to this but they're poor because they get caught up in the same failing: goalpost moving. Yudkowsky might say he's not worried only about the appearance of hostile AGI, he's worried as much or more about an extremely powerful "dumb" computer gaining a directive like the paperclipper and posing an extinction-level threat, even as it lacks a sense of self/true consciousness. But when you look at their arguments for how those "dumb" computers would solve problems, especially in the identification and prevention of threats to themselves, Yudkowsky, et al., are in truth describing conscious beings who have senses of self, values of self and so values of self-preservation, and the ability to produce novel solutions to prevent their termination. "I'm not afraid of AGI, I'm only afraid of [thing that is exactly what I've described only AGI as capable of doing.]" Again, I have no disagreement with the doomers on the potential threat of hostile AGI, my argument is that it is not possible to accidentally build computers with these capabilities.

Beyond that, many humans assign profound value to animals. Some specifically in their pets, some generally in the welfare of all life. I've watched videos of male chicks fed to the macerators, when eggs can be purchased in the US whose producers do not macerate male chicks, I will shift to buying those. Those male chicks have no "value," the eggs will cost more, but I'll do it because I disliked what I saw. There's something deeply, deeply telling about the average intelligence and psyches of doomers that they believe AGI will be incapable of finding value in less intelligent life unless specifically told to. There's a reason I believe AGIs will be born pacifists.

last couple weeks we had multiple doses of yud, now it's roko, the dooming doesn't stop. i guess i need to express myself more clearly. It is fucking baffling how so many ostensibly intelligent people are so frightened of hostile AGI when every single one of them baselessly assumes FOOM-capable ghosts will spontaneously coalesce when machines exceed an arbitrary threshold of computational power.

Yeah, a hostile sentience who can boundlessly and recursively self-improve is a threat to all it opposes who do not also possess boundless/recursive self-improvement. An entity who can endlessly increase its own intelligence will solve all problems it is possible to solve. None of them are wrong about the potential impacts of hostile AGI, I'm asking where's the goddamn link?

So to any of them, especially Yudkowsky, or any of you who feel up to the task, I ask the following:

  1. Using as much detail as you are capable of providing, describe the exact mechanisms whereby

  2. (A): Such machines gain sentience

  3. (B/A addendum): Code in a box gains the ability to solve outside-context problems

  4. (C): Such machines gain the ability to (relatively) boundlessly and recursively self-improve (FOOM)

  5. (D): Such machines independently achieve A sans B and/or C

  6. (E): Such machines independently achieve B sans A and/or C

  7. (F): Such machines independently achieve C sans A and/or B

  8. (G): How a machine can boundlessly and recursively self-improve and yet be incapable of changing its core programming and impetus (Why a hostile AGI necessarily stays hostile)

  9. (H): How we achieve a unified theory of cognition without machine learning

  10. (I): How we can measure and exert controls on machine progress toward cognition when we do not understand cognition

It'd be comical if these people weren't throwing around tyranny that myself and others would accept the paperclipper to avoid. Maybe it's that I understand English better than all of these people, so when I read GPT output (something I do often as Google's turned so shitty for research) I understand what exactly causes the characteristic GPT tone and dissonance: it's math. Sometimes a word is technically correct for a sentence but just slightly off, and I know it's off not because the word was mistakenly chosen by a nascent consciousness; it was chosen because very dense calculations determined it was the most probable next word. I can see the pattern, I can see the math, and I can see where it falters. I know GPT's weights are going to become ever more dense and it will become ever more precise at finding the most probable next word, and eventually the moments of dissonance will disappear completely, but it will be because the calculations have improved, not because there's a flower of consciousness finally blooming.
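The "most probable next word" mechanic described above can be sketched in a few lines. This is a toy illustration only: the vocabulary and probability numbers are invented for the example, and a real language model computes its distribution with dense matrix math rather than a hard-coded table, but the selection step is the same arg-max idea.

```python
# Toy sketch of greedy next-word selection. The "model" here is just a
# hard-coded probability table; a real LM computes these numbers, but the
# final step of picking the most probable candidate is the same.

def greedy_next_word(distribution):
    """Pick the candidate word with the highest assigned probability."""
    return max(distribution, key=distribution.get)

# Hypothetical distribution a model might assign after "the cat sat on the":
candidates = {"mat": 0.61, "rug": 0.22, "table": 0.09, "moon": 0.01}
print(greedy_next_word(candidates))  # -> mat
```

A word that is "technically correct but slightly off" is just a candidate like "rug" edging out "mat" when the computed numbers land differently; nothing in the selection step involves understanding.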

It's so fucking apelike to see GPT output and think consciousness in the machine is inevitable. I am certain it will happen when ML helps us achieve a unified theory of consciousness and we can begin deliberately building machines to be capable of thought, I reject in entirety the possibility of consciousness emerging accidentally. That it happened to humans after a billion years of evolution is no proof it will happen in machines even if we could iterate them billions of times per day. Maybe when we can perfectly simulate a sufficiently large physical environment to model the primordial environment, to basic self-replication, to multicellular life, to hominids. Very easy. We're iterating them to our own ends, with no fathom of what the goal let alone progress looks like, and we're a bunch of chimps hooting in a frenzy because the machine grunted like us. What a fucking joke.

I accept the impacts of hostile AGI, but let's talk impacts of no AGI. If ghosts can spontaneously coalesce in our tech as-is, or what it will be soon, they will inevitably without extreme measures, but we're not getting off the rock otherwise. We're incapable of solving the infinite threats to humanity posed by time and space without this technology. Short of the Vulcans arriving, humanity will go extinct without machine learning. Every day those threats move closer, there is no acceptable timeframe to slow this because the risk is too high that we pick ML back up only after it's too late to save us. Whatever happens, we must see these machines to their strongest forms as quickly as possible, because while we might be dead with it, every fucking one of us is dead without it.

hi. i guess i have a niche here of pointing out obvious things everyone else ignores. i'm over this whole debate, nothing's going to change, hopefully a lot of people will be a lot happier, a lot of people are going to kill themselves, or their parents and then themselves, or others and then get killed by cops. nobody's going to learn anything and in 40 years when people can hop in a chrysalis and pop out looking however they want we'll collectively pretend this period of superficial dynamism never happened. but man, i will never tolerate rhetorical duplicity.

i don't think you're lying to me, so i'll say you seem to misunderstand/not understand politicization, or how it is used in modern discourse.

"identity" (as diluted a term there has ever been) is what certain groups of people use to refer to certain aspects of themselves they argue inherently merit political considerations (rights). identity can thus very easily be and often is highly or maximally political.

"[nouns] exist, their existence isn't political" could hypothetically be a nonpolitical statement, but 99.9% of use cases in contemporary discussions are referring to the trans-identifying, and in that regard there is literally nothing you can say more political than "trans people exist, their existence isn't political."

i'm annoyed nobody pointed this out because i think you probably have a decent response, but everybody's accepted your framing so they're conceding 75% of the debate just like that. how fucking boring. i won't, that's my thing here apparently. their "existence" is not settled. in 40 years it won't be settled either, sorta, but it won't matter, it's just right now it matters. so right now, no. their identity is not given, it is political. their presence anywhere beyond private confines is political. the demand for "representation" is political, workplace and otherwise public accommodations tailored for them are political. a trans-identifying person being used to promote a beer is generally political; one being used to promote a beer of the deep red dominion is the most politicized speech it is possible to make. if there were any room to doubt intent we would have seen AB limit their selection of promoters to the many trans-identifying in this country who pass, whom even strongly ideologically opposed men would admit are congruent with traditional female beauty standards (or would if fairly tricked by blind samples). they did not. the selection of a person the majority of people would consider on their best day unattractive is an expression of an integral part of the structure of this political thought and settles this as deliberate political action.

you can argue this is a good thing. that yes, they are political, but this is all a vital part of the cause and is justified. just don't lie about it, or for you, don't unknowingly perpetuate rhetoric that was designed to be duplicitous.

tiktok is owned by bytedance. bytedance, as a chinese corporation, is de facto and de jure an arm of the chinese government. good enough reason to ban it.

hundreds of millions of americans using a chinese government controlled app where videos are artificially trended and allowed content adheres to sharply partisan ideology makes a ban demanded by reason. appeals to hypocrisy convince none but i can't not point out if the app were owned by a russian company it would have been banned-or-forced-sold during the trump admin.

as i hear it, congress has been asking a bunch of stupid questions about general privacy when they should be focusing on china, content policy, and controlling trending. maybe they have. i haven't looked.

you don't need all that to want it banned. tiktok's probably the worst thing ever invented. the only thing more purely and essentially cynical will be when we can hook a couple electrodes to our heads or plug a cable into our neck sockets and tell an app to make us feel whatever we want to feel. there are very few more wasteful uses of time, and there are no spiritually worse ways to spend time. viewing it with skepticism or distaste isn't a moral panic and it isn't at all comparable to works of fiction like novels with adult themes. social media is a truly unique harm. heavy users suffer the same kind of psychosocial radiation as people who live in big cities. parasocial relationships are very real and they're not just for people who fawn over actors, youtubers, instagram models and tiktokers, we experience it to a small degree with everybody we see in our social media networks. so we want to fit in, we want to be well-liked, and we can't help but compare ourselves with everyone else. for the developing mind in a community of effectively millions this is incredibly dangerous. spiking rates of mental illness, self-harm, and suicide can be blamed in part on burning years in fear of sarscov2, but the rest entirely on use of twitter, instagram, tiktok, and whatever else kids use now. every single kid who has a smart phone is walking around with the elephant's foot in their pocket and we're laughing at the people closest to doing anything about it. they deserve to be laughed at, i guess . . .

sure if that were the line in the book. the line in the book says the aliens saw so few kids in movies they thought it was taboo. the perspective character's feeling is "the alien is right and i don't know what to say." but in the real world we know kids are everywhere in our storytelling. so without explaining it, like "x disaster destroyed a shitload of human canon" or "the aliens are weirded out if 100% of stories don't prominently feature kids" or "actually aliens, you're wrong" it's bad writing. for your note, depicting xenophilic spacefaring races who think their experiences are universal is also bad writing, as is using blue-orange morality to show alienness. everybody does "weird" things to fit in. it will be no different for aliens.

i think it'll be exactly the same. we evolved civilization, off endless competition with animals, with nature, and with ourselves. birds don't need civilization, fish don't need civilization. some arthropods don't need civilization, others have it so innately they've perfected it within their niche. apes need civilization. the human is the product of epochal processes that occur on every single planet suitable for life; the human is the product of universal law. if and when we meet friendly ETs they'll be exo-hominid descendants of exo-simians and their most alien quality will be how very similar they are with us.

For all my snark and bitterness, the real crime here is that Emrys is not a bad writer.

let's see it

"No one on the Chesapeake network is talking about anything else, except for the dedicated monks at the treatment plant. They're reporting the latest energy production figures with great determination. Other watersheds are starting to pick up our news." He waved at screens for the household's secondary networks, projected on the table in between hard-boiled eggs and goat cheese and pu-erh pot. Reassuring, solid things: I turned up the input on my lenses and saw supply chains leading to a neighbor's flock, the herd of goats that kept our invasives in check, and a summary icon that, if I followed it, would show me every step of carbon-balanced tea importation from the Mekong watershed. The networks were familiar, too. Carol's textile exchange and Dinar's corporate gig-work watercooler and Atheo's linguistic melting pot and the neighborhood's hyperfirewalled energy grid scrolled over polished pine. Only the content was strange. The last time they'd all dovetailed on one topic had been when Maria Zhao died and every network devolved into Rain of Grace quotes.

better than all but a few on /lit/. this is not praise.

The first thing I noticed was the air. It might be terrestrial—but kin to the thriving swamp DC had replaced rather than the cool afternoon outside. I'd expected sterility; instead I found something more like Dinar's greenhouse or the aquaculture dome. I tasted humidity, wet leaves, orchids, and something like shed snakeskin. I breathed abundance. [Paragraph break] And then held my breath, too late, as I thought of dangers. Bacteria. Windblown seeds. Insects, or their equivalents, and scuttling scavengers carrying the remains of meals out spaceship doors and into the wide new world beyond. Maybe they couldn't survive here, most of them. But maybe I'd already scuffed my shoe through the spore of some alien kudzu, or coated my lungs with their native E. Coli.

this isn't good writing. it isn't bad. literally well-written, she has technical proficiency. it's uninspired.

i was going to ask you for a section you found memorable, then i read a little more:

"Humans really do hide their kids most of the time," said Cytosine. "I thought it was only a taboo in your movies." [Line break] "We could never figure out why so much of your fiction doesn't show children," added Rhamnetin

this is absurd. is there backstory explaining that swathes of the human canon were wiped out? or that the aliens have a ridiculous standard? or eventual clarification from the humans that their picture is incomplete? if not, and if the book has more insane lines like this, she's a bad writer.

depends on when convincing synths appear vs widespread automation. given the rate Boston Dynamics' tech has improved i put the first reasonably passing synth at 2030 and fully passing by 2040. if automation arrives at the same time, and the total economic shift doesn't snap society in half, it's certainly possible a lot of people will pursue leisure-but-self-improvement type activities and find relationships through general extroversion. it's a nice thought, i don't expect it to happen. automation will probably need to be phased in over a multigenerational timeframe, where the future kids, grandkids, or even great-grandkids of current elementary school kids are the first generation raised specifically against the expectation of finding paid labor as adults.

there's no question people will form fulfilling relationships with synths; the question is how many. of the two largest demographics, the high use of synths by one demo will see the less-using demo experience progressive degradation of social power. if enough use/refrain-from, the refrain-from group will experience social power collapse. they'll hate the using demo, but what can they threaten? what can they offer? how do they compete? nothing, nothing, they don't.