hooser
0 followers   follows 0 users  
joined 2022 October 02 12:32:20 UTC

				

User ID: 1399


I also failed at demonstrating my humanity. I would appreciate knowing what's expected, in case the press-and-hold check comes into wider use.

If I use AI for critique and not for writing, would you still expect disclosure? Like, here's an example of AI use:

Me: I uploaded a draft of my thoughts on X. Give me a thoughtful critique.

Claude: What great thoughts on X! Now that ass-kissing is out of the way, here are some critiques. (Bullet points, bullet points.)

(Version A)

Me: I want to incorporate your ninth critique. I uploaded a revised draft. Give feedback that will help me improve on this point.

Claude: That's a unique take on the subject! Here are some ideas to strengthen your argument: (Bullet points, bullet points.)

(Version B)

Me: I want to incorporate your ninth critique. Rewrite my draft to do so.

Claude: I will rewrite your draft: (Writes an academic article in LaTeX.)

Version A is more like asking a buddy for feedback and then thinking some more about it, while Version B is like asking that buddy to do my thinking for me. Even in an academic setting, Version A is not only fine but encouraged (except on exams), while Version B is academic dishonesty.

I would like the norm on TheMotte to be against Version B, but fine with Version A. Would you agree? And would you still like a disclosure for Version A, and in what form? (E.g., "I used DeepSeek r1 for general feedback", or "OpenAI o3 gave me pointers on incorporating humor", or "Warning: this product was packaged in the same facility that asks AI for feedback".)

Has a C. S. Lewis quote for that.

And of course, if you don't see the utility in something, don't install it into your body. I have nothing against people who are happy with their existing bodies and minds, I just desire otherwise for myself.

Ultimately, it's good to have early adopters like yourself around. You are the willing guinea pigs for the rest of us. So I will gladly root for your success from the sidelines of techno-cyborg progress. If it gets me a spider-chair instead of a wheelchair by the time I need one, I'll be happy.

I strongly expect that additional senses will, while distracting initially, fade into the background until salient

Years back, I had corrective laser eye surgery. It was great to not muck about with glasses (old clunky technology) or soft contact lenses (newer, more streamlined technology). But I also found all this sharp focus quite distracting, especially during that first month when my long-distance vision was better than normal. Like, when driving, my attention would get drawn and fixed to those five-paragraph-essay parking rule signs ("parking permitted during A, B, C, except at X, Y, Z"). I had to re-train my brain to de-prioritize written signs. And yes, as you point out, eventually those signs indeed stopped drawing my attention, fading into the background.

But as a counter-example, my husband gets ear-worms. He goes into a store, and comes out with some inane pop song playing in a loop in his head for the next three days. Attention isn't as aligned to our needs as we'd like it to be.

All your examples present the idea of various sensory technology whose use is so seamless it feels both natural and unobtrusive. I agree on this: if one needs (or wants) to use sensor technology, seamless is better than clunky; and if one needs (or wants) to have continual or immediate access to the sensor technology, then it's hard to imagine something more seamless than a permanent augmentation that your brain fully adapts to.

Our disagreement rests on all those ifs. I have far more senses than I have attention, and my attention is very limited and therefore precious. I spend more time trying to minimize sensory input than augment it. I'm not just talking about earplugs and blindfolds for when I try to rest. Like, filtering out background noise when I talk to someone. Ignoring visual distractions when I read.

Do I really want to add ultrasound sense? Why, what am I going to do with that information? And do I need that info with continual or immediate access, all the time, to justify an implant?

By the way, you can totally do that with current technology: take a hearing aid, set it to receive ultrasound. You'd still need to use some of your actual senses for receiving the input, like taking those ultrasound waves and translating them down to normal hearing range. That will unfortunately interfere with hearing the usual sounds, and if you don't want that, you can use some of your less-used senses. Like, have it be a vibrating butt-plug or something. I'm sure one can train the brain to distinguish different vibration pitches after a while.
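The "translating them down to normal hearing range" step is just frequency shifting, and can be sketched as a toy heterodyning example (all frequencies here are made-up illustration values, not the spec of any real hearing aid):

```python
import numpy as np

fs = 192_000              # sample rate high enough to capture ultrasound
t = np.arange(fs) / fs    # one second of samples
ultrasound = np.sin(2 * np.pi * 30_000 * t)  # inaudible 30 kHz tone

# Heterodyne: multiplying by a local oscillator shifts frequencies.
# 30 kHz mixed with a 28 kHz oscillator yields 2 kHz and 58 kHz components.
lo = np.cos(2 * np.pi * 28_000 * t)
mixed = ultrasound * lo

# Crude low-pass filter: zero every FFT bin above the audible band.
spectrum = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
spectrum[freqs > 20_000] = 0
audible = np.fft.irfft(spectrum)

# What remains is the 2 kHz difference tone, now comfortably audible.
peak = freqs[np.argmax(np.abs(np.fft.rfft(audible)))]
print(round(peak))  # → 2000
```

A real device would do this continuously with an analog mixer or a streaming filter rather than a whole-signal FFT, but the principle is the same: the ultrasound content reappears at an audible pitch, stacked on top of (or replacing) the normal sounds in that band.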

OK, let's focus on the use-of-tongue-for-sight. How many hours a day are you, personally, willing to spend wearing a device that's exactly like BrainPort but geared for detecting ultra-violet light?

Basic humans don't see ultra-violet, but bees and birds do; flora and fauna have evolved to incorporate ultra-violet signals. Wouldn't you like to experience this aspect of the world directly? All you have to do is wear some specialized glasses with a specialized ultra-violet-light camera on the bridge of the nose, connected to a hand-held base unit with CPU and some zoom controls, which in turn is connected to a lozenge stuck to your tongue. You train yourself for a while, figuring out what those funny electrical-shock feelings on your tongue correspond to. I guess you'd need to use some kind of visualization on the monitor, with artificial coloring to highlight the ultra-violet. And after a while--yay!--you can "sense" ultra-violet!

Or, you know, you could just look at those visualizations with artificial coloring, like the rest of us basic humans, and skip the wearing of glasses connected to a hand-held unit connected to the lozenge on your tongue.

BrainPort is a big deal for blind people, because so much of our human infrastructure depends on sight. Similarly, a bee might be utterly lost without that ultraviolet sense, but just how crucial is it for me to see it, and if I have any technology able to sense it, wouldn't I just use that instead of wiring myself up to some gear and retraining my brain?

What about a BrainPort device that's geared towards infra-red? Wouldn't that be cool, see the world like Predator? Or, again... why not just put on some infra-red goggles?

Why in heaven's name would I want to sense WiFi? Isn't it enough that my WiFi-enabled devices do that?

Let's disambiguate reality from science fiction here. Neuralink's implant is indeed a cool breakthrough that, with much training, allows a person to control a cursor without the use of arms or legs. This is very cool for people who can't use arms or legs; beyond that, I don't see much of any other practical use for the electrical signals going down the spinal cord.

Neuralink's Telepathy (TM) is completely one-way: the device is reading the electrical signals in your spinal cord, and trying to interpret them as simple cursor commands. It does not send you secret messages that your brain magically decodes. It does not read any part of your mind. It doesn't know which thoughts produced the particular configuration of electrical signals, and what it felt like to have those thoughts. It doesn't know or care whether, to generate the signal that it interprets as "left-click", you had to visualize yourself dancing naked on the piano, or imagine yourself shitting. You do whatever works.

For the able-bodied among us: we have far-superior telepathy (not TM) of amazingly fine-tuned control of arms and legs. We have the amazing telekinetic (not TM) ability of moving stuff with those arms and legs. How much of that control would you be willing to sacrifice, to devote some of the electrical signal going through your spinal cord to an external device? For what purpose?

That's a responsible use of AI as a tool to refine your thinking and communication. I place that in the same bucket as using spellcheck or a calculator. Similarly, I would not expect a disclosure of the tool's use.

Sounds like she did the responsible thing and gave you the relevant info; now it's your-body-your-choice. There are ways you can minimize the chance of getting a herpes infection. Even so, I would not advise it for anything less than your near-future wife. As your friend can attest, getting stuck with herpes for life tanks your love life.

If I wanted to read polished prose expressing RLHF-ed opinions on charged current-events topics, I would read the New York Times.

The Motte expects thoughtful engagement. Someone posting AI output in place of their own ideas is being neither thoughtful nor engaged.

I support having a rule disallowing blocks of AI-generated text, with exception of meta discussions (like, "Claude output X to prompt P; here's what I think", or "Given prompt P, O3 says X but R3 says Y; here's what I think.").

I have a suggestion: when you post parts of your work-in-progress and want feedback, start with a personal note that says so. I've been skipping these posts (sorry, by now something that looks like LLM output doesn't get my attention), but I won't if I know that there's a human who is developing thoughts and welcomes feedback. Best of luck on the book!

Dogs: companions during good times, food during famine. That's what I tell my mutt. She'll have to learn to share the voles and frogs she hunts, if times turn bad.

I second the choice of OnShape! Yes, it's free for educational purposes. I volunteer at a local after-school STEM program, and we have elementary-school kids making CAD designs for 3D printers.

OnShape looks a bit intimidating at first (it wasn't UX-ed to death), but there are lots of how-to videos on YouTube. Get to the point where you can make a "Sketch" of something simple like a polygon, and then "Extrude" it, and you're on your way to making interesting designs. In my experience, so long as the kid is coordinated enough to use a mouse, he'll get comfy with the basics faster than an adult.

Best of luck!

Thanks for the thoughtful response. There is indeed a danger of "overtraining" Kindness on family (and by extension kin and friends) if one takes the Confucian idea of family being the root of Kindness. I think the metaphor still holds: a tree sapling that has healthy roots but fails to grow is a failed tree.

The advantage of training in Kindness on the people you actually know and interact with is that it quickly becomes apparent why Kindness is a hard virtue to achieve. Especially in the original sense of virtue as a moral force, a form of personal excellence that is actually useful in accomplishing something. If your father is eating himself into an early grave, what's a Kind way to dissuade him? If your teenage daughter is driving herself insane with social media, what's the Kind way to wean her off? Is it even Kind to meddle in their affairs? Are you sure of the superiority of your judgement? These questions get much harder, the nearer the people are to you.

Whereas if I train in Kindness on strangers, the typical failure mode is that it devolves into simple politeness.

Kindness as virtue is similar to the Confucian highest virtue of [Ren](https://en.wikipedia.org/wiki/Ren_(philosophy)), which I have seen translated as "humaneness", "beneficence", or "kindness". Kong Fuzi came up with the term himself, and the character 仁 is literally two radicals: 'man' and 'two' (or 'also'). Like "kind-ness", in the sense of considering someone else as like yourself or your kin.

(I promise to have a question for you in the end, after I set up the premise.)

Confucius (or rather his school) falls within the general framework of the Chinese political schools of thought of the time, which rests on three main questions: What is the Way (to fix the society)? What virtue (in the sense of personal power) does one get for following this Way? What kind of society does this Way lead to? (I'm loosely paraphrasing Van Norden's intro to classical Chinese philosophy, which is excellent.)

So the Confucian school regarded the virtue of Kindness as power, which makes sense: if you understand another person, does that not give you power to guide that other person in a way closer to your goals? The Confucian school was also adamant that this very useful power is hard to obtain. To truly be Kind, you need to spend years studying people, starting with those closest to you and whose foibles you are most familiar with. Thus the school emphasized family as the root of Kindness: if you can be Kind to your grouchy out-of-touch parents, your annoying siblings, your infuriating spouse, your disobedient children... well, then you're onto something. (In particular, maybe then you can transfer that power to being Kind to your grouchy out-of-touch boss, your annoying co-workers, your infuriating office mate, and your duty-shirking underlings.)

So my question for you is: do you regard the virtue of Kindness as something hard to obtain, something that requires years of diligent study, as opposed to a more common notion of "kindness" in a sense of good disposition or well-intention? And if you do: how do you go about obtaining this virtue? (I suspect that, as a modern progressive, your answer would be substantially different from Confucius.)

I just don’t think that a serious attempt at real data collection is going to happen for societal reasons

In "The Typical Man Disgusts the Typical Woman" post Update (the post that y'all are discussing), Caplan links to Emil Kirkegaard's analysis of four much more representative data collections:

  • General Social Survey (GSS), USA
  • NLSY Add Health, USA
  • Wisconsin longitudinal study (WLS), USA
  • German General Social Survey (ALLBUS), Germany

In all of these, OkCupid's stark disparity in ratings does not reproduce. Women's photos do tend to get slightly higher attractiveness ratings, but, you know, there's probably a reason why both men's and women's magazines are full of half-naked women.

Maybe you're right. I have drifted away from watching documentaries in the past decade, and even then my preference was for nature and science themes. It's possible that the standards of presenting evidence have significantly changed (deteriorated?) since then.

They experienced it, and they felt that the information from that experience could best be communicated to me by those fictional films. They felt it captured something about what it felt like.

I agree that the value of fiction is, among other things, in its success in conveying emotional truths. If "how it felt" is best conveyed with disturbing depictions of atrocious savagery told in flat matter-of-fact manner (like in Tim O'Brien's "The Things They Carried"), then that's what the author does. Nobody need question whether this specific instance of atrocious savagery happened, or even whether this type or this level of atrocious savagery happened somewhere in this time-and-place. Nobody need question such things, because that's beside the point, so long as the depiction serves to convey "how it felt".

The problem arises when fiction gets presented as historical fact. I would have a problem if a documentary on Vietnam intermixed historical footage with scenes from "Apocalypse Now", while Tim O'Brien reads excerpts from "The Things They Carried", especially if the intended audience is not familiar with either work or the author and thus is unaware that they are works of fiction.

I did watch it on Kanopy, through my local library.

I am watching a film about a subject about which I have, at best, a cursory knowledge; how much can I rely on its factual claims? If it's a work of fiction: not at all. Just enjoy the story. Some historical background might be rooted in fact, but I am not in a position to tell. If it's a documentary: I expect that factual claims are, to a large extent, true. Sure, I expect a documentary to cherry-pick its facts to present a compelling narrative, but that's what distinguishes a documentary from a work of fiction: the narrative is constrained by at least a few asserted facts. It's common for a film-maker to outsource the actual statement of facts to an expert. The expert bears the cost of getting the facts wrong; the film-maker bears the cost of choosing experts poorly.

So here I am, watching this acclaimed documentary about a topic I know little about. It shows archival-looking clips, with meticulous citations -- I trust that those clips are what they appear to be. It shows quotes from diplomatic archives, with meticulous citations -- I trust that those quotes are from actual diplomatic archives. Twenty minutes in, it (for the first time) appears to have an expert contextualizing the main subject, again with citations. Do I continue to extend my trust to the presented facts, confident that a meticulously researched documentary would feature solid expertise in the subject matter?

The two-minute narration is a mixture of factual claims and narrative spin; yes, I understand that "Congo Inc." is a metaphor, but: Did Congo's rubber really "smooth the way to World War I", or is that a terrible pun? Was Congo's uranium key to the US bombing of Hiroshima, or was it just the most convenient source? How much did Congo's copper contribute to the devastation of Vietnam, and how much of that devastation would have happened anyway, with other sources, if Congo's copper was not available?

What is this guy's expertise in, anyway? Fiction. He is a writer of fiction. He may be a very good writer, and he may even meticulously research the background setting for his novels. But he claims no historic expertise. And the work he's reading from claims no historical accuracy.

There was a saying from this past election, something like "Trump lies like a used car salesman, Harris misleads like a lawyer." (No offense to lawyers.) The film didn't lie, it misled, and it misled subtly; it misled about the apparent level of expertise.

If I were already an expert in DRC history, it wouldn't matter. As an expert myself, I would evaluate any claims by their content not provenance. But I am the opposite of an expert; I can point to DRC on the map and I have some vague knowledge of the 20th-century history of Sub-Saharan Africa in terms of de-colonization and a tug-of-political-influence between USA and USSR (and later China). So I cannot possibly evaluate these claims by their content, and I must cautiously rely on expertise. That's why presenting someone as an expert when he's not is a big deal for me.

So, maybe, this film just isn't for me? Maybe it's aimed at people who are far more knowledgeable about the subject matter, who would not possibly mistake the expertise of Bofane? The film-maker is, after all, Belgian, and maybe the local audience is far more steeped in the history of the country's former colony. But I don't buy it. The film premiered at the Sundance Film Festival, it's clearly shooting for a broad audience.

I would therefore like to make a prediction, even if I am too lazy to actually carry it out. Let's say that a poll is conducted among the film's audience. The poll takers watch the two-minute clip of Bofane talking (22:56 to 24:19). Then they respond to a question like this one:

Which best describes "Congo Inc.": (a) an academic publication by a professional historian, (b) a non-fiction account by a professional journalist, or (c) a work of fiction by a professional fiction writer.

My prediction is that less than 10% would choose (c).

What an impressive propaganda technique. That's my one-line review of "Soundtrack to a Coup d'Etat", and I mean it most sincerely. I really am impressed.

This quote from a New York Times film critic serves both as a quick plot summary and as the main impression the film conveys:

... a sprawling film that's a well-researched essay about the 1960 regime change in the Democratic Republic of Congo and the part the United States, particularly the C.I.A., played.

Let's focus on the "well-researched" part, the part that lends the film a documentary gravitas, the propaganda technique I so admire.

The documentary is a collage of footage, archival audio and video clips, and quotes with careful citations that briefly appear on screen. It doesn't have a narrator--except occasionally it does, like from 22:56 to 24:19, where English text quoting In Koli Jean Bofane's "Congo Inc." overlays archival footage while said author reads his work in the original French:

The algorithm Congo Inc. was invented [when] Africa was carved up. Capitalized by Leopold II, it was quickly developed to supply the whole world with rubber and smooth the way to World War I. The contribution of Congo Inc. to the 2nd World War was key. It provided the U.S. with uranium from Shinkolobwe that wiped Hiroshima and Nagasaki from the face of the earth while it planted the concept of 'mutual assured destruction'. During the so-called Cold War the algorithm remained red-hot. It contributed vastly to the devastation of Vietnam allowing Bell UH1-Huey helicopters, sides gaping wide, to spit millions of copper bullets from Kolwezi over the countryside from Hanoi to Hue via Danang all the way to the port of Haiphong.

Here's the beauty: "Congo Inc." is a work of fiction. It is a novel. It is not, and never claimed to be, an accurate and contextualized account of history, nor is it subject to the kind of critique for accuracy that a work of non-fiction would receive.

The technique allows the film to convey the impression of historical gravitas while absolving it of any responsibility for truth, accuracy, or context. What is there to criticize? All the film does is feature a Belgian writer connected to Congo by birth and some years of residence, reading from his work. It's a work of fiction--so what, when the main theme of the film is precisely the interweaving of art and politics. The film's omission of the category of the work is completely in line with its omission of such information about its other sources. Surely the film has done its due diligence by accurately citing the sources, thus providing any interested viewer with the requisite information to establish the necessary level of epistemology for the content of any citation it happens to feature. If anything, it's a mark of respect for the sophistication of the viewer that the film doesn't bother contextualizing these works, since surely the viewer is quite familiar with both the history of Sub-Saharan Africa in general, and prominent literary works of authors with Sub-Saharan African ties in particular.

Yes, its Sundance Festival Special Jury Award for Cinematic Innovation is well-deserved. I look forward to future adaptations of this technique, where documentaries about the CIA quote John Grisham's novels, and documentaries about the Catholic Church quote "The Da Vinci Code".

Clear commitment to a shared future helps. Seven months apart is not that long in light of years of marriage. My husband and I spent several months apart while engaged, and I know married couples who have spent a few years working in different states (and even countries).

Does anything prevent the two of you from getting engaged?

Sounds like you are most excited by the possibility of switching to HR. You can:

  1. Get the SHRM-CP (Society for Human Resource Management Certified Professional) certification. You can do it on your own time and at your own pace.

  2. Ask your employer either to transfer you into their HR department full-time, or to let you take on more HR tasks. If the latter, ask that your title reflect the new responsibility.

  3. Document all the HR-adjacent tasks you have already done, and keep documenting them. Use the documentation to negotiate (2).

  4. If for some reason you look into (1) and decide against it, the "Professional in Human Resources" (PHR) certificate from the HR Certification Institute is also good.

  5. If you are working through (1) or (4): learn how to use an AI to help you learn. I recommend Claude. Don't let it do your thinking for you, but do use it as a broadly knowledgeable tutor who sometimes goes off his rocker (so validate any concrete piece of info you absolutely need to rely on).

Good luck!

and currently am doing my PhD in Baltimore

Let's have some straight talk about the unspoken expectations of PhD and beyond.

Successfully finishing and defending your dissertation means very little if you haven't used your time while in the PhD program to establish a strong professional network. Without the latter, all you have is an extra line on your CV (or resume), and there are plenty of others out there with a similar or more impressive-sounding line in their CV. This is true even if you turn your dissertation into several publications, and even if those publications actually find readers beyond Reviewers #1 and #2. None of that is a substitute for a strong professional network.

Fortunately, building a strong professional network in graduate school coincides precisely with your desire for a community. Right now, you have fellow PhD students in relatively close physical proximity and in sufficiently close sub-fields / fields, pursuing similar-enough goals. All want to successfully complete their dissertations. All are working on something that (at least at the beginning) they found interesting. Quite a few of them will be your future professional colleagues. Building a strong professional network starts with organizing your fellow PhD students into a mutually supportive network.

Does your department have a weekly graduate student seminar, where grad students can present an interesting article or some partial progress on their dissertation? If yes, attend it and present in it, and hang out afterwards to casually discuss stuff with the presenter. If not, organize it. Ask your department head for pizza funds, chances are pretty good they'd be thrilled that someone is willing to take on the organizational task.

Are you in a program with too few grad students? Well, are there grad students in adjacent programs? It's very useful to be able to talk about your research to people outside of your field, and a bit of cross-discipline pollination goes a long way. Again, ask for pizza funds.

Have the seminar repeat at a regular time, so people get used to it being a thing. If weekly is too frequent, have it bi-weekly. Or first and third Thursday of the month. Invite undergrads that are heading into similar fields. Invite professors; quite a few appreciate the opportunity for low-stress chats about something in or close to their field. If there are local people in the industry that are relevant to your field, invite them too; industry people can bring boots-on-the-ground perspective that academics miss.

Do you or your fellow PhD students take classes? If yes, do they have informal study sessions? If yes, make a point to attend those. If no, organize one. It could start small: just you and one other student, and then make it generally known that others are welcome. Have it at a regular time and place, and be consistent about showing up.

Have you stopped by the office of every professor in your department to chat about their research? Do that. Ask also about the social aspects of their field: Where are the people who work in that field? Is it a more-or-less cohesive group, or are there rival factions? What conferences / forums do people in this field use to informally exchange ideas? Which journals do they value, and which are junk?

Are there local or regional conferences in your field? Do go to those. Preferably, organize some of your fellow PhD students to come with you. If there aren't... maybe there are, but you don't know it. Chat with your professors. Baltimore is plenty big, and it's close to so many other centers of academia.

And yes, by all means go running and attend church. Touch grass. Do what you need to keep healthy and grounded. But understand that, at this juncture, those are unlikely to be the communities that you'll keep.

I'm fine with accepting that Saul of Tarsus is not only a historical figure, but that the legends about him are sufficiently close to what happened to that figure in reality (+/- miracles). I am fine with having a high likelihood of a historical Jesus, and that this man was an object of a cult following, though I find it unlikely that the historical Jesus would match the Jesus of Christian mythology to any reasonable degree. I doubt the existence of a historical Judas, he's too convenient as a one-stop-scapegoat literary character.

For the purposes of the game hydroacetylene proposed, I am primarily interested in the literary characters of Jesus, Paul, and Judas, and I would consider their historicity only because it makes the read-the-Bible-as-if-it-has-an-unreliable-narrator game more plausible. They can then write some "tell it like it really was" books.

Let's say that for two out of the three of these figures, there is a lack of evidence outside of biblical literary traditions, which could well be apocryphal.