This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Notes -
Previous discussion here or here or here or here or here.
There's an interesting Atlantic article here. I don't particularly believe or disbelieve its central thrust -- that ice cream has a variety of possible health benefits -- for reasons I'll get into later, but one particular quote is rather startling if considered in any serious depth:
St_Rev pointed out that this is actually academic misconduct, but it's worth spelling out how obvious this has been for over half a decade, even as no one called a spade a spade. Mozaffarian's conclusions say, in front of the paywall, that "Higher intake of yogurt is associated with a reduced risk of T2D, whereas other dairy foods and consumption of total dairy are not appreciably associated with incidence of T2D." Behind the paywall, we instead find that not only did his methods give as good a set of results for ice cream, they gave better numbers in most cases, on a pleasant and cheerful chart that the peer reviewers either did not read or did not find incompatible with the paper's summary. So at least one author, with no small career or current-day position, considers this the sort of thing that you casually joke about to a national-tier journalist, who in turn considers it not particularly worthy of highlighting.
Surely this is some schmuck that doesn't matter, whiling away his days in a glorified broom closet, writing papers no one cares to read at all?
Well, no. PubMed shows 125 papers citing "Dairy consumption and risk of type 2 diabetes", and Google Scholar gives over 400 citations. St_Rev points to his efforts on a hilariously bad and hilariously broad Food Compass proposal, though at least that proposal largely hit a dead end. But he's been appointed to federal boards by Presidents. That's not automatically going to make him the next Wansink, who managed to change the contents of store shelves across America based on numbers he just made up -- it's not like Mozaffarian's known misconduct is even a small fraction as bad! It's a nitpick, ultimately, and one that may eventually not even fall to Mozaffarian as opposed to some coauthor.
But it's not a nitpick anyone cares about.
Now, that's just nutrition science. Everyone knows the entire field's garbage, whether or not it drives policy; the literature is filled with hilarious stories like this, and not just starting from inside.
What about medicine and materials safety? Those who've read Scott's recent review of Rob Knight's From Oversight to Overkill will have seen a small mention of research misconduct:
The full story is a little boring, so to tl;dr: Doctor Alkis Togias proposed a study where healthy volunteers would first reduce some parasympathetic nervous system response using hexamethonium bromide, then use methacholine to induce asthma attacks. By doing so, they could better understand the role the parasympathetic nervous system played in asthma.
((name recognition is !!fun!!))
While methacholine was commonly used for this purpose, hexamethonium was not; it had started out as an anti-hypertension drug and had largely fallen off the market as other, better drugs in that class arrived. This wasn't exactly a treatment, contra Scott, so much as an attempt to test specific models of asthma. In many ways that made the death of a volunteer in the trial more shocking. It's not entirely clear what exactly happened -- Ellen Roche first reported feeling ill before the hexamethonium exposure -- but it's pretty likely that the drug was a large part of why her lungs failed. What drove the sizable regulatory response, though, was that the risks of Hexamethonium Bromide exposure were Known in older literature... kinda.
Togias had four studies showing safe use of the drug, some for similar pulmonary research. Older papers pointing to some of the risks were harder to find at the time, but even if located it's not clear how relevant they'd be. The studies he did locate were small, totaling only 20 participants, but not only were they allowed under similar IRB reviews, they didn't describe even minor complications.
... with an emphasis on "describe":
It's not clear how robust the other three studies were when it came to accurate description of the observed behavior, but that single study alone would have given 10%, reason enough to take a closer look. (Lest this come across as a defense of Dr. Togias, one of his own patients had this class of side effects just days before Mrs. Roche's fatal exposure; Togias did not report those complaints nor wait until the ill patient recovered.)
In the intervening decades and in response to the death of Mrs. Roche, medical studies have expanded the extent to which side effects are reported to review boards. If you wonder how well that would have helped someone reading through the papers, someone without access to the internal review board records of distant schools? Well...
Space is an in thing right now, so what about space? 1I/‘Oumuamua is a space thing that got into a lot of news reports as the first known interstellar object, including this paper in Nature arguing that it was an ice comet with some interesting traits. In response, Avi Loeb argues instead that the calculations used in the Nature paper are entirely incorrect. Which happens, if true. What's more interesting is how Loeb claims Nature responded, when faced with a question of fact:
Now, Loeb is a bit of a ~~nutjob~~ ~~eccentric~~ advocate of thinking outside the box. And we only have his word that his physical models are more correct, or that Nature editors said what he claimed. Of course, if he is a nutjob, he's a nutjob feted by a hefty list of big names and organizations, including Harvard and the President. More critically, he's got no shortage of papers in high-impact journals, both conventional papers and op-eds in Nature, none with asterisks. So either Nature isn't willing to correct a paper that whoopsied thermodynamics, or is willing to publish this style of author, or both.
Well, it's not like normal people do anything with space. Outside of speculative fiction and some astrophotography, few of us are ever going to need to think more than a few hundred miles away from terra firma. Even for scientists working in the field, it's not like anyone's putting Freeman Dyson's blueprints to action. So there isn't much value riding on things, really, beyond people's egos.
Speaking of egos, anyone heard of the Hirsch-Dias feud in superconductors? Jorge Hirsch is best known for proposing the h-index metric in academic publishing, but more charitably also for a number of models to explain high-temperature superconductivity. Ranga Dias is the leader of a team working out of the University of Rochester, doing high-temperature high-pressure superconductivity work, some of which conflicts with Hirsch's models. If you read a pop-sci article about carbon-sulfur-hydrogen superconductors, metallic hydrogen, or lutetium hydride, his lab's the actual group in question. The two don't like each other, and it's been a recursive mess of papers seeking retractions that were themselves removed. Right now it's looking mostly like Hirsch called it, though there are still some Dias defenders, in no small part because a few of the challenged works were replicated or 'replicated' by other labs collaborating with Dias. The latter option is a damning indictment of international condensed matter research.
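(Aside: for anyone who's only seen the h-index name-dropped, here's a minimal sketch of how the metric is computed. The citation counts are made up purely for illustration.)

```python
def h_index(citations):
    """Largest h such that the author has h papers with
    at least h citations each (Hirsch's definition)."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Made-up citation counts, for illustration only.
print(h_index([50, 18, 7, 6, 5, 2, 1]))  # -> 5
```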
I don't own a diamond anvil. There's only a few major labs around the world that do, and of those not all experiments are trying to replicate this stuff. Why would anyone care?
(Outside of diamond anvils being pretty expensive to use as glorified magic-8 balls, and teams of physicists not being cheap either.)
There was a snafu around a different proposed superconductor in August, with significant coverage and attention after a coffee merchant on Twitter gave a pretty long (and somewhat overstated) list of possible (if not likely) benefits. Somehow, the grapevine produced a feeding frenzy as increasingly varied hobbyists tried to mix the stuff up, sometimes literally in their kitchens. It turned out not to work, to the surprise of absolutely no one who's followed superconductor revolutions in the past. Indeed, the biggest surprise is that this seemed to be an honest and weird result which simply failed to pan out, rather than the typical fraud or instrument error.
Dan Garisto criticized this, while the various LK99 replication efforts were still cooking, as 'science as a live sporting event', where hype distorts funding and attention toward near-random focuses. It's a little awkward a criticism coming from Garisto, who's a 'science journalist' himself with no small impact on where people focus (and it's not clear Scientific American proper lives up to his standards), but it's not wrong: several labs looked at and spent a couple days reviewing a series of papers that otherwise would have received only minimal attention. That's why we're pretty sure the initial experiments were performed as described, but mismeasured diamagnetism and semiconductor behavior. There are still some people looking at LK99-related research, and I might even put it very slightly more likely than all of Dias' work panning out, but that's damning with faint praise.
The alternative to serious replication isn't "we saved time by not testing something that was useless." It's not knowing, one way or the other.
Which gets me to my actual point.
EDIT: Not just whether ice cream is clearly harmful or healthy, or whether hexamethonium bromide's harms were or weren't known, or Dr. Togias was or wasn't responsible for Mrs. Roche's death, or 1I/‘Oumuamua is or isn't a comet, or carbon-sulfur-hydrogen or LK99 superconductors work or don't work. It's not even that we don't know about these things, or would struggle harshly to find them. I can give answers, to some small extent and with little confidence.
It's that you shouldn't or can't treat these massive systems as much more earnestly engaged in finding those answers than some rando online, and you shouldn't trust them that much, either.
(For the record, probably not great or bad barring diabetes and the numbers are a selection effect, dangerous but undocumented, not really but should have tried harder, it's a rock, no, no.)/EDIT.
As a concrete example, I'll point to this paper. I have absolutely no idea if it's real or not. The entire field of covetics has an absolute ton of red flags, most overt in the sheer extent and variety of claimed benefits, but also in the extent to which some papers look like someone just shook a can of 'nano' prefixes onto the summary to spice things up. On the other hand, while Argonne National Labs does that buzzword-sprinkling too... well, Argonne doing it is a pretty strong point in favor of it not being completely made up. For whatever it's worth, there is no Wikipedia page, and Dan Garisto (and Scientific American) haven't found it worth examining.
But describing it as copper++ or aluminum++ is... if a bit of an exaggeration, not much of one. For a tl;dr, the proposed material trades off some additional manufacturing complexity (and ultimate bend radius) against vastly improved hardness, flexural strength, corrosion resistance, heat and electrical conductivity, even some weird things like capacitance. There are few fields using these materials where this would not have significant benefits.
If real.
Even if 'real', to any meaningful extent, it may still not be useful: there are a lot of manufacturing constraints, and the very traits that make it impressive-sounding may make it too annoying to work with. Great conductivity is a lot harder to use if the material cannot reasonably be drawn into wire, for example. Excellent corrosion resistance doesn't help if it's tied to vibration microfractures, as early titanium development discovered.
But even before those considerations, there's a bigger problem: I'm not sure I can trust any of this more than some random youtuber mixing up the stuff. The literature has a lot of conflicting claims, which might be a process matter and might be more serious fucking around; the real-world progress of the lab supposedly doing the most with the stuff (maybe holding the patent?) literally involves a RICO suit. Weird behaviors like that are commonplace in scientific and industrial developments that end up working out! They're also a lot of skulls.
In an ideal world, I could feed the academic literature into a big spreadsheet, average things out, and get a nice number. In this one, I can get a number; I'm not sure it wouldn't look like this.
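To make that concrete, here's a minimal sketch of the sort of 'big spreadsheet' aggregation I mean; the reported-improvement figures are entirely invented stand-ins for the conflicting covetics literature, not real data. You do get a tidy number out, but it papers over a spread wide enough to be nearly meaningless.

```python
# Hypothetical reported improvements (%) in electrical conductivity from an
# imaginary pile of covetics papers -- numbers invented purely to illustrate
# why a naive average over conflicting literature isn't worth much.
reported_improvements = [30.0, 5.0, 0.0, 56.0, -2.0, 12.0]

mean = sum(reported_improvements) / len(reported_improvements)
spread = max(reported_improvements) - min(reported_improvements)

print(f"mean improvement: {mean:.1f}%")  # one tidy number...
print(f"spread: {spread:.1f} points")    # ...hiding wildly conflicting claims
```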
And this is a case that matters, in the way a lot of science really matters. You could, as an individual or small business -- pending licensing agreements -- make or purchase a batch of this stuff, today, and implement it, perhaps with a sizable amount of trial and error, and if it were real, find significant benefit.
Would you want to make that bet? Because in a revealed-preferences sense, no one has yet. And while every business decision is a risk, there are reasons this risk seems undesirable, despite hundreds of thousands if not millions of dollars worth of past efforts supposedly promoting public understanding.
What happens if someone does? I'm not sure even successes would be well-documented, but the academic disinterest in negative replication, even from fellow researchers, is well-known. I don't expect it would be taken any better from industry randos, even were tired businesses in a huge rush to document their failures. Would even moderate success be something that could be meaningfully presented through academic means? How much could any of it mean, if an author or publisher can choose to drop any detail they want from discussion and still be taken seriously long after?
Or is this the sorta sphere where magics, in both the optimistic and pejorative sense, just float forever slightly out-of-reach?
I feel like this ties in with a bunch of other ideas that have been kicking around in my head for the last couple weeks. I'd started writing an effort-post on the different concepts of credibility and legitimacy, tentatively titled "Dammed Science", with the "Dammed" in place of "Damned" being an intentional pun, but it's still just not there yet.
That said, the bit I think is relevant is something that my boss' boss' boss said during our end-of-year townhall meeting/pow-wow. She said (and I am paraphrasing here): I don't like science and I don't like scientists, because there is nothing stopping a scientist from spouting bullshit. The only people who might call them on it are other scientists, and scientists are a cliquey bunch. Meanwhile, an engineer works under much more rigorous circumstances. An engineer cannot bullshit the way a scientist might, because any layman can look with their own two eyes and see if the airplane flies, the bridge stands, or the gadget works.
The conclusion of the speech essentially boiled down to "and that's why you're here": careers in Science and Academia are masturbatory; it's engineering that pushes the boundaries and drives progress.
I've caught a lot of shit, admittedly not without cause, for being "anti-intellectual", but I feel like that criticism misses the point. I'm not against intellectualism per se as much as I am skeptical of "the intellectual" as a class. Oh, studies say X? It would be a shame if someone actually put that theory to the test. ;-)
As a Science person with engineering degrees who doesn't like to do engineering1, I am suuuuuper skeptical of other Science people. More of them are simply actively bad at their jobs than is remotely acceptable, and you are 100% right that many of them face no repercussions from this due to the fucked up way the system evaluates work. Furthermore, totally agreed that the engineering folks have a much more visible benchmark for things working, and that is incredibly useful.
That said, if I were to defend those among my people who are good, I would say that one cannot reductively claim that it is only engineering that is pushing boundaries and driving progress. The story I once heard that might resonate with you was that if you were wanting to invade and occupy a country, you need four different types of people: spies, marines, army, and police. The spies have to be there early, get the lay of the land, a sense for what's going on, background information that informs choices of what it is that you're going to try to do and why. Once you have some idea, the marines have to go establish a beachhead, so that you can start to bring serious resources to bear on the problem. Then, the army has to very practically churn through huge piles of materiel, kicking in skulls and establishing concrete facts on the ground. Finally, once you've occupied the place, the police need to maintain order and keep everything somewhat functional.
The analogy is that the Science types are the spies. We try to figure out the lay of the land, when you don't even have a clue as to what types of things may be possible or not. The experimentalists who bridge the gap, pun possibly intended, between the scientists and engineers are the marines; they are often operating on shoestring budgets, trying to read our shit, figure out which ideas are most plausible, and cobble together at least some sort of proof of concept that it could actually work in the real world. Then comes the literal army of engineers. I admit that I'm a little jealous of how they get to see their stuff actually work, but maybe it's their ridiculously fat budgets that I'm more jealous of. They have to very practically establish routinized ways for the idea to consistently work in practice. Finally, you have the cops who maintain the whole thing and are the ones supposed to interact more with the 'customer' to make sure that their needs are being met. Presumably, if you just try to dive into a country with just your army, with no intel and no established beachhead, one could see the inherent difficulty of pushing the boundaries and driving progress. Maybe you could still get there, but damn if the endeavor isn't likely to blow even fatter budgets of even more obscene amounts of materiel, possibly toward goals that simply don't make any sense and are eventually doomed to failure, which you might have known if you had a proper understanding of the lay of the land.
Now here's the part of the analogy that I've come to add, but which I think makes sense. Not only do you need different types of people for these different jobs, but the way you evaluate the work that is being done in each stage is completely different. There is no sense in which you're going to evaluate a pre-invasion spy by the same sort of metric that you're going to evaluate the face-kicking army. It is, frankly, an unfortunate fact of reality that the nature of the work of spies leads to the possibility that they could totally bullshit you, and it can sometimes be very difficult to tell truth from falsehood. I don't know any honest-to-goodness real life spies, but I really wonder if they have some sort of similar dysfunction/skepticism toward each other that we Science types have toward our own. I also wonder if there just is a significant population of them who kind of suck at their job, the way many of ours do, but don't face many consequences because of the inherent difficulties of evaluation.
1 - I do math, and it's a tossup on whether reviewers will actually pay close attention to whether my proofs do, indeed, prove my theorems... or if they'll even bother reading the proofs and instead make their judgment entirely on the basis of shit like how many of their own papers I've cited.
EDIT: After reading @TheDag's comment, I would amend this by saying that your spies have a very analogous failure mode that is really really bad for you - double agents. They're actively working against you, against providing you knowledge of the truth, and for the adversary. This can be widespread, but also sort of localized. For example, if the Soviets totally convert your spy network there, they can completely wonk up your knowledge of what the hell is happening there, but maybe you still have perfectly good coverage of China. I would agree that there are vast swaths of the social sciences who have been entirely captured. They're worse than just having an evaluation problem; they're an adversarial problem.
I like this analogy specifically because spies are famous for their insane fuckups due to lack of oversight and a conviction that their ends are more than important enough to justify their means.
Shit like MKUltra, or the way multiple separate US agencies have financed and supplied various militias and cartels without any control over them, is public knowledge, but by the very nature of spying there are probably 5 fuckups for every one that goes public.
And those are the big-ticket items; a spy who just collects a steady paycheck while not gathering any useful info, and/or sends back fictional info because that's way less risk, is too common a WW2 story to even be notable.
But, like science, this doesn't mean that spying isn't a useful job, just good luck controlling it.