felis-parenthesis

It would be more focused to target buy-borrow-die by expanding the definition of realization to include using the asset as collateral for a loan. Buy for $100, take out a loan for $90 secured on the asset: no tax liability. Notice that the asset is now more valuable. Convince the lender that the increase in value is durable. Take out another $90 loan secured on the asset. Now you have realized $180, so an $80 gain becomes taxable, and you have money (the loan) to pay it without having to sell the asset.
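A minimal sketch of how that tally might work, using the numbers above. The `taxable_gain` helper, the cap at basis, and the idea of counting total borrowing against the asset as realization are my own illustrative assumptions about the proposal, not anything official:

```python
def taxable_gain(basis, loans_against_asset):
    """Hypothetical 'borrowing counts as realization' rule: total borrowing
    against the asset is treated as realized, and only the amount above the
    purchase basis counts as taxable gain."""
    realized = sum(loans_against_asset)
    return max(realized - basis, 0)

basis = 100                          # buy the asset for $100
loans = [90]                         # first $90 loan: realized $90 < $100 basis
print(taxable_gain(basis, loans))    # 0 -> no tax liability yet
loans.append(90)                     # asset has appreciated; second $90 loan
print(taxable_gain(basis, loans))    # 80 -> the $80 gain from the example
```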
thrombosis with thrombocytopenia syndrome, sometimes abbreviated to TTS
You might prefer this earlier version. It offers no real-world examples.
Notice the problem with the earlier version: it is too abstract. It gives the reader no reason to care. Of course we care deeply, but to get specific is to bog down in the high emotion of those specific cases.
And then what? The traditional language of "defensive alliance" will automatically derail the discussion because it elides the vital distinction between chaining alliances and isolating alliances.
Scott's post is worth re-reading.
Have I misunderstood the overturning of Roe v. Wade? I thought that it was overturned on the basis that abortion was a matter for the States, not the Federal Government. So a Federal abortion ban would be struck down by the Supreme Court; no point voting for one.
Ahah! Search terms, which lets me find https://np.reddit.com/r/ArchitecturalRevival/comments/1ixtk0y/am_i_the_only_one_feeling_uncomfy_with_posts_of/ which works on my machine :-)
If Woodrow Wilson were drawing up a new Fourteen Points for today, he would emphasize the right to self-determination of the people of Crimea. Western war aims include conquering Crimea to annex it into a Ukrainian land empire, perhaps as some kind of successor to the Polish-Lithuanian Commonwealth or the Austro-Hungarian Empire. Wilson would denounce that as immoral.
The USS Vincennes shot down a scheduled passenger flight back in 1988.
Huge screw ups happen.
The issue is that there are two distinct dangers in play, and to emphasize the differences I'll use a concrete example for the first danger instead of talking abstractly.
First danger: we replace judges with GPT17. There are real advantages. The averaging implicit in large-scale statistics makes GPT17 less flaky than human judges. GPT17 doesn't take bribes. But clever lawyers find out how to bamboozle it, leading to extreme errors, different in kind from the errors that humans make. The necessary response is to unplug GPT17 and rehire human judges. This proves difficult because those who benefit from bamboozling GPT17 have gained wealth and power and want to preserve the flawed system because of the flaws. But GPT17 doesn't defend itself; the Artificial Intelligence side of the unplugging is easy.
Second danger: we build a superhuman intelligence whose only flaw is that it doesn't really grasp the "don't monkey paw us!" thing. It starts to accidentally monkey paw us. We pull the plug. But it has already arranged a backup power supply. Being genuinely superhuman, it easily outwits our attempts to turn it off, and we get turned into paper clips.
The conflict is that talking about the second danger tends to persuade people that GPT17 will be genuinely intelligent, and that in its role as RoboJudge it will not be making large, inhuman errors. This tendency is due to the emphasis on Artificial Intelligence being so intelligent that it outwits our attempts to unplug it.
I see the first danger as imminent. I see the second danger as real, but well over the horizon.
I base the previous paragraph on noticing the human reaction to Large Language Models. LLMs are slapping us in the face with the non-unitary nature of intelligence. They are beating us with clue-sticks labelled "Human-intelligence and LLM-intelligence are different" and we are just not getting the message.
Here is a bad take; you are invited to notice that it is seductive: LLMs learn to say what an ordinary person would say. Human researchers have created synthetic midwit normies. But that was never the goal of AI. We already know that humans are stupid. The point of AI was to create genuine intelligence which can then save us from ourselves. Midwit normies are the problem and creating additional synthetic ones makes the problem worse.
There is some truth in the previous paragraph, but LLMs are more fluent and more plausible than midwit normies. There is an obvious sense in which Artificial Intelligence has been achieved and is ready for prime time; roll on RoboJudge. But I claim that this is misleading because we are judging AI by human standards. Judging AI by human standards contains a hidden assumption: intelligence is unitary. We rely on our axiom that intelligence is unitary to justify taking the rules of thumb that we use for judging human intelligence and using them to judge LLMs.
Think about the law firm that got into trouble by asking an LLM to write its brief. The model did a plausible job, except that the cases it cited didn't exist. The LLM made up plausible citations, but was unaware of the existence of an external world and the need for the cases to have actually happened in that external world. A mistake, and a mistake beyond human comprehension. So we don't comprehend. We laugh it off. Or we call it a "hallucination". Anything to avoid recognizing the astonishing discovery that there are different forms of intelligence with wildly different failure modes.
All the AIs that we create in the foreseeable future will have alarming failure modes, which offer this consolation: we can use them to unplug the AI if it is misbehaving. An undefeatable AI is over the horizon.
The issue for the short term is that humans are refusing to see that intelligence is a heterogeneous concept, and we are going to have to learn new ways of assessing intelligence before we install RoboJudges. We are heading for disasters where we rely on AIs that go on to manifest new kinds of stupidity and make incomprehensible errors. Fretting over the second kind of danger focuses on intelligence and takes us away from starting to comprehend the new kinds of stupidity that are manifested by new kinds of intelligence.
What is DR3?
Urban Dictionary doesn't know, nor does Wikipedia.
I seem to be missing vital context necessary to follow the law review article. In the United Kingdom the problem of "who pulled the trigger" is solved by the notion of joint enterprise:
Until 2016, the courts interpreted the law to mean that if two people set out to commit an offence, and in the course of doing so, one of them commits a different offence, the other person will also be guilty of that offence if they had foreseen the possibility that it might be committed.
For example, if two people set out to commit a robbery, but in the course of the robbery one of them pulls out a knife and commits a murder, the other party will be guilty of murder on a joint enterprise basis if he foresaw this as a possibility, but did not himself intend it.
Thinking about that myself, it strikes me that even UK law is not quite ruthless enough. Here is my theory of how a "two robbers, one shot" case should go.
"proof beyond reasonable doubt" is not a terminal value. The actual goal is to solve an optimization where the two big desiderata pull in opposite directions. First, one wants to live under a justice system that suppresses robbery and murder, so that one does not get robbed or murdered. Second, one notices that justice systems tend to turn into injustice systems. A naively designed justice system will turn into a graver risk than that posed by robbers and murders constrained by no justice system at all. At least in the absence of a justice system one may possess weapons and fight back.
The social dynamic is that a naively designed justice system that suppresses robbery and murder is a power honey pot that attracts the worst kind of people. In time the police force is manned by two kinds of people. The first are smart criminals who join the police to abuse police powers and rob and murder under color of law. The second kind of person starts off good, but is corrupted by absolute power and the malign influence of the first kind of person.
We have solutions to these problems. We split the justice system into three parts. The police investigate. The Crown Prosecution Service presents the case to the judge. The judge listens attentively to the defense explaining why the prosecutor is wrong. The instrumental value "proof beyond reasonable doubt" is there to poison the honey pot. Only nerdy, wannabe Sherlock Holmes types become detectives, and their personal motivation is to crack the case and find out who really did it. Needing to provide convincing proof for the prosecutor to present to the judge filters out personality types who would otherwise be drawn to the power wielded by the justice system. The wrong kind of person is filtered out because the system wields power as a system; no individual gets to indulge their personal power trip.
Return to the "two robbers, one shot" conundrum. We don't actually care which one pulled the trigger, and are happy to hang both of them. That works well to further the first goal of suppressing robbery and murder. If we care who pulled the trigger, a smart robber might find himself a stupid and violent partner to do the bloody part and take the drop if the victim dies. Ugh! We don't want that. But what of the second, more troubling goal of poisoning the power honey pot, to avoid attracting the sort of person who is drawn to police work for power and personal gain? The prosecution still need to prove the robbery element beyond reasonable doubt. And they still need to prove the murder, except for exact attribution, beyond reasonable doubt. I think that the honey pot remains poisoned, even without needing to say which robber fired the fatal shot.
I accept that you are 95% right about the big picture. The huge difference between coffee and fentanyl is the only thing that really matters.
Notice though, that I zoomed in on the specific issue of timing. Who dares to doubt an intervention that works well for the first year? I dare.
Looking at my reasoning, we see that it is mostly about social dynamics. Friends put out feelers to friends. The black market slowly becomes monetized and professionalized. Since it is illegal to offer bribes to policemen, there are several years of nudges and winks before police corruption takes hold. The social dynamics set a slow time scale that is not obviously related to specifics of what has been prohibited.
I'm keen to get some mention of budget or money into the short name.
Why? I reckon that the way that Support fails is that the proponents come up with a plan. The plan is good in itself, but costs ten times what is politically feasible. The plan goes ahead anyway, with 10% of the funding that it really needs. Fails badly :-(
A good comment reminds us of Scott's epic critique of addiction research. Perhaps we don't have affordable answers to addiction, and Support has a good plan that requires 100 times the politically feasible funding. Gets 1% of the funding it needs; fails very badly.
I'm looking at this graph which runs from 1999 to 2021 and depicts a terrifying rising trend.
That is a good question, and it exposes that I'm a little out of my depth. But I've spent a happy half hour writing some crude dice-rolling simulations, so what follows is partially checked (I'd like to draw some scatter plots too!).
Consider a data generating process using a red d6 and a green d6, where d6 is jargon for the ordinary cubical die with 6 faces. We regard the red and green dice as generating the red and green random variables. A third, yellow random variable is generated by adding together the red and green rolls.
Then red and yellow have a correlation of 0.7 (will checking with pencil and paper discover that this is 1/√2?). Yellow and green also have a correlation of 0.7. Red and green have a correlation of 0.00506. Now I'm regretting writing a dice-rolling simulation rather than a computation using distributions. That has to really be 0.
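Here is a minimal sketch of that kind of dice-rolling check, my own numpy reconstruction rather than the original simulation; the sample size and seed are arbitrary, and the pencil-and-paper answer to the 1/√2 question is in the comment:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
red = rng.integers(1, 7, size=n)     # red d6
green = rng.integers(1, 7, size=n)   # green d6
yellow = red + green                 # yellow = red roll + green roll

corr = lambda x, y: np.corrcoef(x, y)[0, 1]
# Pencil and paper: cov(red, yellow) = var(red) and var(yellow) = 2*var(red),
# so corr(red, yellow) = var(red) / (sd(red) * sqrt(2)*sd(red)) = 1/sqrt(2).
print(corr(red, yellow))    # ~0.707, i.e. 1/sqrt(2)
print(corr(green, yellow))  # ~0.707
print(corr(red, green))     # ~0, up to sampling noise
```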
But lines don't really work. Two of the scatter plots have lines at a definite slope, but red versus green is just a filled in square showing zero correlation.
I'd really like to get the third correlation to be negative rather than zero, to make the point about non-transitivity more strongly. Can I do that with dice? Yes.
Roll five dice: A, B, C, D, E. Generate three random variables:
Red = A + B + C
Yellow = A + B + D + E
Green = - C + D + E
Red and yellow share A and B, giving them a correlation of 0.57. Yellow and green share D and E, giving them a correlation of 0.59 (it has to be the same, but I'm out of time to do the computation exactly).
Meanwhile Red and Green share C, but with C subtracted from Green, for a correlation of -0.3.
That is shocking. Red correlates positively with yellow. Yellow correlates positively with green. But red and green have a negative correlation.
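A minimal sketch of the five-dice check, again my own numpy reconstruction rather than the original simulation; the exact values in the comments are the pencil-and-paper answers to the computation deferred above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
A, B, C, D, E = (rng.integers(1, 7, size=n) for _ in range(5))

red = A + B + C
yellow = A + B + D + E
green = -C + D + E

corr = lambda x, y: np.corrcoef(x, y)[0, 1]
# Exactly: cov(red, yellow) = 2*var(d6), var(red) = 3*var(d6), var(yellow) = 4*var(d6),
# so corr(red, yellow) = 2/sqrt(12) = 1/sqrt(3) ~= 0.577; same for yellow and green.
# cov(red, green) = -var(d6), so corr(red, green) = -1/3.
print(corr(red, yellow))    # ~0.577
print(corr(yellow, green))  # ~0.577
print(corr(red, green))     # ~-0.333
```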
Now we have reached the point where I really need scatter plots. I think the Red/Yellow plot and the Yellow/Green plot are basically the same (there is an offset because the red mean is 10.5 and the green mean is 3.5, but I don't think that matters). Red/Green contrasts by sloping down rather than up. It doesn't lie between Red/Yellow and Yellow/Green at all.
I don't have a good link on egregores, but this back and forth has one participant attempting to articulate a mechanistic and materialist conception of an egregore, while the other says, nah, that is just culture, and "... egregores are, if they exist, psychic or supernatural, not computer bits and not cultures".
I think that is interesting even though the link contains nothing definitive or agreed.
Repeating myself
They may win power, but not have the numbers to hold on to it.
Think about what happens after the Kronstadt rebellion. The soldiers mutiny and overthrow the Tsar. The Bolsheviks take power. The infighting starts. Where do they find the men to stab their colleagues in the back on their behalf?
It is not about the overthrow of the old regime; it is about the worst people rising to the top of the revolution and needing henchmen to do deeds that are repugnant to the earlier, idealistic revolutionaries.
I cannot get the link to work. I'm expecting something formatted like the usual https://old.reddit.com/r/ArchitecturalRevival/comments/<post_id>/<title_slug>/ link, but I'm seeing
https://old.reddit.com/r/ArchitecturalRevival/s/2Ax2KXHCWr
which takes me to a submission page. Guessing that the "s" is for submission, I hand edit the "s" to "comments"
https://old.reddit.com/r/ArchitecturalRevival/comments/2Ax2KXHCWr
but that just gets me "PAGE NOT FOUND"
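For what it's worth, here is a minimal sketch of the workaround I would try, assuming the "s" path is a share slug that the main www.reddit.com site resolves with an HTTP redirect to the full /comments/ URL; the redirect behaviour and the requests-based approach are assumptions on my part, not something verified against this particular link:

```python
import requests

share_url = "https://old.reddit.com/r/ArchitecturalRevival/s/2Ax2KXHCWr"
# old.reddit seems to treat /s/ as its submit page, so try the canonical host instead.
canonical = share_url.replace("old.reddit.com", "www.reddit.com")
resp = requests.get(canonical, allow_redirects=True,
                    headers={"User-Agent": "share-link-resolver/0.1"})
print(resp.url)  # the expanded /comments/<post_id>/<title_slug>/ URL, if the redirect works
```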
On the internet we get to choose our own celebrities:
Littlewood and Hardy instead of Laurel and Hardy
Paul J. Cohen instead of Leonard Cohen
Frank Ramsey instead of Gordon Ramsey
David Moon of X3J13 instead of Keith Moon of the Who
Which raises a different question. Rather than ask whether "every celebrity is like this", we might ask "Why are we choosing these guys as our celebrities?" Or we might ponder: who is choosing our celebrities? Us? Really?
Are there hidden influencers choosing our celebrities from behind a curtain, much like I'm trying to force you to celebrate Paul J. Cohen? Harvey Weinstein is a partial example; not entirely hidden, not able to make just anyone a star, but still wielding substantial covert power over which attractive young actress becomes a minor celebrity for a while.
Old people don't change their minds, with rare exceptions; they just die. Without death, there would not be change.
It is death that causes the lack of change. Will X lead to consequence Y or Z? Elon predicts Y. The years tick by. In twenty years' time X will have caused either Y or Z. It is becoming easier to predict with each passing year. Eventually everyone will agree how it turned out.
When will Elon change his mind? If he is old enough to die before the twenty years are up, he won't bother. He isn't going to live to see it and will not be personally embarrassed.
If instead he gets wonder rejuvenation treatment, and fifty years more life, the future becomes more real. He starts to care about where trends are leading because he anticipates seeing the eventual outcome. If Y is starting to look like a bad bet, Elon will change his mind.
Official government webpage https://www.nhs.uk/conditions/ehlers-danlos-syndromes/
I enjoyed that rant. It was very horseshoe. Douglas Macgregor is right wing, and he also says that we are ruled by the donor class.
I'm trying to talk about humans letting their language do their thinking for them. Language is mostly accident and happenstance. Language matters. Politically active persons have noticed. We no longer discuss "abortion" and "anti-abortion"; we discuss "pro-life" and "pro-choice". But my gut feeling is that deliberate attempts to shape the discourse by changing language are rare (or maybe common but nearly always unsuccessful to the point of vanishing without trace: who now remembers the attempt to re-brand atheists as "brights"?)
Instead, our social antennae tell us which words have a positive valence and which words have a negative valence. We go with the words of pre-existing language, and choose the actions described by words with a positive valence. That valence is historical and lacks contemporary relevance. In effect, the valences of our terminology are random, and that randomizes our decision making. When we outsource our thinking to the old accidents that have formed the emotional valences of pre-existing language, we give up our human agency. That is bad.
For example, the phrase "defensive alliance" has a positive emotional valence. So we join together in "defensive alliances" and believe we are doing the right thing. My claim is that "defensive alliance" is not even the name of a thing, so we literally don't know what we are doing. There are chaining alliances and isolating alliances. To join a chaining alliance is to live dangerously connected, and you end up going to war. To join an isolating alliance is to live dangerously isolated and to fail to nip growing evils in the bud; war eventually comes to you. Perhaps war can be avoided, by one method or another, but we don't think the choices through and surrender our agency to words without meanings.
Furry fandom is benign. If your children get involved in furry fandom, the worst that can happen is that they get mixed up in inverting Laplace Transforms. Yes, there is Yiff, and Bad Dragon, but humans are obsessed with sex; human social life is equally obsessed with sex outside of furry fandom. Keeping them out of the fandom provides zero protection.
One example of the fandom keeping it sane is Fox Dad, with its gentle self-mockery reminding everyfur not to take it too far. And notice that fursuits are removable. What frightens parents about transgenderism is that it encourages changes that are permanent. Or take a moment (or an hour and a half) to enjoy the Anthrocon 2023 fursuit parade, which is taking place inside the convention center. I'm tempted to argue that there is no backlash because the fursuits are so cute, but that would be missing the point. It is inside the convention center, not in the street! The normies are not going to reject something that they never see. Furry fandom doesn't have a toaster fucker problem because it is really just Beatrix Potter and Peter Rabbit.