confidentcrescent

0 followers · follows 0 users · joined 2022 September 05 03:38:01 UTC

User ID: 423


Would you feel more comfortable with this process if we were able to produce data that illustrates that patients admitted with homicidal ideation are equally or more likely to kill someone as felons?

This seems to be a more specific group than previously discussed, so I'm not sure why data on them would matter to a discussion of involuntarily admitted patients as a whole. I also do not agree with rights being removed at a statistical level. Temporary violations of rights without due process are unfortunately necessary, but for a more permanent removal a just system requires an individual and adversarial process.

Fundamentally we need to establish what level of problematic behavior disqualifies from gun use.

I'm more concerned about the (lack of) process here, but given that it's a right, I'd accept taking someone's guns away at the same threshold that would justify locking them up for an extended time. If you wouldn't feel comfortable tossing them in a jail cell for their behavior, I don't think it's bad enough to take their guns either.

my co-workers [...] aren't going to abuse the commitment process for political reasons

Leading doctors in the US recently tried to distribute scarce health resources (covid-19 vaccines) by race. If that was non-political then non-political covers a lot I would consider political. I am concerned that some doctors will involuntarily admit a person for the purpose of getting them away from their guns long-term (i.e. past the immediate episode), and your word isn't sufficient to convince me that they aren't willing to do this.

Your usual crazy schizophrenic homeless person wandering around on the street was deemed safe to go home. How bad do you think the ones who get dragged in are?

I was under the impression these people do tend to get occasionally dragged in and involuntarily committed, then are eventually let go again.

Frequently (by no means all the time but often enough) that's grossly insufficient.

Why? You seem to be asserting that the risk of someone having a repeat episode while armed is unacceptable. I do not agree, and that disagreement is a primary reason why I'm against gun control. Freedom means authority figures should have neither the responsibility nor the authority to stop people from making shit decisions.

The modal involuntary patient [is] something like a schizophrenic who is so severe they just can't feed or care for themselves. Someone that disorganized isn't safe to own anything remotely dangerous, and if they had the financial ability to own a car (most don't) they probably shouldn't.

I agree this person is not safe to let out with guns, but the guns are irrelevant. The person you describe is not safe to let out, full stop. Not with guns or a car or even just their own fists.

The fact of the matter is that the vast vast majority of people who are involuntarily committed* really should not be allowed to own guns. Failures are rare. Should you find one (for instance someone who did a shit ton of PCP for ten years and then spent 50 years not using PCP and wants some guns) the expungement process works pretty well.

I do have disagreements regarding the place of suicidal people here, but I'll put those aside.

I don't trust that all of this is the case currently or that it will remain the case. The particular case described in the OP already does not look like the expungement process working well and I do not expect this to improve. There is a large group standing right behind your reasonable safety concerns who wants any possible excuse to keep guns away from people, and given your previous top-level post I'm sure you're well aware that doctors' politics lean heavily towards that group.

You're thinking of this system in the hands of an impartial party. I am expecting this system to be in the hands of an anti-gun crusader sooner or later and want it hardened against misuse.

I'd like the guy to go home to his guns after the medication works. This is a lesser violation of his rights than either option you have presented and no more complicated or expensive than the current system.

I can't speak for Nybbler but I read his comments as indicating he wants the same.

Enormously more efficient for the store, maybe. As a customer my perspective is that they just moved the cashier's job to you and gave you a shittier and slower interface to do it with.

The local self-checkout I'm familiar with requires you to scan items one at a time, takes a second to check the change in weight after you put each item in your bag, and if anything goes slightly wrong in this process you need the cashier's manual intervention which takes at least a minute. I'll usually take waiting for a couple people in line over that awful experience.

This whole thing only looks more efficient because the store isn't having to pay the customers to do this.

Removing these conditions makes for a very different argument than the original, which aims to sidestep the question of evidence for God by providing a chain of reasoning that assumes no evidence.

I'm less interested in going into a debate over whether the real world provides evidence for or against a God, but I've yet to see an argument that has moved me away from a position of ignorance on the subject.

I don't see how it could be otherwise than that they cancel out exactly.

Deciding a course of action is more likely than another action to get those infinite rewards would require some knowledge of God, knowledge that the Wager specifically excludes. The only state which is logically possible in our state of complete ignorance is therefore one where every action is equally likely to lead to infinite rewards. This would cancel out the infinite reward part when deciding which of two actions is better, as both are equally likely to get you there.
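The cancellation can be sketched numerically. This is a hedged toy model, not part of the original argument: it assumes a uniform prior over some number of hypothetical gods and, reflecting total ignorance, the same per-god chance that any given action earns the reward. All names and numbers are illustrative.

```python
# Toy model of the cancellation argument: under complete ignorance,
# no action can have a better expected reward than any other.
N_GODS = 1000
prior = 1 / N_GODS  # uniform prior: no god is more probable than another

def expected_reward(per_god_chance):
    # Expected (finite proxy for infinite) reward of an action whose chance
    # of pleasing god g is per_god_chance[g].
    return sum(prior * p for p in per_god_chance)

# Zero knowledge means no action gets a better per-god chance than another,
# so any shared value (0.5 here is arbitrary) gives identical expectations:
worship = [0.5] * N_GODS
abstain = [0.5] * N_GODS

assert expected_reward(worship) == expected_reward(abstain)
```

The specific shared probability is irrelevant; what drives the cancellation is that ignorance forbids assigning any action a different per-god chance than another.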

How would you go about figuring out which actions are more likely to lead to infinite rewards in this situation? Whence comes this knowledge about the unknowable?

Pascal's Wager is compelling because it claims to prove a benefit through logic. For the argument to still hold, "may be possible" isn't enough. I also have the opposite intuition and would find it incredibly surprising if we could logically go from zero knowledge to greater-than-zero knowledge.

If what you mean by more than one religion/source of infinite concerns is the modern version of Pascal's Wager that doesn't specify a religion and just says you should pick one, that version is still assuming a limited list of religions rather than the unconstrained list of any possible religion that a state of zero knowledge would require.

How can you estimate the probability space on a thing which, as the wager argues, is fundamentally unknowable through reason? Shouldn't every possible God be equally probable in a situation of zero knowledge?

The wager only works because it smuggles in the assumption that it's Christianity or nothing, but this is an unproven assumption.

I have trouble making an intellectual steelman of the people who are angry in the comments.

It seems obvious to me that they're angry because Scott just described them [opponents of PEPFAR] as being too callous to save a kid drowning in front of them.

"Do this thing you disagree with or you're a terrible person" is basically tailor-made to generate angry responses.

Good catch. I got mixed up between his mention of half leaving and the "gone in six months" timeframe, which is how I got an annualized 100% turnover. I'll edit my post to note that.

50% annual is better and a year makes the timelines a lot more reasonable for someone leaving post-training, but those numbers are still concerningly high.
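The annualization mix-up above comes down to simple arithmetic. As a hedged sketch using only the figures from this thread:

```python
# "Half gone in six months" annualizes two different ways.
six_month_rate = 0.50

# Simple (non-compounding) annualization just doubles the 6-month rate,
# giving the 100% figure from the original reading:
simple_annual = six_month_rate * 2

# Compounding instead assumes the remaining half churns at the same rate
# in the next six months, giving 75%:
compound_annual = 1 - (1 - six_month_rate) ** 2

# And if the 50% was actually measured over a full year, the annual rate
# is simply 50%, the corrected figure.
full_year_rate = 0.50
```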

Acquisitions usually see higher turnover than normal, but a ~~100%~~ 50% annual turnover rate is ~~insane~~ very high and I'm very doubtful that half your workforce immediately bounced from a job they were happy with just because they got some experience. Some people will absolutely do this the first chance they get, but having half a company made up of these guys would have to be the worst luck I've ever heard of.

In the end it's just the internet so I can't fact-check what you say, but training your employees only brings so much goodwill and I suspect that either the acquisition made their job a lot shittier or the job already had high turnover and you got suckered on the purchase.

The timelines here are also very short, which to me points to something more than people just getting poached with a better offer. People who already have a job usually take some time to find a good option when looking for a new one. That 50% who left within half a year either started looking immediately or wanted out badly enough to take the first thing that gave them an offer. For some of them, probably both.

To get back to the point about H1B workers, your post shows the problem with H1Bs very well. Your company is unable or unwilling to provide sufficient benefits to retain US workers, so you turn to cheap foreign labor instead. This is bad for local workers' pay and erodes the local talent pool, as fewer locals can find jobs where they can gain experience.

Edit: Lewis2 below pointed out that the turnover was 50% over a year (I read the post as 50% over 6 months). Edited to strike a section and make a couple changes, but I still think the rate is high enough to be indicative of other problems pushing employees away rather than them just gaining new options.

I'm confused as to how you see his earlier estimate as better at all, much less vastly better. Are you saying you'd be happy with a price tag which consists of the range $4000 to $1500000?

The chance of hitting the $1.5m upper end makes this price tag functionally identical to "idk, could be anything" for most people. Unless you have millions in assets, $1.5 million is already enough to ruin your life and put you in a place where you're probably staring down bankruptcy. Whether the cost caps out at $1.5 million or $1.5 billion is irrelevant to people who can pay neither of those numbers.

I suspect they are entirely jury-rigged to make the 1 look infinitely more prescient than the 25.

I think the format is just inherently rigged, and in the opposite direction from the one you'd immediately assume. The implication of a 1 vs 25 is that the 1 is in the worse position (as in a physical fight), but I think when you have a real-time debate format the inverse is true.

Consider what participating from the 25-person side looks like. Unless you have a very non-real-time debate, the 25 need to share time between themselves and also need to carry on each other's arguments. They need to defend positions another person raised and follow through on a path of attack another person started. And being real time removes the main benefit of having numbers, which is being able to workshop a response and pull out the best ideas from all 25. The result is an inevitable mess.

In practice I suspect the single guy only needs to be moderately consistent with his own statements, not get stumped for a response on anything, and do a passable job of poking holes in the jumbled political positions of 25 people combined in an ad-hoc manner. These aren't trivial tasks but they're well within reach for an experienced debater.

Defining cancel culture so broadly that you can't try to persuade others to stop associating with a bad actor seems like an overreach. Either you prohibit sharing anything negative or you end up trying to define what counts as a legitimate or illegitimate reason to call people out.

My opinion is that cancel culture operates at a level removed from the actual behavior you dislike and is more about interactions with third parties. To put forward my own definition: cancel culture is when you attempt to compel others, who are not themselves performing the objectionable behavior, to disassociate from it. Just trying to spread the reason you personally disengaged, or to persuade another party of the badness of a particular action, is not cancel culture.

The type of behavior defined above is toxic because it puts people or organizations in the position of having to take sides in areas which may be completely unrelated to what they do. This is why many things are splitting towards either woke or anti-woke stances and neutral is becoming harder to find.

A real-world example of this is the idea that "if you have 10 people sitting at a table and one is a Nazi, you have a table of 10 Nazis." Cancel culture is the idea that some things are too awful to interact with in any way; not denouncing and disassociating from them is sufficient proof you hold objectionable ideas.

This definition doesn't require debates over sharing comments like "that reporter who you think is honest has published a bunch of lies" or "the leader of that animal welfare charity secretly kicks puppies" which I do not think any reasonable definition of cancel culture should prohibit.

As long as you can interact with someone who continues to read that reporter or support that charity despite what you consider bad behavior then you have a society where people with differing opinions can live and work productively together. While this does not prevent people from deciding as a group that they dislike a behavior, a world where people followed this rule would mitigate the worst effects where third parties are pressured into deplatforming while leaving the freedom for people to stop directly supporting things they find horrible.

From Scott's sets of examples, I think this definition would define as cancel culture A5 onwards (unsubscribing from content simply because it platformed actors you dislike), B1 (newspapers holding the university responsible for non-official behavior of an employee), and possibly C2 (holding Atlantic workers responsible could go either way for me depending on whether or not they're in an official capacity at the time). It also leaves me agreeing with P3.

the GOP voted against it because…?

The moment Republicans vote for the bill, it will be touted as a bipartisan solution/compromise to fix the border crisis. Giving approval to the bill puts their reputation on the line, and if it then failed to sufficiently fix the problem, they would be caught in the fallout while the pressure on Biden was relieved.

Most IoT devices are billed as, "You just plug it in, and it just works!" No one anywhere is standing at a store, looking at the baby monitors, seeing that one of the options lets them listen to it from their phone, and thinking, "Ya know, I really better not think about buying this and plugging it in unless I become an expert in network security."

Let's say you were in charge of fixing this from the advertising side of things. What warnings would you add to this device so that even tech-illiterate users understand the risks of e.g. connecting this baby monitor up to the internet? Simple stuff you can fit on a pop-up or side of the box, because the user isn't reading the 100-page manual that probably already warns about this.

A big part of why you can just hand a toaster to someone with no further explanation is that people actually do know a lot about electricity and household appliances and can avoid the biggest problems. Nobody's dumping a live toaster into the sink to clean it.

Manufacturers should probably take this lower level of knowledge into account, but it's not as easy as "just make the device idiot-proof, like toasters!"

Practically hosting your own email is basically impossible, from what I can tell, due to spam blocking mechanisms.

If you haven't already done so, look into paying for a domain name and email hosting. There are plenty of companies selling these services, and owning the domain lets you change which one provides your email while keeping the same address. It's not all the way to hosting your own email, but it sounds like it could be close enough for the problems you're worried about.
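Concretely, provider portability comes from the DNS MX records you control as the domain owner. A hedged sketch using hypothetical names (example.com and the provider hostnames are placeholders, not real services):

```
; Zone file fragment for a domain you own (example.com).
; Mail for you@example.com is routed by these MX records:
example.com.  3600  IN  MX  10 mx1.provider-a.example.
example.com.  3600  IN  MX  20 mx2.provider-a.example.

; To move to a different host later, you only repoint the MX records;
; your address stays you@example.com throughout:
; example.com.  3600  IN  MX  10 mx.provider-b.example.
```

The numbers (10, 20) are MX preferences, with lower values tried first; 3600 is the record TTL in seconds.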

Do we know this is the case? Up until now I had assumed these tests got abandoned because much of the predictive power of the test was based on people not knowing how to approach this novel situation and having to figure it out on the fly. Once the secret got out and people learned a script for this type of question, answering got easier and more routine and the tests ceased to be such a good indicator.

I recall the timeline of the tests going away being shortly after everyone on the internet started talking about Google interview questions and specifically questions of this type, and I don't think that was a coincidence.

e.g., regarding quotas, apparently MSP expected 100 stops per month per trooper. That's 5 per shift. Let me ask you something: how do you think police supervisors should deal with a trooper who, upon review of his shift, has been sitting under an overpass all day making zero stops and playing Angry Birds on his phone?

By disciplining him for not working during his shift, which has nothing to do with the number of stops and everything to do with him ditching work to play games on his phone.

You might object that measuring this is unreasonably hard and that measuring stops is a reasonable proxy to check for that. I disagree.

You can check electronic surveillance, which many police departments are already moving to for other reasons. Body cameras, car cameras, and car GPS systems are a lot more common and any one of these should make it trivial to check if a police officer is doing nothing all day.

If for whatever reason you don't think these tools are sufficient to identify police abandoning their jobs, there's another option that works for any job where workers have overlapping skill sets: switch up who does what work. Put the officer who you think isn't working on a route where you know other officers regularly make many stops. Rotate a few officers who you know do good work onto his route. If the pattern of few stops follows the officer you're suspicious of, that's good evidence that he's not doing his job well enough.
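The rotation check above can be sketched as a simple comparison. The data here is toy, hypothetical numbers; the point is the comparison, not the values:

```python
# Compare a suspect officer's stop counts against what other officers
# post when covering the same routes.
from statistics import mean

# Hypothetical daily stop counts per route.
stops = {
    "route_A": {"suspect": [0, 1, 0], "others": [5, 6, 4]},
    "route_B": {"suspect": [1, 0, 1], "others": [5, 4, 6]},
}

def shortfall_follows_officer(data):
    # If low counts track the officer across every route while peers on the
    # same routes stay high, the shortfall is about the officer, not the route.
    return all(mean(r["suspect"]) < mean(r["others"]) / 2 for r in data.values())
```

In this toy data the low counts follow the officer across both routes, so `shortfall_follows_officer(stops)` returns `True`.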

an extremely harsh login wall that even Elon Musk has maintained, so even scrolling down on someone’s Twitter forces an unblockable pop up demanding sign-in

Just an aside, but I think that has actually been removed now. I don't have a twitter account and I haven't seen the login wall in months.

While I do lean towards the skeptical side about how far AI capabilities will get long-term, my main goal was to somewhat deflate the exaggerated OpenAI claim about current performance, which seems to have been cautiously taken at face value so far. Like some others in this thread I found the claim a bit unbelievable, and I had some time to dig into where it came from.

GPT might get good enough to compete with lawyers in the future, but the study doesn't prove that it's there now. In fact, things like needing the exam adjusted to provide each question separately strongly indicate the opposite.

I seem to remember similar arguments being made when Kasparov lost to Deep Blue.

I'm not familiar with them, maybe you could give some examples?

GPT-4 can pass a bar exam

…after a bunch of lawyers rewrite the questions and have it repeat the test multiple times with different settings and questions.

That's what you'll find if you read the paper this claim is based on, and it significantly diminishes the impressiveness of the results. A model that only gets results when led around carefully by a skilled human is more like a fancy search engine than the rosy picture of a near-human independent operator that the press releases paint.

Having questions rewritten by another person is almost certainly not allowed in the bar exam - the idea that someone who can understand legal principles and jargon can't comprehend a three-part question is laughable. And taking multiple tries at the same exam to get a better score is definitely out.

In my opinion, a reasonable claim that GPT can pass a bar exam would require demonstration that their chosen parameters generalize to other bar exams and the model would need to be able to answer exams without needing questions to be re-formatted.

Right now this claim looks like false advertising.

P.S. Did you know that the bar exam results were marked by the study authors? Or that all four authors of the study work in one of two companies planning to deliver products applying GPT to law?

I know abstractly that the statistics work out, but it feels viscerally disenfranchising.

It sounds to me like your instincts are picking up the increased potential for someone to sneakily cheat under these systems. As a voter, can you tell that the coin toss you're making is fair without referring to outside expertise? If it takes an expert to make the determination that part of the system is working correctly, it gets much easier to cheat.

Would answering that question not require grappling with the "dozens of other problems" raised by gun advocates?

We can't assume that any gun would work in a situation just because shots weren't fired. The threat criminals are reacting to is based on what would happen if the gun their target has (or is likely to have, if they run before they identify the gun) is fired at them. This threat will change once you restrict the guns people are allowed to have to a less dangerous variety.

I can't think of a good way to directly measure whether criminals find a certain gun a sufficient threat to be deterred. Debating stopping power, limited capacity, and other such issues seems like the best proxy we're going to get for whether a gun is a sufficient threat to deter a would-be predator.

then the deterrent factor is mainly because of the mere presence of the weapon and not its actual utility, because few perpetrators supposedly stick around long enough to get shot.

The problem is, that all those arguments assume that the gun is actually going to be fired, and since that's a statistically slim possibility, it's irrelevant.

If how effective the gun is doesn't matter, why not go all the way and require everyone to carry unloaded guns? Obviously that would deter nothing, which shows that the weapon does need to be effective to deter criminals.