Of course, the real situation is quite chaotic and a lot of it rests on a pile of ad-hoc decisions. The FDA got lucky with thalidomide - if you are very slow in approving everything, sooner or later some bad stuff will slip past faster regulators and it's time to uncork the champagne! - so for many years afterward they took it as confirmation that not approving stuff is much better than approving stuff. But it's not grounded in some well-founded general scientific truth. And doctors and what they think - at least rank-and-file doctors who actually talk to patients, not the ones who spend their days sitting in committees - have very little input and very little power in the process, as far as I know. It's not like "doctors were ruling it with their awesome knowledge and Trump came and started banning life-saving stuff because he's evil". It's "doctors did what the FDA said to do before, keep doing it under Trump, and will continue to do so long after Trump is forgotten".
I understand your experience. There is something strange about certain versions of H1B culture - zero pride in the work, zero interest in getting something done in a final, shipped-to-production sense. It's as if the only goal is to generate more work - good, bad, repetitive, doesn't matter - so that the billable hours stay strong.
I just can't imagine the mentality of this. Zero personal pride, zero interest in personal development, hyper autist levels of emotional disinterest in other people.
That has long been the case. There are a number of medications which are approved in Europe but not in the US, for example.
While true, this gets considerably confusing. Sometimes the more expensive drug is approved in the U.S., sometimes in Europe. Sometimes the "more dangerous" drug gets approved here, sometimes there. Political considerations of all kinds pop up (like childhood vaccines). It gets weird.
Compounding matters is the fact that sometimes things are not approved for an FDA indication, unlikely to get approved by insurance, unlikely to get approved by your hospital/pharmacy, scheduled, totally legit but 100% sure to get you sued if anyone complains and so on...
My favorite example is Gabapentin, which has thirty-seven million off-label uses but only two official ones - and 9/10 competent physicians will get it wrong if you ask them.
There are two recent companion articles I'd add to the discussion on this.
This one is pretty short and to the point. LLMs, without any companion data-management component, are prediction machines. They predict the next n tokens based on the preceding (input) tokens. The context window functions as a very rough analog of "memory", but it's really better compared to priors or biases in the Bayesian sense. (This is why you can gradually prompt an LLM into and out of rabbit holes.) Crucially, LLMs neither have nor hold an idea of state. They don't have a mental model of anything because they don't have a mental anything (re-read that twice, slowly).
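The "no state" point can be made concrete with a toy sketch. Everything here is illustrative: `predict_next_token` is a made-up stand-in (a real LLM replaces it with a neural network over a huge vocabulary), but the generation loop has the same shape, and the key property holds - the model is a pure function of the prefix, so its only "memory" is the context window itself.

```python
# Toy sketch of autoregressive generation. `predict_next_token` is a
# hypothetical stand-in for the model; the point is its statelessness.

def predict_next_token(tokens):
    # A pure function of the prefix: no hidden state anywhere.
    # Everything the "model" knows is in `tokens` (the context window).
    return (sum(tokens) * 31 + len(tokens)) % 100

def generate(prompt_tokens, n):
    # The loop just appends predictions and feeds the prefix back in.
    tokens = list(prompt_tokens)
    for _ in range(n):
        tokens.append(predict_next_token(tokens))
    return tokens

# Same prefix in, same continuation out: the "memory" is just the prefix.
a = generate([1, 2, 3], 5)
b = generate([1, 2, 3], 5)
assert a == b
```

This is also why editing the context "gradually prompts the model into and out of rabbit holes": changing the prefix is the only way to change what comes next.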
In terms of corporate adoption, companies are seeing that once you get into complex, multi-stage tasks, especially those that might involve multiple teams working together, LLMs break down in hilarious ways. Software devs have been seeing this for months (years?). An LLM can produce a nice little toy Python class or method pretty easily, but once you're into complex full-stack development, all sorts of failure modes pop up (the best is when it nukes its own tests to make everything pass).
"Complexity is the enemy" may be a cliche but it remains true. For any company above a certain size, any investment has to answer the question "will this reduce or increase complexity?" The answer may not need to be "reduce." There could be a tradeoff there that actually results in more revenue / reduced cost. But still, the question will come up. With LLMs, the answer, right now, is 100% "increase." Again, that's not a show stopper, but it makes the bar for actually going through with the investment higher. And the returns just aren't there at scale. From friends at large corporations in the middle of this, their anec-data is all the same "we realized pretty early that we'd have to build a whole new team of 'LLM watchers' for at least the first version of the rollout. We didn't want to hire and manage all of that."
TLDR for this one: for LLM providers to actually break even, it might cost $2k/month per user.
There's room to disagree with that figure, but even the pro version of the big models that cost $200+ per month are probably being heavily subsidized through burning VC cash. A hackernews comment framed it well - "$24k / yr is 20% of a $120k / yr salary. Do we think that every engineer using LLMs for coding is seeing a 20% overall productivity boost?"
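The quoted comment's arithmetic is easy to spell out. All figures here are the commenter's hypotheticals (the $2k/month break-even estimate and the $120k salary), not established costs:

```python
# Sanity-checking the break-even framing from the quoted comment.
monthly_cost = 2000            # hypothesized break-even price per user per month
annual_cost = monthly_cost * 12
salary = 120_000               # example engineer salary from the quoted comment

# Fraction of the salary the tool must "pay back" in productivity.
required_boost = annual_cost / salary

print(f"Annual cost: ${annual_cost:,}")
print(f"Break-even productivity boost: {required_boost:.0%}")
```

At those numbers the tool has to deliver a sustained 20% productivity gain per engineer just to break even, which is the bar the surveys below are measuring against.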
Survey says no. (Note: there are more than a few "AI makes devs worse" research papers floating around right now. I haven't fully developed my own evaluation of them - I think a few conflate things - but the early data, such as it is, paints a grim picture.)
I believe LLMs will be a transformational technology, but I think our first attempt with them - as a society - is going to be kind of a wet fart. Neither "space-faring giga-civilization" nor "paperclips ate my robot girlfriend." Two topical predictions: 1) One of the Big AI companies is going to go to zero. 2) A Fortune 100 company is going to go nearly bankrupt because of negligent use of AI, but not in a spectacular "it sent all of our money to China" way ... it'll be a 1-2 year slow creep of fucked-up internal reporting and management before, all of a sudden, "we've entered a death spiral of declining revenue and rising costs."
Why would anyone let their kids just hop onto these websites without doing basic due diligence or educating them on the reality of predators?
See downthread. There is a pervasive culture of letting your kids have unrestricted Internet access, it's hard to change, and anybody going against it will be seen as overly paranoid.
If a platform provides robust parental controls then they've done enough, full stop. The baton of responsibility is passed.
They should also ban pedophiles when they are reported and proactively look for them too.
Most office work is fake email jobs. 10x'ing the productivity of fake work is still fake. I do think that AI is going to roar on the margins. But the average office worker is doing very little that's productive in the first place.
A lot of AI is helping write emails on the front end, and then summarizing them on the back end. Nothing of value is being added.
So you are high-openness in all the ways that don't involve interacting with people.
What you are describing is a very narrow set of dogmas that are heavily enforced in Anglo academia, and to a lessening degree as you move further away from it. Many educated people in these circles will indeed be quite good at reciting the dogmas and recognizing when blasphemy is being committed, i.e. when a dogma is directly challenged. However, only a very select few can 1) actually work out how the dogma applies to their real life and 2) detect acts or words that undermine tenets of the dogma without explicitly challenging it. This is absolutely crucial to understand if you want to socialize with this crowd (i.e. Western PMC, but not literally the sociology-grad HR lady) without giving in to their class dogma.
Just... don't talk abstractions. It will almost never come up. If you find the opportunity (typically in small groups of high-T men), you can bring up some specific minor heresy points (crime, Joe Rogan, taxes etc.) and you will quickly get a sense of which ones are true believers and which are just pretending to avoid trouble. Even the truest believers usually cannot tie specifics to the dogma on their own and will easily commit heresies. That is why there need to be HR commissars at every corner. Absolutely avoid discussing abstractions until you create some rapport and understanding over the minor heresies. If you are not very good at figuring out who to trust, just give small signals (i.e. deniable jokes) and let the more confident take the next steps. You are hardly the first person who likes politically incorrect jokes. This is a basic but very effective formula.
Overall, I suggest you stop imagining yourself to be the first person to discover Twitter, or a lone wolf in a debate club.
What is the alternative to an opt in system?
Banning pedophiles from the platform as soon as they are reported, and proactively looking for them too.
How about parents take responsibility for their kids instead of imposing restrictions on all the rest of us because of their laziness?
Easier said than done. There is a pervasive culture of letting your kids have unrestricted Internet access, and I have a feeling that any parent who goes against this norm and, for example, stringently monitors their access or even prohibits them from using the Internet entirely (because arguably, kids shouldn't be on the Internet at all) is going to get looks from other people, or at least their kid will say "Billy gets to use the Internet, why can't I?" Unless everybody in a community agrees that the Internet is too dangerous for kids to use unsupervised, reasons like "but predators are online" sound a lot like "I don't let my kid outside after 3 o'clock because a stranger might come and snatch her."
The true benchmark is GDP. If LLMs truly can boost productivity in most tasks except pure manual labour, we would see GDP roaring. If we saw a 10-40% increase in productivity among the majority of the labour force, it would be like an industrial revolution on steroids. We are seeing lackluster economic growth; clearly production isn't booming.
Somehow tech has the ability to radically change how people work and provide people with amazing tools without boosting productivity much. We went from typewriters to Word to cloud services that let us share documents instantly across continents. We really haven't seen a matching boom in productivity. The number of office workers wasn't slashed by the spread of email, Excel, Google Search, or CRM systems.
So then would you agree that Roblox has said or implied "vigilantes" are just as bad as predators?
I think they are thinking Project Manager or Customer Pleaser-type stuff. Which does tilt mostly female from what I have seen but isn't super automatable yet.
I edited it on him. He is correct.
Why else would they entertain weird nonsense from a stranger unless they're getting something out of it?
It's not just currency. They can want the approval of adults, the satisfaction of curiosity, or simply somebody to talk to. Many groomed kids have a tumultuous home life and are extremely lonely, for example. My point being that it shouldn't be thought of in pure economic terms, so predators doing this to children is (or should be) unacceptable and predator catchers finding predators this way is (or should be) acceptable.
If this were the immovable object you assert it is, we wouldn't have this problem, since in that case children would always listen to authority figures telling them not to do this.
Again, it's oversimplified to think this way. Being impressionable goes both ways. Sure, some will listen to authority figures and not do this. But some will not, for many reasons: a preconceived distrust of authorities, being curious and thinking "what could go wrong", or simply not knowing of the dangers.
And this is unique to online gaming... how, exactly?
It's not. But Roblox is uniquely refusing to ban predators when people report them.
The mitigations around it can't be solved for through technological means alone.
Yes, but you can just ban predators who are reported to do this. You can at the very least also not ban people who find predators and get them arrested in real life.
While it may be true that Roblox should ban people more frequently, that wouldn't actually fix their PR problem.
It's not just "ban people more frequently", it's not banning people who find predators too. I feel like the latter is the biggest cause of their recent PR problem. Their tendency to not ban predators has been reported and documented before but it hasn't caused a huge media circus. Going after people who find predators is just a huge WTF moment.
I do agree that it's impossible to rid platforms of pedophiles before they strike but I'm willing to bet a lot of money that if Roblox suddenly reversed course, unbanned Schlep and started banning reported pedophiles, that their PR problem would virtually disappear overnight.
@OracleOutlook fixed it after my comment. It was originally a double-negative.
I’d be curious to know what type of businesses that 5% were used at. It might be good for things like writing boilerplate news and bad at ad copy. It might be good at picking up trends in engineering and business to business stuff and not so good at picking the new fashion trends.
I guess this calls for a Thunderdome
I am studiously silent on whether you could replace me entirely with one
I can't find the paper but I was linked recently to a study illustrating that generative AI performance on medical content drops precipitously when "not one of these" is added to the answer list and used.
We aren't dead yet.
I've ping-ponged between countries at one point, not for work but to fulfil other obligations. It got tiring and I got fed up at many points, yet I still somehow romanticise the idea.
The highly wistful bent of the third paragraph isn't meant to say "travelling is great" but to illustrate the strong compulsion I feel towards doing it in spite of the bits that aren't great. Which comes back to the idea of adaptation promoting certain behaviours.
The same way ‘woman/minority owned business’ fraud does.
Wait, what happens if you don't hit the minimum? Is there some kind of penalty that's worse than just burning tokens to hit the minimum?
Wouldn't is correct.
"Only enter information you wouldn't mind being leaked"
how many companies can directly turn voice synth
I recently called a plumber that had an AI receptionist pretending to be human, so at least one.
Something about it was so off-putting, though, that I ended up calling somebody else.
Oh wow she just now dropped a new video laying out a lot of her theory in one place.
It seems like this is something of an anti-Yakub theory, a modern well-informed reaction to it that proposes the actual proximate mechanism for creating white people: heavy metal poisoning causing DNA damage causing albinism. This was initially an accident, then may have been done deliberately (unclear by whom or why), and is now starting to happen accidentally again to both Black Americans and to Africans (who are, I gather from other videos, unrelated to each other). She's very mad at people who are still insisting on the old theory that white people came from (Yakubian?) genetic engineering mixing animals with human DNA, which she's disproven.
So it all started in Tanzanesia, where artisanal mining exposes everybody to heavy metals, accidentally causing birth defects like albinism. There is still some further level of deliberateness here (there's a "they" doing it) which I don't understand. But pale-skinned people are actually a very recent development and come from this Tanzanesia heavy metal poisoning. Albinos are still to this day treated badly there, which is why white people hate black people: in revenge for their recent memory of being abused and mutilated as albinos by Africans. And it's starting to happen to Black Americans too, because seafood boils are exposing them to heavy metals causing vitiligo.
She doesn't actually get into the Native American stuff in this one, but my head-canon is that it's something like: everybody everywhere was black, but then everyone other than Africans and (completely separately, with no relation) the real Native Americans (Black Americans) got this heavy metal poisoning and turned pale. Then at some point later the fake Native Americans (Siberian/Asian; she mentioned this in another video I skimmed but I'm not going to find it again) invade or are imported or something, and invent/are forced to believe the lie that they were there first.
From some screenshots she included, it seems at least some of this is LLM-potentiated.
Twitter is advertising Scottish HIV testing, shortly after I visited a gay bar. Does it know something I don't?
As far as I know, Coinbase still allows you to withdraw your coins to your own wallet whenever you want.
So you can, for most intents and purposes, think of those coins as 'yours' if you wanted to take them.
Unless your suggestion is that he find some dude to sell him BTC for cash, I dunno how else he would come to acquire the coins in his wallet.