
Showing 25 of 323187 results for

domain:anarchonomicon.substack.com

No, but please document your progress if you take it up, and post hints yourself. It's one of the hobbies I was considering myself.

I’m getting into carving/whittling, mainly because I want an offline hobby that keeps my hands busy and frees up my mind to wander. I mainly want to make small 3D animal figures as gifts for friends, does anyone have any experience/tips for a beginner?

The people with power are mostly white. Ergo white people DO have that ability. Not necessarily ALL white people (though see below). If a subset of white people is the problem, then that is an intra-racial issue.

Nope, just because the people in power are white does not mean "white people generally have the power of enabling that to happen". The statement is insanely racist and would not be allowed for any other group.

No one understood your statement to mean "ALL white people", so I don't know what the point of that part of the response is.

If white voters in the US REALLY wanted to limit immigration above all else they do actually have the power to do so. They just have to repeatedly vote for the people who want to do so

The fact that we didn't have to have the supermajority of white people repeatedly vote for unlimited immigration (I'd say "even when the economy is good", but the connection of immigration and a good economy is essentially made up, or the causality is outright reversed), clearly shows that someone has more power than "white people generally".

That's without mentioning the fact that there's absolutely no evidence that repeatedly voting this way would actually achieve the goal.

Well, I wouldn't use intentionality for bots at all. I think intentionality presupposes consciousness, or that is to say, subjectivity or interiority. Bots have none of those things. I don't think it's possible to get from language manipulation to consciousness.

At any rate, I certainly agree that every ideological person believes untrue things about the world. I'm not sure about the qualification 'for instrumental reasons' - I suspect that's true if you define 'instrumental' broadly enough, but at that point it's becoming trivial. At any rate, if you leave off reasons, I am confident that every person full stop holds some false beliefs.

That doesn't seem like the same thing to me, though. Humans sometimes represent the world falsely to ourselves. That's not what bots do. Bots don't represent the world to themselves at all. We sometimes believe falsely; they don't believe at all. They are not the kinds of things capable of holding beliefs.

I think translating code is probably a sensible thing to use a bot for - though I'm not sure it's fundamentally different in kind to, say, Google Translate. I grant that the bots have an impressive ability to generate syntactically correct text, and I'm sure that applies to code as much as it does natural language. In fact I suspect it applies even more, since code is easier than natural language.

I am less sure about its value for looking up scientific information. Is it really faster or more reliable than checking Wikipedia? I am not sure. I know that I, at least, make a habit of automatically ignoring or skipping past any AI-generated text in answer to a question, even on scientific matters, because I judge that the time I spend checking whether or not the bot is right is likely equal to or greater than the amount of time I'd spend just looking it up for myself.

The point isn't that you tolerate fraud, as in not policing it; it's that you police it, but you don't turn panopticon to go from 10 cases of fraud across the whole population to zero.

I don't know if this is quite right. It's not that high-trust societies police fraud just as intensely as low-trust ones but decide not to go the final mile. They actually police fraud much less than low-trust ones, take people at their word, and generally assume their good faith. This is kind of the definition of a high-trust society, and it's also been matched by my experience visiting them.

Common, well-publicized problems have common, well-publicized solutions. If your training data consists of 90-something percent correct answers and the remainder garbage, you will get a 90-something percent solution.

As I said above, Gemini is not reasoning or naive; it is computing an average. Now, as much as I may seem down on LLMs, I am not. I may not believe that they represent a viable path towards AGI, but that doesn't mean they are without use. The rapid collation of related tokens has an obvious "killer app", and that app is translation, be that of spoken languages or programming languages.

https://www.themotte.org/post/1160/culture-war-roundup-for-the-week/249920?context=8#context

That's a really good answer.

I suspect other factors would negate this effect. Such as selecting for self-motivated people who can afford to move and buy property. That filters out the listless and destitute. I predict a photo collage of these people vs equivalent income and age white Americans would not obviously show they are ugly losers, as comically shown in that link.

I keep inheriting MATLAB code at work. It is horrible. I can't use it in production, since the production computers are locked-down Linux machines that don't have MATLAB. I grit my teeth and do much of my work in MATLAB.

BUT NOW, we have an LLM at work approved for our use. I feed it large MATLAB scripts and tell it to give me an equivalent vectorized Python script. A few seconds later I get the Python script. Functions are carried over as Python equivalents. So far 100% success rate.

This thing rocks. Brainless "turn this code into that similar code" tasks take a few seconds rather than an hour.
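To give a feel for what these translations involve (a toy example I made up here, not taken from my actual scripts): the core of "vectorizing" MATLAB is usually replacing an element-wise loop with a single NumPy expression.

```python
import numpy as np

# Hypothetical illustration of MATLAB -> vectorized Python.
# A loop-heavy MATLAB pattern like:
#   for i = 1:n
#       y(i) = a(i) * b(i) + c;
#   end
# becomes one vectorized NumPy expression:
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = 0.5
y = a * b + c  # elementwise multiply-add, no explicit loop
```

The arrays and constant here are placeholders; the point is only the loop-to-expression shape of the translation.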

I had a thermodynamics issue that I vaguely remember learning about in college. I spent maybe a minute thinking up the best way to phrase the relevant question. The LLM gave me the answer and responded to my request for sources with real sources I verified. Google previously declined to show me the relevant results. I now have verified an important point and sent it and high quality sources to the relevant people at work.

It is not perfect. I had a bunch of FFTs I needed to do. Not that complicated. As a test I asked it to write me functions to FFT the input data and then to IFFT the results to recreate the original data. It made a few functions that mostly matched my requirements. But as the very long code block went on, it lost its way and the later functions were flawed. They were verifiably wrong. It helpfully made an example using these functions, and at a glance I saw it had to be wrong. Just a few hundred lines of code and it gets lost. Not a huge problem. Still an amazing time-to-results ratio. I clean up the last bit and it is acceptable.
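For reference, the round trip itself is tiny in plain NumPy. A minimal sketch of what the requested functions were supposed to do (my own toy code, not the LLM's output):

```python
import numpy as np

def to_spectrum(x):
    """Forward FFT of a 1-D signal."""
    return np.fft.fft(x)

def from_spectrum(spectrum):
    """Inverse FFT; for real input, the imaginary part is numerical noise."""
    return np.fft.ifft(spectrum)

# Round trip: FFT the input data, then IFFT to recreate the original.
rng = np.random.default_rng(0)
signal = rng.standard_normal(256)
recovered = from_spectrum(to_spectrum(signal)).real
assert np.allclose(signal, recovered)  # recreates the original data
```

This is the easy, verifiable core; the flaws in the generated version appeared in the longer surrounding scaffolding, not in calls like these.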

I won't ask these things about potential Jewish bias in the BBC or anything like that. I will continue to ask for verifiable methods of finding answers to real material questions and reap the verifiably correct rewards.

However, as time went on I largely gave up trying to discuss AI with people outside the industry, as it became increasingly apparent to me that most rationalists were more interested in the use of AI as a conceptual vehicle to push their particular brand of Silicon Valley woo.

Well, I for one wish you hadn't given up, as I have the same impression, but it's only an impression. It would be interesting to see it backed by expertise.

They are, but the latest predictive models are a completely separate evolutionary branch from LLMs.

They were anti-Trump to begin with, so they're absolutely on the 'Trump's on the list' bandwagon.

I've got to say, I don't know what to think after the 'nothing to see here' answer to the press.

I would say none of either.

They think it's a complete coverup, particularly the 180 turn of Patel and Bondi after an administration 'campaign promise' of sorts to get to the truth of things. Kash Patel's Joe Rogan appearance started the backpedaling, with things like 'oh, I didn't know the cell cameras were broken, but I've reviewed the footage'.

Are they fingering Trump too, or is it a case of good Czar, bad boyars?

Okay, so, this is all a fairly decent summary, but all it demonstrates is that the Democrat-Republican split basically failed to map in any coherent way onto a liberal-conservative axis well into the 21st century. You’re correct that Bob Dole and Jerry Falwell would have been horrified if their daughter had been caught dating Dimebag Darrell Abbott; however, a good mainstream 90’s liberal like Phil Donahue would be equally horrified, because Dimebag was the kind of guy who proudly displayed Confederate imagery. (And, again, he’d be far more mortified by his daughter dating Phil Anselmo, especially after seeing this clip of Phil throwing a Roman and shouting “White power!”)

And hell, even if you want to stick to country music and you want to claim Jennings as a “liberal”, how about guys like Travis Tritt? An openly Republican Bush-voting conservative, who had long hair and a beard throughout the whole period you’re referring to? I don’t think Southern guys at the time would have thought Tritt looked out of place at a honky-tonk — let alone that he looked like a leftist academic.

Basically what I’m saying is that beards and long(ish) hair could pattern-match to “working-class Southern man who drinks a lot and doesn’t act like Ned Flanders, but who also doesn’t like faggots or egghead professors” just as easily as it could pattern-match to “ex-hippie with proper NPR-approved beliefs” during the time period OP referred to.

I believe that AGI is possible and is likely to happen, but I also believe that Sam Altman is an inveterate grifter and that generative large language models are (for the most part) an evolutionary dead end.

A farmer once told me "farmers run land management companies with a farming problem"

And my point is that anyone who was remotely intelligent and vaguely familiar with both the internet and how LLMs function ought to have anticipated this.

The OP is the kind of person who is surprised when "Boaty McBoatface" wins the online naming poll.

If this was true, I have no idea how it didn't get him killed. There seem to be two outcomes: you go to jail, or someone flips out because you didn't go to jail and murders you.

Again, these are correct signals that I am sending intentionally. This IS a major part of my life. I DO spend at least 25 hours a week on anime and games. If you are looking to do "all the other stuff" that isn't gaming and anime and squeeze it around then you're not my 1 in 1000 and I don't want to marry you. That just sounds like a recipe for constant conflict and strife. While some amount of compromise is important in a relationship, and you should sometimes do things the other person wants to do for their sake, the less it's necessary because you both want the same things, the better. If one person expects to go out and do things all the time and the other wants to stay home all the time then at any point in time only one of them is getting their way. So if anyone sees this and realizes that I'm not the right person for them because I'm literally not the right person for them then good, we can both save some time and try to find someone more compatible. In practice, this did turn into me getting very few hits for precisely that reason. Most women saw my profile, made this assumption about me (correctly), they thought this was a negative trait, and then they didn't want to talk to me. Mission accomplished.

Because one did want to talk to me. Instead of dating and/or marrying someone like that, I found someone with whom I get to keep doing videogames and anime and my wife will do them with me. Well, she doesn't care for anime that much, but we play lots of games together. Sometimes we're just sitting next to each other playing completely separate games and she'll giggle as the monsters die and it's adorable. And sometimes she'll want to go somewhere and do something and I'll suck it up and go because it's not very often, because she's mostly like me and genuinely wants to be at home most of the time.

you have to either water down "very large" from 70%+ down to like 30%

Yes, I meant on the order of 30%. That's not a majority, but it's large enough that you can't just assume that everyone in the world agrees with it. For the type of framing that OP used, I think you need the percentage of people who disagree with it to be on the order of the lizardman constant.

It's not "naive" it's generating an average. If your training data is full of extraneous material (or otherwise insufficiently tokenized/vetted) your response will also be full of extraneous material, and again its not rationalizing it's averaging.

Sure, but so does everybody else.

The simple fact is that there's like a 2:1 guy:girl ratio on the apps.

And the girls are much, MUCH more selective than the guys.

So the pool of women being limited is, inherently, the issue. Some guys will lose, guaranteed. It's not a traditional market where you can achieve mutual gains through trade.

And in a zero sum game, optimizing to try to win just makes it harder on everyone at once.

And it IS a zero sum game. Every guy that pairs off with a woman is making it harder for the remaining guys to get what they want.

So telling guys to optimize their profiles is just increasing the competitive pressure with very minimal change in success odds. Improving YOUR chances makes some other guy's chances decrease, and vice versa. If you both compete as hard as you can, most of the efforts are wasted for no real gain.
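A toy way to see the cap (made-up numbers, not data from the apps): if each woman pairs off with at most one man, the total number of matches is bounded by the smaller side of the market, no matter how hard the men compete.

```python
def total_matches(num_guys, num_girls):
    # Each pairing uses one man and one woman, so total matches
    # are capped by the smaller side of the market.
    return min(num_guys, num_girls)

# With a 2:1 ratio, half the guys lose no matter what:
assert total_matches(200, 100) == 100
# Tripling the number of competing men doesn't raise the total:
assert total_matches(600, 100) == 100
```

Extra effort only shuffles which men fill those fixed slots; it can't create new ones.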

If I am not mistaken, that wasn't the kind of post he was making. I would suggest you're responding to a post along the lines of "We should figure out how to stop these disasters or bad things from happening to children." That may be a post worth making, but it wasn't made by OP.