fmaa

0 followers   follows 0 users
joined 2022 September 17 17:51:56 UTC

User ID: 1241


This is the usual facile reply to complaints about the ugly architecture of the 20th and 21st centuries. Uh, you complain about the ugliness of modern brutalist buildings, but actually modernism and brutalism are separate architectural movements, and the current-year trend of ugly concrete boxes and geometric turds is called something else.

You don't need to be versed in the jargon of an insular art field to criticize its output, especially for architecture, where this output is forced upon millions of unwilling victims to suffer daily.

Treating bureaucrats and architects like malicious genies might not be wrong, but would hardly make them blameless. Though really the most fantastic part of this scenario would be them caring what you say in the first place.

I'm aware of the difference. Part of my point was that naming a specific art style "modern" and then sneering when people use that word to mean contemporary is just being a condescending asshole and bad at communicating.

The other, bigger part was that no, you don't need to know the jargon to complain about things being forced on you. I'm not going to find lists of NYC buildings, but I think like 90% of that Pritzker prize list is ugly and about half of it is concrete boxes. It really doesn't matter to anyone living in/near them if these particular ugly concrete boxes don't count as brutalist according to architects.

Hah, this is so hidebound and backward I can't tell if you're being serious. I could just as easily say "click at the wrong time, you just deleted an hour's work!" Get real.

The difference is that in one case there's a bright line distinction between when you are and are not using the interface. Which becomes more pronounced in real world use vs tech demos. A gesture system that's comfortable when it's what you're focusing on can become much less so when you're casually using it and want to combine it with eating, drinking or just scratching your nose without accidentally triggering a gesture.

I feel like putting all this together is mixing very different categories of errors.

There are errors more common with native speakers that stem from learning the language phonetically and unconsciously, without thinking about the logic or formal meaning of what you're saying, such as "should of", then/than mixups or "irregardless".

There are errors more common with ESL people that stem from English spelling and grammar being arbitrary nonsense. It is impossible to derive irregular verb forms, of which there are many, and impossible to derive the spelling of a word from hearing it.

As a mix of both, many ESL people struggle with using the correct article because their language doesn't have an analogous concept of definite vs indefinite nouns.

And there are "errors" which are prescriptivist nonsense. By whose measure is "noone" not an acceptable compound but "someone" is? Why does the moronic norm that the comma and period at the end of my second paragraph should be inside the quotation marks persist?

I also wonder if there just is a significant population of them who kind of suck at their job, the way many of ours do, but don't face many consequences because of the inherent difficulties of evaluation.

I like this analogy specifically because spies are famous for their insane fuckups due to lack of oversight and a conviction that their ends are more than important enough to justify their means.

Shit like MKUltra or the way multiple separate US agencies have financed and supplied various militias and cartels without any control over them are public knowledge, but by the very nature of spying there's probably 5 fuckups for every one that goes public.

And those are just the big-ticket items; a spy who collects a steady paycheck while not gathering any useful info, and/or sends back fictional info because that's way less risk, is too common a WW2 story to even be notable.

But, like science, this doesn't mean that spying isn't a useful job, just good luck controlling it.

First off, you're massively overstating the unnaturalness of the mouse. It's already fully proportional movement pointing, with the added advantages of not obstructing your view with your own hand and being able to rest your hand on a supporting surface instead of having to flail around.

Second, you can already dictate essays, most people don't do that because it's a pain in the ass compared to typing. Drawing is either highly technical detail work which massively benefits from the exact precision inputs of a mouse, or largely about muscle memory which is why artists already draw on high precision pressure sensitive tablets, benefiting from practice in drawing on paper. You're proposing they remove the tablets for no real benefit.

Third, music editing, like programming, is fundamentally a fiddly task requiring talent, practice and an understanding of the field. Both already have a billion different approaches available, someone will probably hack something together for this and someone else will have it as their favorite editor. But on the margin it will turn out that no, in fact there wasn't a massive pool of people who are naturals at programming or editing except for the pesky detail of having to learn one of the billion existing options.

We get sick much less often and die much later than ever. Hate is hard to measure but death by violence is also rarer than ever. Coveting and lusting are possibly as popular as ever but much less clearly bad. Theft and rape are both, again, rarer than ever. As in that previous post, you're just going on about how the lack of perfect solutions means everything is exactly the same as centuries ago.

I didn't make any claims about how strong the evidence is in any of the cases, just that it's there and newer than a fucking millennium. It also goes the other way, every religious person expressing a personal experience of miracle is also new evidence in favor of there being a god. I think the overall evidence is absurdly in favor of there not being one, but it's even more absurd to claim this question has stayed unchanged in centuries. And by simple statistics, for e.g. novel false religions to not be evidence against christianity, a lack of such would also have to not be evidence for it. Would you honestly not take "novel false religions stopped popping up after the spread of christianity" as evidence for it?

For the mind, we can insert electrodes into the brain to make the housed consciousness go through various experiences. We can affect it much more strongly in predictable ways with various chemicals, for which we know which specific receptors they bind to. We have numerous studies of various forms of brain damage to see how they affect the conscious experience. None of these are things that would have made any sense a thousand years ago. And yet you claim they don't impact our understanding of consciousness any because Disney hasn't invented mind control rays, again the insane binary worldview.

And again, from simple statistics, the only way these things aren't evidence against a soul is if their negation also wouldn't be evidence for a soul. If drugs or brain damage could affect your motor control but not your conscious experience, for example, you'd also have to not count that as evidence for a soul separate from the body.
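The "simple statistics" here is the conservation of expected evidence: if observing E lowers P(H), observing not-E must raise it, because the posteriors average back to the prior. A minimal sketch, with all probabilities made up purely for illustration:

```python
# Conservation of expected evidence: if E is evidence against H,
# then not-E must be evidence for H. All numbers are illustrative.

def posterior(prior, p_e_given_h, p_e_given_not_h, observed):
    """Bayes update of P(H) on whether evidence E was observed."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    if observed:
        return p_e_given_h * prior / p_e
    return (1 - p_e_given_h) * prior / (1 - p_e)

prior = 0.5             # P(H): e.g. "there is a soul separate from the body"
p_e_given_h = 0.2       # P(drugs alter consciousness | H)      -- made up
p_e_given_not_h = 0.9   # P(drugs alter consciousness | not H)  -- made up

with_e = posterior(prior, p_e_given_h, p_e_given_not_h, True)      # ~0.18
without_e = posterior(prior, p_e_given_h, p_e_given_not_h, False)  # ~0.89

# The posteriors move in opposite directions, and
# P(H) = P(H|E)P(E) + P(H|~E)P(~E) guarantees they must.
print(with_e, without_e)
```

So the two directions can't be decoupled: you can only call the observation uninformative if its absence would have been uninformative too.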

Free will is either perfectly compatible - just because my brain is deterministic doesn't mean it doesn't go through a choice-making algorithm, which is what I'm experiencing - or it is currently unfalsifiable, requiring probably impossible cloning technology or time travel.

Lastly and most importantly, I don't want to hold you to previous generations of christians or care much for the many stupid views other materialists, past or present, hold. In fact, quite the opposite: I want you to acknowledge that you are vastly different from the christians of a thousand years ago, and your belief system and worldview are different from theirs, because it has been informed by an entire millennium of new evidence. Materialism can be both a breakthrough and an ideology like many others, they're not really exclusive. But unlike yours it doesn't particularly view reforming its ideology as a bad thing.

I think meritocracy is a bad name for the concept, as it implies a particular system of organization, when really it's a metric you can apply to any system. And systems are very much not all the same in how meritocratic they are.

To structure society such that intelligence is privileged over every other human trait is to create a very dumb underclass, and to reduce the average intelligence of the working class as many of the smart kids are siphoned off to the middle classes. It also naturally creates a social division between those who meet the arbitrary and changing benchmarks for "education", and those who do not.

And this is incongruous with the rest of your post, essentially "meritocracy isn't real, and it's bad that it's real". But yes, depriving lower classes of highly capable representatives is an inevitable consequence of meritocratic systems. I think that's a vastly preferable outcome to forcing these capable people into roles below their potential.

Good to see that the brazen fiction of a "paid by employer" part to hide your true tax burden is in some form alive everywhere. I realize it is too diffuse a problem to meaningfully lobby against, but it is fascinating to me that both the original post's 9M rubles and your 100k are abstractions in between the sum you cost the employer and the sum you receive.

Yes, this is actually incredibly useful. For instance even with a limited interface like Talon, I will map certain phrases or words I use frequently in my job to a keyboard shortcut, or a noise. This mapping means that I save probably ~5 minutes of work per day. Over time if we can map more of these things to even more minute/simple actions, we are looking at serious efficiency gains.

Not only is this something you can do right now on existing computers, it's much easier to do than with a noise/gesture system where the need for disambiguation makes custom definitions a much harder proposition.

Unless you're the sort of person who already has a bunch of autohotkey scripts for those tasks set up, you sure as hell aren't going to do that in a worse interface.
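The core of that kind of shortcut mapping is just a lookup table from trigger phrases to expansions; tools like Talon or AutoHotkey add the OS-level hook on top. A minimal sketch of the mapping itself, with hypothetical abbreviations and expansions:

```python
# Illustrative phrase-expansion table; "sigq" and "addrq" are hypothetical
# abbreviations, and a real tool (Talon, AutoHotkey) would trigger this from
# keystrokes or voice instead of a function call.
import re

SNIPPETS = {
    "sigq": "Best regards,\nfmaa",
    "addrq": "123 Example Street, Exampleton",
}

def expand(text: str) -> str:
    """Replace whole-word abbreviations with their stored expansions."""
    pattern = r"\b(" + "|".join(map(re.escape, SNIPPETS)) + r")\b"
    return re.sub(pattern, lambda m: SNIPPETS[m.group(0)], text)

print(expand("see addrq now"))
```

The \b word boundaries are what does the disambiguation work: a trigger only fires as a standalone token, which is exactly the property that's cheap in text and expensive in a noise/gesture interface.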

We still have ignorance, poverty, crime and war three centuries later, and in about the same amounts.

It's one thing to make arguments that the enlightenment doesn't deserve any credit for the industrial revolution, but this is straightforwardly false. We have vastly less of all of those things per capita.

Not only is such a calculation possible, it is unavoidable. Most moral goods have a material cost, and basically no human picks the maximally moral side of that tradeoff. Though it is the relative, not absolute cost that matters for people, which is why the world is getting so much better. The marginal cost of saving a human life anywhere on Earth is about 5k these days, which is orders of magnitude higher than when people started tracking things like this, as modern abundance has allowed people to actually work at saving lives on a massive scale, picking all the low-hanging fruit.

And in general your framing of the examples is exactly backwards. Increased wealth is what allows for the luxury of moral good. For example, evidently the cost of abolishing slavery (including serfdom) is too big for a pre-industrial society.

If.

It mostly strikes me as incoherent, no number of d20s can implement computation and self_made_human's output is easily distinguishable from random strings.

The first part is a subjective view. I could say that most parents expecting to see all their children reach adulthood is a meaningful change to the human condition that's less than a century old, or that steroids trade health for fitness in a way the two couldn't be meaningfully opposed in Socrates' time, but we'll likely never agree on what counts as meaningful. I think merely massively improving the human condition counts for a lot. You do also mix this with measurable aspects, as in the linked post, where you are just factually wrong.

I could argue that coveting built most of our modern prosperity and lust is why most of us are even around, by looking at the evolutionary basis for these feelings. That they have negative effects as well doesn't say what the total sign is.

I continue to not care about what outlandish predictions past materialists made which failed to materialize. It is not an ideology that requires unity of thought among its adherents, I can evaluate individual predictions under my own materialist framework. No form of materialism I respect has promised mind control rays. It's still unclear to me what your problem with the quoted original passage is. The importance is obviously subjective, but evolution by natural selection very much explains the cause of human impulses in a way unavailable before.

I agree that framing Bayesian priors in exact percentages is generally disingenuous, but that doesn't make the entire approach so. Enough people I know and respect claiming miracles would make me significantly question my understanding of the world. A non-negligible part of the internet ratsphere turning catholic did make me do so mildly, though it shifted my understanding of internet rationalists a lot more than my understanding of religion.

I have no idea how decapitation is supposed to show that the brain is the seat of consciousness over the heart, for example. Trepanation being used for mental illnesses is a much better example. But I do think there's a significant difference between using alcohol as a black box and knowing it's one of these https://en.wikipedia.org/wiki/GABAA_receptor_positive_allosteric_modulator, knowing the chemical actions by which it inhibits neuron activity and knowing that your altered conscious state is caused by inhibited neuron activity. Yes, going from this to explaining how being drunk makes you feel is a very long way, but I think it's a step in the right direction and I still don't care if some people claimed they could make the entire journey in a couple of decades or whatever.

Plenty of people on this board often complain about the deleterious effect of the superstimulus that is modern entertainment, so presumably they would vehemently disagree with your assessment. I think modern disney is mostly a massive tangle of principal-agent problems.

For the soul, I mean a metaphysical answer to the hard problem. I think most religious people's conception of a soul fits this description. So zero drugs affecting your conscious experience would be evidence for a soul. As is, many drugs do affect it, so it does in fact have physical properties and interactions.

Yes, compatibilism is equally currently unfalsifiable as any test would have to prove one version of free will over the other. I'm just saying it's a perfectly coherent model of why I experience free will, so this experience in no way contradicts materialism.

I guess it'd be more accurate to say that I'm not assuming it's incorrect, I'm conjecturing it. So prepend every bit of communication ever (for Boltzmann brains also include thoughts as communication between different mindstates across time) with "Conditional on solipsism being false,". This doesn't actually say anything about the accuracy of solipsism.

Though, for models of solipsism weaker than Boltzmann brains it's not in the least clear that Occam's Razor even prefers them. Conditional on thoughts being real and coherent across time, there actually being an inductive external reality is the simplest explanation for me experiencing one, as the vast majority of possible mindstates would not feature such experience.

I can just paste here my reply to a previous post I saw that used solipsism to defend religion: either both me and you are actual minds existing in an external reality where induction works, or the very concept of communication is nonsense. So you can presume that every piece of communication ever starts with that assumption and go from there.

To actually require that clarification in front of every single statement made by everyone is meaningless pedantry in the same category as requiring every subjective statement to start with "in my opinion".

It doesn't actually argue this since it doesn't specify any of these things, except Da Vinci's understanding of figure and form. Which I think is matched by millions of art students worldwide who've practiced figure drawing. The internet is flooded with artists of absolutely astounding technical skill by historical standards and no one cares.

He doesn't give any criteria to judge the various categories of writers by. Or even give a category for Turing and von Neumann. As computer scientists, they knew far less than any halfway competent CS student these days. And these students do not learn from their original writings because other people have since found better ways to formulate their results. As pioneers in a nascent field of science, who or what field are you comparing them to and finding the modern analogues wanting?

Many philosophers have identified religion as giving rise to science in the first place. Because at the most basic, fundamental level, belief in natural science assumes a priori that reality is ordered and knowable, a proposition one must take on faith.

As with all sorts of similar solipsistic arguments, my response is this: either both me and you are actual minds existing in an external reality where induction works, or the very concept of communication is nonsense. So you can presume that every piece of communication ever starts with that assumption and go from there.

This part is confusing two entirely separate things:

The last I checked, the distance between the equator and the north pole doesn't have any reasonable relationship to my everyday life, why should I expect units of time to?

One is the need for an independently verifiable definition of your measures, these days generally based on fundamental physical constants, instead of building your system on a prototypical example and then accumulating measurement errors outward from it. Every system needs this, and in fact your current imperial units are defined as fractions of SI units, piggybacking on the definitions work of metric.

The other is the scale of the default unit, which is completely independent from your method of definition. After deciding to base the meter on the earth's circumference the actual fraction can still be freely chosen. The meter was picked specifically as a length useful in everyday life, it's pretty much the same scale as a yard.
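The piggybacking point in concrete numbers: since the 1959 international agreement, the yard and pound are defined as exact SI fractions, so every imperial length or mass is derived from the metre and kilogram.

```python
# Imperial units as exact fractions of SI, per the 1959 international
# yard and pound agreement.
YARD_IN_M = 0.9144        # exact, by definition
POUND_IN_KG = 0.45359237  # exact, by definition

INCH_IN_M = YARD_IN_M / 36    # 0.0254 m
FOOT_IN_M = YARD_IN_M / 3     # 0.3048 m
MILE_IN_M = YARD_IN_M * 1760  # 1609.344 m

# And the yard and meter really are the same everyday scale:
print(YARD_IN_M)  # a yard is ~91% of a meter
```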

Again with this shit. Because humanity hasn't solved all its problems and answered all questions, it has actually stagnated for centuries. Millennia!

Natural selection is very much evidence against god that didn't exist a thousand years ago. People used the inexplicable miracle of life as evidence for god right up until it was explicable. Of course an implication directly leads to its contrapositive, not the negation, but I'd say the negation is usually implied in a Bayesian sense. Of course, Bayes himself is a lot more recent than a thousand years.

Every aspect of the mind that gets explained and controlled by physics and chemistry is evidence against the existence of a soul. As people learn to measure and control your every impulse and emotion by manipulating your brain, you'll continue to shift the goalposts as long as they haven't solved the hard problem. (Which religion doesn't either of course. One of the most beautiful aspects of materialism is that "I don't know" is an acceptable answer where religion pretends to knowledge it doesn't have or goes for "it is unknowable", a statement with an impossible burden of proof that has been shown wrong on innumerable topics time and again.)

Edit: and mormonism and scientology among others are new evidence against Jesus being the son of god. Any new cult with nonsense supernatural claims taken just as seriously as the old ones is evidence against the old ones being true by giving more data on the patterns of how such beliefs form.

I'd say that the usual theodicy formulation is extremely overstated, and in fact the observed universe is incompatible with a god that's just kinda potent, benevolent and scient. It is very easy to imagine limited supernatural powers falling well short of omni-anything which could vastly improve the world.

Unsurprisingly, just about the only sensible theodicy I've seen is Scott's Answer to Job, and that's a creative writing exercise, not an attempt to explain the world. Though he does now have a link to actual apologetics using this line of reasoning.

That, and because there's a bank of railguns running along the underside of the fuselage, and the rear area is taken up by life support and capacitors. Also, because I thought it looked cool! Thinking about it, though, there's no reason not to go glassless and just bury the pilot down in the guts of the craft. They'd be safer there as well...

The "hard(er) sci-fi writer explaining away rule-of-cool art" answer is that this is already the case and the dome at the front is a sensor array.

Basic information theory would suffice, unless you want me to demonstrate the concepts of meaning or the validity of induction. In which case you've retreated from your original point to the standard 'treating solipsism as a gotcha against materialists' position. This has come up so often on this board, I should come up with a catchy enough formulation to make it my flair: either any communication happens between real minds existing in an inductive external reality (including thoughts as communication across time) or the concept of communication is nonsense. So prepend any communication ever with "Conditional on solipsism being false,".