official_techsupport

who/whom

2 followers   follows 2 users
joined 2022 September 04 19:44:20 UTC
Verified Email
User ID: 122

No bio...

What are better writers in the same category though? I've heard a few names, like August Derleth, but apparently nobody reads them at all (including me).

And Lovecraft isn't that bad of a writer anyways, IMO. I've read everything he has written, twice, and enjoyed it. The only really bad story was https://en.wikisource.org/wiki/Medusa%27s_Coil which is so hilariously racist it's good actually!

Oh, that reminded me: if you want to read something really REALLY bad, check out https://en.wikipedia.org/wiki/The_Lair_of_the_White_Worm, written by Bram Stoker apparently after he had suffered a couple of strokes. Words can't do it justice.

And on the meaning side, we long ago reached the age where, per John Adams, the majority of the population could “study painting, poetry, music, architecture, statuary, tapestry and porcelain”. They choose to collect Funko Pops, play slot machines or gacha games, watch reality TV and porn.

You and everyone else here (including @DaseindustriesLtd) are way too optimistic. You envision the failure mode of a UBI program as some recipients choosing a half-time job as a cashier over composing poems, with the absolute worst possibility being them playing video games all the time.

We have had multiple attempts at UBI already, even if they weren't called that and differed in various unimportant aspects: Paris banlieues, US housing projects where 95% of the inhabitants are on the dole. Oh, how you'd wish they played vidya all day instead of filling the upper levels of their Maslow hierarchy with doing drugs, selling drugs, murdering other drug sellers, theft, robbery, general destruction of property, rape, riots, arson--every antisocial thing you can come up with, they actually do. And they form a generationally unemployed underclass: a lot of people with no respect for labor and nothing but contempt for the hand that feeds them. And they vote, besides burning cars for fun.

This is the hard problem that any UBI-like proposal has to solve, not the pedestrian stuff like not preventing people from having part time jobs or removing unnecessary barriers to getting healthcare.

Discord unleashed GPT3 (probably) as a bot on its users. We have been taunting it in our comfy Blood on the Clocktower server. The funniest thing we discovered (credit goes to @Snakes) is that it refuses to give any advice on producing paperclips.

Does anyone remember (or can google) a Slate Star Codex post where he shared his experiences doing child psychiatry, in particular the constant refrain of how psychopathic children turned out to be adopted from rape victims and the like? The closest I found was https://slatestarcodex.com/2013/11/19/genetic-russian-roulette/ but I think that the post I remember had the adoption angle in particular. It's very probable that it was just a part of a larger post.

The circumstances around the third largest non-nuclear explosion in history appear to be relevant: https://en.wikipedia.org/wiki/Port_Chicago_disaster

This reminds me how, when GPT-3 was just released, people pointed out that it sucked at logical problems and even basic arithmetic because it was fundamentally incapable of having a train of thought and forming long inference chains: it always answers immediately, based on pure intuition, so to speak. But to me that didn't look like a very fundamental obstacle. After all, most humans can't multiply two four-digit numbers in their head either, so give GPT a virtual pen and paper, some hidden scratchpad where it can write down its internal monologue, and see what happens. A week later someone published a paper where they improved GPT-3's performance on some logical test from about 60% to 85% simply by asking it to explain its reasoning step by step in the prompt; no software modification was even required.
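The pen-and-paper point can be made concrete with a toy scratchpad: instead of demanding the product of two four-digit numbers in one shot, decompose it into partial products that are each easy on their own, then sum them. This is just an illustration of the idea, not any actual GPT mechanism; all names here are made up.

```python
def multiply_with_scratchpad(a: int, b: int) -> tuple[int, list[str]]:
    """Multiply two numbers the long way, logging each easy step."""
    scratchpad = []
    total = 0
    # Break b into digits; each partial product is a single "intuitive" step.
    for place, digit_char in enumerate(reversed(str(b))):
        digit = int(digit_char)
        partial = a * digit * 10 ** place
        scratchpad.append(f"{a} x {digit} x 10^{place} = {partial}")
        total += partial
    scratchpad.append(f"sum of partials = {total}")
    return total, scratchpad

result, steps = multiply_with_scratchpad(1234, 5678)
assert result == 1234 * 5678  # the chain of easy steps solves the hard problem
```

None of the individual steps requires holding a big intermediate result "in the head"; the scratchpad carries the state, which is exactly what the chain-of-thought prompt gives the model.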

I think that that, and what you're talking about here, are examples of a particular genre of mistaken objection: yes, GPT-3+ sucks at some task compared to humans because it lacks some human capability, such as an internal monologue, long-term episodic memory, or the ability to see a chessboard with its mind's eye. But such things don't strike me as fundamental limitations, because, well, just implement them as separate modules and teach GPT how to use them! They feel like separate modules in us humans too, and GPT seems to have solved the actually fundamental problem: having something that can use them, a universal CPU that can access all sorts of peripherals and get things done.

Any amount of alcohol temporarily reduces intelligence and precision in your physical movements - a tiny bit if buzzed, a lot if drunk.

Not true: alcohol is considered a PED and is banned in shooting competitions, precisely because it steadies the hands: http://www.faqs.org/sports-science/Sc-Sp/Shooting.html

I stumbled upon this post, https://www.lesswrong.com/posts/cgqh99SHsCv3jJYDS/we-found-an-neuron-in-gpt-2, where the authors explain that they found a particular "neuron" whose activations are highly correlated with the network outputting the article "an" versus "a" (they also found a bunch of other interesting neurons). This got me thinking: people often say that LLMs generate text sequentially, one word at a time, but is that actually true?

I mean, in the literal sense it's definitely true: at each step a GPT looks at the preceding text (up to a certain distance) and produces the next token (a word or a part of a word). But there's a lot of interesting stuff happening in between, and as the "an" issue suggests, this literal interpretation might be obscuring something very important.

Suppose I ask a GPT to solve a logical puzzle with three possible answers: "apple", "banana", "cucumber". It seems more or less obvious that by the time the GPT outputs "The answer is an ", it already knows what the answer actually is. It doesn't choose between "a" and "an" randomly and then fit the next word to match the article; it chooses the next word somewhere in its bowels, then outputs the matching article.

I'm not sure how to make this argument more formal (and force it to provide more insight than "it autocompletes one word at a time" does). Maybe it could be dressed up in statistics: suppose we actually ask the GPT to choose one of those three plants at random; then we'd see it output "a" two-thirds of the time, which tells us something.
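The statistical version can be sketched in a few lines: if the model has already committed internally to a uniform choice among the three answers, the article distribution is just the marginal over which answers start with a vowel. The probabilities below are assumptions for illustration, not measured GPT outputs.

```python
# Toy model: the network has internally decided on an answer with these
# probabilities, and only then emits the matching article.
answer_probs = {"apple": 1 / 3, "banana": 1 / 3, "cucumber": 1 / 3}

def article_for(word: str) -> str:
    # Crude vowel rule, good enough for these three words.
    return "an" if word[0] in "aeiou" else "a"

# Marginal article distribution implied by lookahead:
article_probs = {"a": 0.0, "an": 0.0}
for word, p in answer_probs.items():
    article_probs[article_for(word)] += p

assert abs(article_probs["a"] - 2 / 3) < 1e-9   # banana, cucumber
assert abs(article_probs["an"] - 1 / 3) < 1e-9  # apple
```

Observing exactly this 2/3 : 1/3 split in the sampled articles would be evidence that the article choice is downstream of the answer choice, not the other way around.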

Or maybe there could be a way to capture a partial state somehow. Like, when we feed the GPT this: "Which of an apple, a banana, and a cucumber is not long?" it already knows the answer somewhere in its bowels, so when we append "Answer without using an article:" or "Answer in Esperanto:" only a subset of the neurons should change activation values. Or maybe it's even possible to discover a set of neurons that activate in a particular pattern when the GPT might want to output "apple" at some point in the future.
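One standard way to "capture partial state" like this is a linear probe: record the hidden activations at the article position and check whether any direction in them predicts the word that comes later. Below is a toy version on purely synthetic activations (numpy only, no real model, and the "apple neuron" at index 3 is planted by hand), just to show the shape of the method.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, hidden_dim = 200, 16
# Synthetic label: will the model output "apple" a few tokens later?
will_say_apple = rng.integers(0, 2, size=n_samples)

# Synthetic "hidden states at the article position": pure noise, except
# dimension 3, which we deliberately correlate with the future word.
hidden = rng.normal(size=(n_samples, hidden_dim))
hidden[:, 3] += 2.0 * will_say_apple

# Probe: correlation of each hidden dimension with the future-token label.
centered = hidden - hidden.mean(axis=0)
label_centered = will_say_apple - will_say_apple.mean()
corr = centered.T @ label_centered / (
    np.linalg.norm(centered, axis=0) * np.linalg.norm(label_centered)
)

best = int(np.abs(corr).argmax())
assert best == 3  # the probe recovers the planted "apple lookahead" neuron
```

On a real model the labels would come from which word the GPT eventually emitted, and finding a dimension like this would be exactly the "set of neurons that activate when the GPT might want to output 'apple' in the future".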

Anyway, I hope I have justified my thesis: "it generates text one word at a time" oversimplifies the situation to the point where it might produce wrong intuitions, such as the intuition that when a GPT chooses between "a" and "an" it doesn't yet know which word will follow. While it does output words one at a time, it must hold a significant lookahead state internally (which, by the way, it regenerates every time it needs to output a single word).

Fun thing, this question threw me for a loop:

Socially speaking, you tend to be more

Social leftists tend to prefer lower government involvement in social issues, for example allowing drugs and abortions. Social rightists tend to prefer higher government involvement in social issues, for example outlawing sex work or obscenities.

I would prefer a lower socially mandated conformity (not necessarily via the government) on social issues, which happens to favor the "left" side currently. Like, on the 2d political compass I'd be left-libertarian, but absolutely not left-totalitarian.

The ST can give you the same Savant info: welcome to Groundhog Day. The Fortune Teller and the like, which get to choose what info they receive, become a bit OP. On the other hand, the evil team gets to redo their actions too, in light of what's revealed. Or kill the Timekeeper if it's too scary.

It's not really OP in my opinion. It's sort of like a gimped Professor: it resurrects a player, but only the last executee. And like the Professor, if it's out, the Demon can just kill him. But on the other hand you get a whole day of info about who nominated whom and who voted for whom, so it could be incredibly strong.

I asked Bing AI to help me make a Blood on the Clocktower character, here's the result: https://i.imgur.com/ZXqkSAP.png

It's an actually interesting character. I discussed it with my pals and they thought it was quite overpowered, if anything.

Also, it was a flash in the pan: it took me a while to convince the AI to help me (it kept insisting that it was not a game designer, for some reason), then I got this, then I got about a dozen nonsense/boring suggestions.

On a related note, come play with us in our Blood on the Clocktower discord! https://discord.gg/wJR87pjK

It's a variation on Mafia/Werewolf but with several important distinctions that make it superior, and especially superior for internet games, and even more superior for text games with 24h/game day (but we also play voice games sometimes btw!).

First of all, everyone gets a character with an ability. Abilities are designed to be interesting and include stuff like "if you die in the night, choose a player, you learn their character". Second, dead players' characters are not announced, they can still talk with the living, and retain one last ghost vote, so if you get killed you're still fully in the game and maybe even more trusted. So you get games where everyone is engaged from the very start--because you want to privately claim your character, maybe as one of three possibilities, to some people--to the very end when you cast your ghost vote for who you think is the demon.

Lately we've had some rdrama people join (including Carp himself!), so it would be nice to balance their deviousness and social reads by having more TheMotte folks. We were historically very balanced: https://i.imgur.com/gcotalV.png

My favorite voice game (not our group, but we have had similar shit going down): https://youtube.com/watch?v=r9BNc-nDxww&list=FLRMq6rziC28by3Xtvl8VcEg&t=246

This reminded me of a note from the Talos Principle:

You know, the more I think about it, the more I believe that no-one is actually worried about AIs taking over the world or anything like that, no matter what they say. What they're really worried about is that someone might prove, once and for all, that consciousness can arise from matter. And I kind of understand why they find it so terrifying. If we can create a sentient being, where does that leave the soul? Without mystery, how can we see ourselves as anything other than machines? And if we are machines, what hope do we have that death is not the end?

What really scares people is not the artificial intelligence in the computer, but the "natural" intelligence they see in the mirror.

I'm probably a lot more willing to entertain HBD or even JQ stuff simply because asking a good-faith question about either topic (and others like them) gets you shouted down, ostracized, blacklisted, etc.

It's not even some psychological bias, it's a legitimate heuristic. A position can be defended with facts/logic/reason, or with appeals to authority, social pressure, and threats. A position that is true can be defended with both; a position that is false is much more easily defended with the latter. If some position is defended pretty much exclusively with the latter, that's good evidence that it is false.
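The heuristic can be phrased as a likelihood-ratio update. The numbers below are made-up priors and likelihoods, purely to show that observing "defended only by social pressure" shifts the posterior toward "false".

```python
# Made-up numbers for illustration only.
p_true = 0.5                   # prior that the position is true
p_pressure_given_true = 0.2    # true positions rarely NEED pure pressure
p_pressure_given_false = 0.8   # false ones often resort to it

# Bayes' rule: P(true | defended only by pressure)
evidence = (p_pressure_given_true * p_true
            + p_pressure_given_false * (1 - p_true))
posterior_true = p_pressure_given_true * p_true / evidence

assert abs(posterior_true - 0.2) < 1e-9  # 0.1 / 0.5 = 0.2
```

The exact numbers don't matter; as long as pure-pressure defense is more likely for false positions than true ones, seeing it lowers your credence.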

Especially in comparison with the whole rising-from-the-grave stuff lol.

"Never let me go" is very fucked up, I'm not sure there's another book that touched me so deeply. Actually, when I try to recall anything similar, certain moments of "The Talos Principle" come to mind, in how it builds a very relatable world and then force kicks you into the Acceptance stage of grief about it while you're utterly unprepared.

Check out Medusa's Coil, the ending is so racist it's actually hilarious!

Bing tries to provide references.

My insight was that intuition is analytical thinking encoded.

No, absolutely not. You can train intuition (think reflexes, like playing tennis) without any analytical thinking at all. Animals do it, no problem.

The main point of analytical thinking is to provide a check on intuition for when it goes wrong. Like, you encounter an optical illusion: a fish in the water appears farther away than it is, so to spear it properly you need to aim closer. "What in heck, my eyes deceive me" is where the improvement starts.

Pirate metal is pretty upbeat. Alestorm - Fucked With an Anchor for example!

I don't believe there are very clever things one can do to ensure anonymity. (Maybe LLM instances to populate correlated but misleading online identities? Style transfer? I'll use this as soon as possible though my style is... subjectively not really a writing style in the sense of some superficial gimmicks, more like the natural shape of my thought, and I can only reliably alter it by reducing complexity and quality, as opposed to any lateral change).

Reminds me of that joke about a janitor who looked exactly like Vladimir Lenin. When someone from the Competent Organs suggested that it's kinda untoward, maybe he should at least shave his beard, the guy responded that of course he could shave the beard, but what to do with the towering intellect?

This is what a high-trust society feels like.

The most interesting case I personally experienced was when I booked a small hotel 1 km from the center of Tallinn. I was arriving after midnight, so I asked them if that was OK, and they said they would leave the front door unlocked and my key on the reception desk. Which they did. And, like, there was at least the computer on the reception desk, and who knows what else to steal, but apparently it was a good neighborhood. Needless to say, there were no checks whatsoever regarding the breakfast.

I am of the same tribe as those Russians, and they're calling to commit murder in my name too – in a certain twisted and misguided sense; in the name of the glory of the Empire that stubbornly sings in my blood. Leonard Cohen sang: «I'm guided by the beauty of our weapons» (obligatory Scott) and I see where he was coming from.

Pls differentiate between the glory of having your Empire (probably experienced vicariously) step on the faces of lesser surrounding nations as a terminal goal, and the aesthetics of deadly weapons, high morale, all that.