
Friday Fun Thread for November 7, 2025

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


Anyone else finding the new Kimi to be kind of overrated, at least by the standards of the 'wow, closed source is fucked' sentiment I see on Twitter? I did a couple of creative writing challenges and found it significantly inferior to Sonnet, which is perfectly reasonable given the price differential. I gave Sonnet an example of one of Scott's 'house party in San Francisco' pieces and told it to write a similar one without plagiarizing the ideas from the first (something AIs seem to struggle with: if you fill up the context and tell them to draw inspiration from it without plagiarizing, they struggle). Sonnet could do that; Kimi didn't. Sonnet knows what a text adventure is and lets the user fill in the actions for the character; Kimi will make up its own actions. Its logical abilities were pretty good though, somewhere around Grok 4 and Sonnet.

Is this another coding-maxxed model? I gave it a little drawing-with-CSS test and it wasn't as good as Sonnet and much worse than Opus 4.1. In short, I guess I don't really believe the benchmark figures, and I certainly don't believe in 'Artificial Analysis', which just aggregates benchmarks together. Kimi is cost-efficient and pretty good, but not highly performant, I think.

Opinions on Kimi Thinking generally?

Interesting turns of phrase and very good for atmosphere (or at least the descriptions are novel for now) but it gets details wrong and steers all over the place.

David Chapman has an interesting podcast out discussing the work of Jordan Peterson and how his book Meaningness relates. https://meaningness.substack.com/p/maps-of-meaningness

I don't like Chapman's casual dismissal of 'eternalism' or the idea of an ultimate Truth, but I do find him an engaging thinker. More thoughts here: https://x.com/Thomasdelvasto_/status/1986911832192229657?s=20

Much ink has been spilled here over the dreaded em-dash and other hallmarks of AI writing. But what other linguistic pet peeves do you have?

I ask because I just found myself fuming over the widespread confusion between "jealousy" and "envy." People tend to use them as synonyms (more often simply using jealousy for both terms), but the two words describe emotions that I think deserve to be distinguished. Jealousy is felt over things that rightfully belong to you, while envy is felt over things which do not. God is jealous; you are envious. Being jealous is still generally bad, but it's nowhere near as bad as envy. As a child who was bad at sharing but generally pretty good about being happy about the good fortune of others, it has always bothered me how few people seem to grasp the distinction.

-People are forgetting past perfective. You can find oodles of Youtube videos titled "What I wish I knew before I started (whatever undertaking)" and every one of them means "What I wish I had known."

-Fewer vs less. "I got less chances in that game" is not a thing. Makes you sound like a 5-year-old, right up there with "How much couches do you have."

And to all the cool aunt, "AKshually language evolves" descriptivists, this change entails a loss of possible meanings and is bad. I know "deer" used to mean "any animal" and "corn" used to mean "any grain," etc but when those words changed usage it became possible to express MORE thoughts because the language became more specific. My examples, and the examples that stodgy prescriptivists mostly complain about, all involve a blurring of meanings, which in 99% of cases entails blurring of thought (both as cause and then again as consequence). Do you feel like we have an excess of clear thought out there nowadays? Of course not! Do your part- join the prescriptivists. Make language specific again! SEIZE THE MEANS OF INFLECTION!!!!!!!!

One more: "Have a good rest of your day" is rampant in Canada and has almost completely replaced "Have a good day" among customer service workers under 30 years old. To wish anyone anything implies that you wish it for the future. Are they worried that I might think they're wishing that the past of my day, up to the point of our interaction, had gone (or more likely "went") well? What happened to these people?

People are forgetting past perfective. You can find oodles of Youtube videos titled "What I wish I knew before I started (whatever undertaking)" and every one of them means "What I wish I had known."

This is so funny to me. I remember being a 10-year-old kid taking extra English lessons, getting those tenses drilled into my brain, only to then move to America a few years later and never hear anyone use them outside of English class. I'm pretty certain 12-year-old immigrant me was more knowledgeable about English grammar than some of the teachers.

My examples, and the examples that stodgy prescriptivists mostly complain about, all involve a blurring of meanings, which in 99% of cases entails blurring of thought (both as cause and then again as consequence).

What is the blurring of meaning in a sign at the supermarket saying "10 items or less?"

it became possible to express MORE thoughts because the language became more specific.

What thoughts is it possible to express now that "corn" refers to a specific new world crop rather than to all grains that were impossible to express before?

As a Canadian former customer service worker who has said that exact line, here's what's going on:

When you work one of those jobs, the set of polite greetings and goodbyes all reach semantic satiation. You've said "Have a nice day" a thousand times across a hundred shifts. The words are no longer communication. They're a button you press to process a customer, like the code to unlock the PoS terminal, or the lever to open the cash register. eye contact, fake smile, take card, tap card, print receipt, pass receipt, pass bags, eye contact, fake smile, "haffaaniceeddaaaaay", greet next customer. eye contact. fake smile..

and every so often, something shakes you out of this dissociative trance and you realize your limbs are working on autopilot like they're connected directly to the gears of capitalism, and you've been saying "haffaanicedaaaay" the last 63 transactions (more? you can't remember). With a jolt of existential horror, you scramble to just wrest control back and say something, anything else. "Have a" (oh no. you can already feel your tongue slipping back into the well worn groove) "..good rest of your day!". Sure, a little awkwardly phrased, but you hope they appreciate the fact that you composed it just for them. You give them a real smile, real eye contact. Did you do it right? Did you do a good customer service?

You probably did. Pat yourself on the back. That was a nice. Maybe you'll say it to the next customer..

Weary.

No, you're not tired, you're wary.

I only ever see people use it wrong in one direction, and it's infuriating.

They're probably thinking of leery.

Oh man I have several.

  • "I could care less"
  • "For all intensive purposes"
  • Misuse of "literally" to mean "figuratively"
  • Saying "an homage". My brother in Christ, the first sound in "homage" is an H, not a vowel. You should say "a homage". Technically this one is more the mispronunciation of "homage" than the grammar rule being used wrongly

All of those get under my skin quite a bit. I just ignore it because nobody likes a grammar Nazi to correct them, but they do annoy me.

Wait you pronounce the h in homage, brother? I never have. But I can remember many instances where I spoke a word that I had only ever seen written down and was immediately ridiculed.

If you don't, presumably you don't say "A honest man?"

I always hated when I'd ask someone the time and they'd say "quarter til," "half past nine," "ten before din'," "twenty before honey," "forty-six before the shits," etc. Just tell me the fucking time, damn it.

the first sound in "homage" is an H, not a vowel

No, it’s not. At least not if you’re American.

I'm an American, and yes it is.

I'm also American and it's not. I've literally never heard the H pronounced, including online, so it's not a regional thing.

There's a special place in hell reserved for people who write "could of" instead of "could have" and they should be sent there right now.

For the record it's not "rock, paper, scissors," it's "scissors, paper, rock." Whoever it was who duped the new generation to say it backwards should be caned.

In "guu, choki, paa" (janken, the Japanese version, used, I sometimes believe, to make every decision of import throughout Japanese history) the order is actually "rock (guu), scissors (choki), paper (paa, or the outstretched hand)".

They should of been sent there a long time ago.

AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA!

You evil bastard!

Yeah, they're all total looser's.

For two hundred years people like you have been trying to squelch English speakers' opportunities to feel like fancy snooty French people once in a while. Well, you see the trajectories at the end of that graph? No more! Our time is now. We're not even saying "OM-idj" now, oh no, you lost that chance at compromise. Our speech will now be a full-throated "oh-MAHZH" to the romance languages!

Very well, the gauntlet has been thrown down. Let there only be enmity between us from this day forward!

Saying "an homage". My brother in Christ, the first sound in "homage" is an H, not a vowel. You should say "a homage". Technically this one is more the mispronunciation of "homage" than the grammar rule being used wrongly

You're mistaken. Those people are saying "an hommage".

I mean... maybe in some cases, but people do write "an homage" all the time. So either they are pronouncing "homage" wrong, or they are getting the grammar rule for a/an wrong.

It irks me a tiny little bit that literally everyone uses the hyphen-minus (-) rather than the actual hyphen (‐), which Unicode did expend the effort to disunify.

Because of its prevalence in legacy encodings, U+002D - HYPHEN-MINUS is the most common of the dash characters used to represent a hyphen. It has ambiguous semantic value and is rendered with an average width. U+2010 ‐ HYPHEN represents the hyphen as found in words such as “left-to-right”. It is rendered with a narrow width. When typesetting text, U+2010 HYPHEN is preferred over U+002D HYPHEN-MINUS.

The Arial font doesn't even have a hyphen character!

On the other hand, though, the two characters seem visually indistinguishable—far from the significant difference between hyphen-minus (-) and minus (−).

I was going to quip that I don't have two different keys on my board, but I absolutely do. I have both - and -, from the number line above the keyboard, and the numberpad on the right. Still, I don't see much difference. See if you can see anything:

-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|


-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|


-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|

They're both hyphen minus, aren't they?

They're both hyphen-minus, aren't they?

Yes.

The caveat here is that if you're using anything other than a system font (as you should be), the actual hyphen is going to be mapped to U+002D and U+2010 won't be used. In fact, the only two fonts I could find that preserve this distinction were Calibri (used here) and Times New Roman. U+2010 generally shouldn't be used because most word processing programs treat U+002D as the actual hyphen, which means that any time the program needs to recognize a hyphen for formatting purposes it will look for U+002D. And if you have a justified right edge with hyphenation on, it's going to insert U+002D anyway, so if you're going to be a purist you'd better be prepared to hyphenate manually. Especially since the software will naturally break at the hyphen, which could theoretically result in two consecutive hyphens, or a hyphen on either side of the break, if you insist on using U+2010.

The differentiation is a relic of early Unicode moving away from the old ASCII system, where, with only 128 characters available, you had to double up. But nobody uses fonts that were designed for ASCII anymore, and there's no reason to make a slightly beefier hyphen for use as a subtraction symbol. The distinction has been deprecated by modern technology.

I would also note that the same is more or less true for the actual subtraction symbol, though the proper substitute is not the hyphen-minus but the en dash (U+2013). I'm more of a purist about this one, but like U+2010, it's also available only in a limited number of system fonts. It generally rides a little lower than the en dash, and if they look similar enough and you're that particular you can sub in the subtraction symbol from Times New Roman (Calibri's is too rounded to match most fonts), but in some of the newer fonts that are a little more daring, like Signifier, it's best to just use the en dash.
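If you want to check which of these look-alikes a given document actually contains, here's a quick Python sketch (purely illustrative) that prints each character's code point and official Unicode name:

```python
import unicodedata

# The four look-alike characters from this subthread.
for ch, note in [
    ("\u002D", "hyphen-minus, the one on every keyboard"),
    ("\u2010", "the 'real' hyphen"),
    ("\u2013", "en dash"),
    ("\u2212", "the 'real' minus sign"),
]:
    print(f"U+{ord(ch):04X}  {ch}  {unicodedata.name(ch)}  ({note})")

# To see which one a string actually uses, list its non-alphanumeric characters:
sample = "left-to-right"
print([f"U+{ord(c):04X} {unicodedata.name(c)}" for c in sample if not c.isalnum()])
```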

The user wants me to reverse the word "ENTROPY" step by step.

The word is: E N T R O P Y
Reversed means: Y P O R T N E

Let me do this step by step:
Start with the original word: ENTROPY
Reverse it letter by letter:
The last letter is Y
The second to last letter is P
The third to last letter is O
The fourth to last letter is R
The fifth to last letter is T
The sixth to last letter is N
The seventh (first) letter is E
So the reversed word is: YPOR TNE (wait, let me check that)

Actually, let me list it more carefully:
Original: E N T R O P Y
Position: 1 2 3 4 5 6 7
Reversed order (position 7 to 1):
Position 7: Y
Position 6: P
Position 5: O
Position 4: R
Position 3: T
Position 2: N
Position 1: E
So the reversed word is: YPO R T N E (no spaces, just YPORTNE)

Let me verify:
E N T R O P Y
Y P O R T N E
Yes, that looks correct. The reversed word is "YPORTNE".

Now I need to show this step by step as requested.
Step-by-step reversal:
Identify the original word: ENTROPY
Count the letters: 7 letters
Write the last letter first: Y
Add the second-to-last letter: YP
Add the third-to-last letter: YPO
Add the fourth-to-last letter: YPOR
Add the fifth-to-last letter: YPORT
Add the sixth-to-last letter: YPORTN
Add the seventh-to-last (first) letter: YPORTNE
Final result: YPORTNE

Here is the step-by-step reversal of ENTROPY:
Original word: E N T R O P Y
Step 1: Start with the last letter → Y
Step 2: Add the 6th letter → YP
Step 3: Add the 5th letter → YPO
Step 4: Add the 4th letter → YPOR
Step 5: Add the 3rd letter → YPORT
Step 6: Add the 2nd letter → YPORTN
Step 7: Add the 1st letter → YPORTNE
Reversed word: YPORTNE
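(For contrast, the deterministic check is a one-liner; illustrative Python:)

```python
word = "ENTROPY"
print(word[::-1])  # -> YPORTNE
assert word[::-1] == "YPORTNE"  # slicing with a step of -1 reverses the string
```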

The only model I bothered to try, the new Kimi K2 Thinking, one-shotted this, using the prompt "Reverse the word ENTROPY step by step."

Ergo, LLMs aren't stochastic parrots. I'm glad we're all on the same page here.

Well, it's a variation of the goat fucker problem. You can be an upstanding citizen your whole life, but if you fuck one goat, you're still a goat fucker. Similarly, it doesn't matter how many complex problems you can correctly solve; if you say that "entropy" spelled backwards is "yporrrtney" even once (especially after a long and seemingly lucid chain of reasoning), it's going to invite accusations of stochastic parrotism.

Humans make mistakes too, all the time. But LLMs seem to make a class of mistakes that humans usually don't, which manifests as them going off the rails on what should be simple problems, even in the absence of external mitigating factors. The name that people have given to this phenomenon is "stochastic parrot". It would be fair for you to ask for a precise definition of what the different classes of mistakes are, how the rate of LLM mistakes differs from the expected rate of human mistakes, how accurate LLMs would have to be in order to earn the distinction of "Actually Thinking", etc. I can't provide quantitative answers to these questions. I simply think that there's an obvious pattern here that requires some sort of explanation, or at least a name.

Another way of looking at it in more quantifiable terms: intuitively, you would expect that any human with the amount of software engineering knowledge that the current best LLMs have, and who could produce the amount of working code that they do in the amount of time that they do, should be able to easily do the job of any software engineer in the world. But today's LLMs can't perform the job of any software engineer in the world. We need some way of explaining this fact. One way of explaining it is that humans are "generally intelligent", while LLMs are "stochastic parrots". You're free to offer an alternative explanation. But it's still a fact in need of an explanation.

Of course this all comes with the caveats that I don't know what model the OP used, a new model could come out tomorrow that solves all these issues, etc.

what model the OP used

I'm >80% confident that OP didn't use an LLM, and this is an attempt by the Mk 1 human brain at parody.

(Since I'm arguing in good faith here, I won't make the obvious connection to n>1 goatfucking)

The version of the stochastic parrot you describe here is heavily sanewashed.

In the original 2021 paper On the Dangers of Stochastic Parrots, Bender et al. use “stochastic parrot” as a metaphor for large language models that:

  • are trained only to predict the next token from previous tokens (string prediction),

  • stitch together word sequences based on learned probabilities from their training data,

  • do this without any reference to meaning, communicative intent, or a model of the world or the reader

The first two points? They're just how LLMs work. The third is utter nonsense.

We know that LLMs have world-models, including models of the reader. In some aspects, like "truesight", they're outright superhuman.

Of course, even Bender's version isn't the same as the more pernicious form polluting memeplexes, that is closer to:

People saying “it’s just a stochastic parrot” to mean “this is literally just a fancy phone keyboard, nothing more,” full stop.

Or, a claim that they can't reason at all. This ignores that even a pure next-token predictor trained at scale develops nontrivial internal representations and systematic behavior, whether or not you want to call that "understanding." Once again, there's real structure in there, and things that, if you aren't allowed to call them world models, I have no idea what counts.

What I find the most annoying is the form that can be summed up as: "by definition any next-token predictor cannot understand, so anything it does is parroting.”

That is smuggled in as a definitional move, rather than argued from empirical behavior or cognitive theory.

If you look closely, none of these objections can even in principle be surmounted by addressing the issues you raise.

LLMs stop making mistakes at higher rates than humans? Nope.

They stop making "typical" LLM mistakes? Nope.

The Original Sin remains. Nothing else can matter.

Another way of looking at it in more quantifiable terms: intuitively, you would expect that any human with the amount of software engineering knowledge that the current best LLMs have, and who could produce the amount of working code that they do in the amount of time that they do, should be able to easily do the job of any software engineer in the world. But today's LLMs can't perform the job of any software engineer in the world. We need some way of explaining this fact. One way of explaining it is that humans are "generally intelligent", while LLMs are "stochastic parrots". You're free to offer an alternative explanation. But it's still a fact in need of an explanation.

Just because some words/concepts are fuzzily defined isn't a free pass to define them as we please. The "stochastic parrot" framing is nigh useless, in the sense that it is terrible at predicting, both a priori and a posteriori, the specific strengths and weaknesses of LLMs vs humans. All powerful systems have characteristic failure modes. Humans have aphasias, change blindness, confabulation, motivated reasoning, extremely context-dependent IQ, and so on. We allow this, without (generally) denying the generality of human intelligence. I extend the same courtesy to LLMs, while avoiding sweeping philosophical claims.

Once again, I can only stress that your definition is far more constrained than the norm. Using the same phrase only invites confusion.

Also illustrative is the fact that OP (very likely) didn't use an LLM to produce that. Because LLMs from the past year generally (or near certainly for SOTA) wouldn't do that. It's nothing more than a shibboleth.

The first two points? They're just how LLMs work. The third is utter nonsense.

The first point is not how any production LLM has been trained for years now. Post training is not next token prediction.

Video game thread

I'm still playing BG3. Around 17 hours in. Progress is kinda slow, not because it's really difficult or boring but because there are so many items to inspect, notes and books to read, traps to spot and disarm, morals to ponder, battle decisions and build decisions to make. I'm enjoying it though. I killed one of the goblin leaders before heading downwards. I'm doing lots of stuff in the Underdark. Picked up a sword that can sing, and killed a bunch of minotaurs and duergar dwarves.

I'm playing Sins of a Solar Empire 2, still.

In my general opinion it is shaping up to be a masterpiece of the 4x/RTS genre.

3 different factions, each with two subfactions. Each faction has different specialties, and the subfactions tend to be focused on either an aggressive or a defensive strategy. So you have ample options for choosing your preferred playstyle for a given match.

Each Faction/Subfaction has an array of ship types and a decent selection of capital ships, and a dizzying number of techs to research to boost those ships' performance. And each faction has very different strengths and weaknesses when it comes to economy.

And the devs are set to release a new fourth faction, as well as the LONG-anticipated campaign mode, which will finally answer one of the core questions of the lore from the original game.

Finally, the core combat mechanic is battles between "Fleets", and the fact that EVERY projectile a ship fires is actually simulated in 3D space, along with some surprisingly complex damage calculation, means there's some extra strategic depth in which ships you've chosen to compose your fleet(s) and which techs you've chosen to optimize their performance.

This means it's not quite a "Rock-Paper-Shotgun-Laser-Nuke" situation where every attack has a direct counter and you just keep leveling your units until you win. It is possible for a giant deathball fleet to lose to a smaller force if the smaller force is optimized precisely enough to defend against the ship types it's facing. And there are several mechanics that allow you to quickly augment your fleet's strength at opportune moments.

The upshot is that the outcome of battles can be relatively unpredictable, and you do NOT need a higher APM to micromanage your way to victory if you are successful at scouting out the opponent and predicting and countering their strategy. Although high APM helps. And in any situation with 3 or more players, the exact mix of factions and ships being thrown around can force a complete mid-match re-evaluation of said strategy. Finally crushing the guy who was pumping out dozens of cheap ships to harass you feels great until the third guy rolls up with a wall of heavy cruisers backed by support ships to start wrecking your infrastructure.

My one main fault with it at present is the unwieldy and unintuitive state of the tech tree, which makes it hard to learn for new players and kind of 'forces' a certain playstyle on you until you can get enough research to unlock the techs you actually want/need.

Yet the variable scale of the game means you can play a quick 30-minute to 1-hour match where the later techs aren't even needed, or you can do a 6+ hour epic with hundreds of planets and multiple star systems that ends with planet-killer railguns, hundreds of ships duking it out at once, and beastly Titan warships that can delete whole fleets in short order.

Anyway, it's a very fun game, and I'd host some sessions for Mottizens who would be interested. It's sadly not as popular as it deserves to be.

I'm glad others are playing! Sins 1 was a masterpiece, and my basic verdict on Sins 2 has been "it's more of the same and that's perfect". The only change I don't like in Sins 2 is the removal of the pirates mechanic - while they still exist, I just don't find the current form as fun as the way they worked in the first game. Otherwise it's the perfect sequel in my eyes.

I'm curious, do you have any good guides on the strategic considerations of fleet composition? It's unlikely to be necessary for me (as I only play AI matches and don't touch MP), but I'd be interested to learn more about the game. My fleets tend to be a random mishmash of ships without any real deep strategic consideration behind it, so I'm sure I have a lot to learn.

Man, I'm still struggling with optimal fleet composition for TEC myself.

You can delve into like full-on spreadsheet mania with it, but I genuinely think the number of possible combinations ultimately makes it impossible to really calculate once the game hits a certain size.

One reason I like TEC is that by midgame if your economy is running well, you can spit out whatever ships are needed to deal with the current threat very quickly, so you're replacing lost ships and optimizing your composition on the fly. "Oh shit that's a lot of strikecraft, better send some Flak Frigates in."

You want your fleet's pierce to be able to overwhelm their fleet's durability. Here's the basic rundown: you can sort of kind of ignore the "supply" number if you can tell at a glance that the ships they've sent in don't have the requisite pierce to focus down your ships' health given your ships' durability. That is, even if you were to start losing, you can likely retreat and not take too many losses since their effective DPS is low.

If they've got a lot of durable ships in their fleet, you gotta bring as much pierce as possible.

If there are any stats in the game worth memorizing, it's the durability rating of each ship. I mentally have them sorted into buckets of "High, Medium, Low" durability so I don't have to do actual math in my head.
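(A purely illustrative toy sketch of that mental math, in Python; the ship names, bucket scores, and the simple pierce-vs-durability comparison are invented for the example and are not the game's actual damage formula:)

```python
# Toy model of the "pierce vs. durability buckets" heuristic described above.
# NOT the game's real damage math; ship names and numbers are made-up examples.
DURABILITY_BUCKET = {"corvette": "Low", "flak_frigate": "Low",
                     "carrier": "Medium", "heavy_cruiser": "High"}
BUCKET_SCORE = {"Low": 1, "Medium": 2, "High": 3}

def rough_call(my_fleet_pierce, enemy_fleet):
    """enemy_fleet maps ship type -> count; returns a rough engage/retreat call."""
    enemy_durability = sum(BUCKET_SCORE[DURABILITY_BUCKET[ship]] * n
                           for ship, n in enemy_fleet.items())
    return "engage" if my_fleet_pierce > enemy_durability else "retreat or reinforce"

print(rough_call(my_fleet_pierce=40, enemy_fleet={"heavy_cruiser": 10, "carrier": 3}))
# -> engage (40 pierce vs. a durability score of 36)
```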

So I'll share my basic approach.

I like to have a wall of higher durability ships as the 'core' of my fleet. I tend to rely on Carriers in the early game, which is to say I have them sit back and send strikecraft in to do the dirty work, so I just want to have a physical shield to keep the enemies at bay.

Then I have to make some decisions, based on what it appears the enemies are fielding. If I'm dealing with strikecraft, the aforementioned flak frigates. If they've got high DPS capital ships, I will probably produce a TON of Corvettes since those help keep the Caps occupied and not killing my more valuable ships (note: doesn't work as well on human players). If they're fielding tough ships with a lot of support: Missiles. Lots of missiles.

Then pick your own caps based on whether you're being more aggressive or defensive. Or, if you like, if you're focusing on killing as much as you can as fast as you can, or if you need survivability (i.e. you're sending a fleet deep into enemy territory and it needs a lot of repair capabilities).

Then add in ship items for your capital ships based on what the enemy is likely to throw at them.


The one big 'insight' I've had that I THINK was fully intended by the Devs was that they have made the default supply cap pretty strict to prevent overuse of "Ball of doom" fleets that can just overwhelm anything, and require harder decisions about where to send your forces, knowing that you also can't hold a lot in reserve.

But in exchange, they've added numerous ways to augment fleet power that doesn't hit the supply cap. Like using influence points to call in NPC factions on your side, or the TEC Enclave's ridiculous(ly fun) garrison system.

So it's actually kinda smart to divide up your forces between more than one fleet, and keep them mobile, so you don't have all your valuable supply caught in the wrong spot at the wrong time. And if you notice your opponent has a singular large fleet, you can either prepare to face it by setting up heavy defenses in bottleneck areas, or try to harass behind their lines and force them to keep said large fleet on the defensive. Calling in pirate raids on their planets basically demands they send a large force to counter it. Pirate raids are pretty damned expensive in influence, however, so timing is important.

So I think the Devs want players to try different tactics than "make the biggest fleet and dive at the enemy's homeworld."

I've been experimenting with setting up two fleets early on. "Hammer" fleet and "Anvil" fleet.

Anvil is made up of the high-durability ships and is intended to be the first one that encounters the enemy; it can stand there and slug it out long enough for Hammer fleet to arrive, which is the high-DPS, high-pierce fleet that can start whittling them down faster, HOPEFULLY while they're distracted with Anvil fleet.

If we get overwhelmed, I can order Hammer to retreat while Anvil covers for it. If we start winning, I can push Anvil forward to take more territory/cut off retreat while Hammer finishes the job.

It's been interesting to keep things managed this way. It feels like this more flexible approach is rewarded so I do think I've uncovered aspects of the game's design that the Devs intentionally added but didn't call attention to directly.

But it's still great fun to build up a fleet as large as you can make it, built around what you expect the enemy to field, then smash large fleets into each other and see what happens.

DEFINITELY learn how to get your ships to focus fire on high-value targets, though. They tend to do sub-optimal targeting on their own.

This has my interest. I heard about SoaSE from a friend, I saw #2 come through last year, but $50 is pretty steep for a game you say "still" about.

Playing Sins of a Solar Empire is "still," playing #2 is just regular playing a game that is 15 months old. I eat food that is past its printed date by more than 15 months. I still play FTL, and that's 13 years old.

you can play a quick 30-minute to 1-hour match where the later techs aren't even needed, or you can do a 6+ hour epic

This seems like a major flaw if I can't tell which one I'm getting into ahead of time. Although I suppose the answer is you sign up for a 6 hour epic, and sometimes it ends quick. A twelvefold difference in time is extreme.

If you do host, I'll try to play.

I mean, I played the first game in the series for over 10 years.

When I say that the Second has improved on the first in almost every conceivable way, I want to establish that it had a high bar to clear.

This seems like a major flaw if I can't tell which one I'm getting into ahead of time.

Generally you can tell from the game settings at the outset. The size of the map is the primary determinant of how quickly you'll come into contact with the opponent, and of whether there are even enough resources to build an economy or you just hop straight to fighting.

And you can set the game speed higher for ship movement, tech research, and resource accumulation to ensure things end quickly, or lower those speeds to stretch the game out and force a more strategic match.

The largest maps start to feel like playing Stellaris, but with just the space battles and economics and less of the fiddly empire management.

And there is a contingent of players who seem to not really want to play competitively at all but instead just set up the largest fleet battles possible then just sit back and watch them play out.

As mentioned, there's a steepish learning curve for the tech tree alone; knowing what to research and when is a critical factor, and the game will NOT hold your hand to show you which path is ideal.

So it is a bit much to ask of someone who isn't familiar with it to start playing with you right off rip.

Soccer management game Football Manager 26 released this week after two years of anticipation.

No installment was released last year following complications from the transition to Unity; the two releases before that (2023 and 2024) were announced as half-developed games because of parent studio Sports Interactive's purported all-in focus on this year's release (well, last year's, as it was then). FM26 was to be the ultimate Football Manager: enhanced match graphics, a tile-based UI no longer evocative of a spreadsheet, improved "newgen" (game-generated future player) faces, and... women's football.

Then came the leaks and reluctant announcements from the studio as the clock ticked down to what should have been the release date of FM25. Despite years of insistence that neither the engine transfer nor the addition of women's football would cause any complications, the game was in trouble. International management, a poorly developed (and therefore rarely touched) aspect of previous games, had been entirely removed rather than improved. In-game manager-player interactions (known as "shouts") had been entirely removed rather than improved. Most controversially, player weights had been removed for obtuse reasons pertaining to "women's body types" being "very different from men's" with their weight fluctuating "a lot more, often weekly." This, of course, somehow resulted in all players having their weight measurements removed, including male players.

Cue this week's release... a calamitous, bug-filled, poorly-optimized catastrophe. Sure, the bedrock is there in Unity for a game that will eventually surpass its predecessors, and patches over the last 48 hours have taken Steam reviews from "Overwhelmingly Negative" to "Mostly Negative", but it's simply unclear what the SI team was working on for the last five years of claimed development on this game. User mods slapped together in a week's time have outdone in-game graphics and processing times; the two most recent patches included hundreds of fundamental basic features and fixes that... somehow no one thought to include in the base game upon release? The whole saga has been a fascinating public showcase of mismanagement, procrastination, incompetence, and a bizarre hierarchy of priorities.

That last component is most interesting to me as an observer: who is benefitting from all these video games devoting time and money toward the implementation of women's sports? EA Sports, 2K Sports, and now Sports Interactive chose to limit development elsewhere so they could include slapdash, poorly-planned women's leagues. Are their marketing departments manufacturing idealistic projections of future female fanbases? Have they all been Pied-Piper'd (or Don-Corleone'd) by Sweet Baby Inc.?

EU5 has been released. I'm getting too old for grand strategy games, especially when they run as slow as this one. I played as Muscovy and got to 1390 in two evenings. That's three extra hours of intense gameplay after nine hours of my regular job each day. I need a better CPU at the very least.

Playing Dispatch. It's by the Critical Role people, has nice animation, music, writing, etc. The bulk of the time playing is selecting dialogue options and watching your character make it sound snappier than you ever could pull off. There are real-time events you can turn off, and otherwise two different games: resource management/hero leveling and a "hacking" puzzle game.

Unspoilerly Plot - It's a superhero setting. You are someone without powers but who has extensive experience around heroes and villains. You get a gig as a Superhero Dispatcher (think 911 dispatcher for subscribers to a corporate super-hero service.) You basically become the life coach for this universe's version of the Suicide Squad. Shenanigans ensue.

It's fun. My one complaint is that I wish there was an option to just do the Resource Management game without watching all the unskippable cut scenes. You can make different choices which makes replaying the game less tedious, but it's still tedious.

All the cutscene stuff looks good. You find the actual gameplay fun too?

Unspoilerly Plot - It's a superhero setting. You are someone without powers but who has extensive experience around heroes and villains.

Ever read Steelheart, or its sequels, by Sanderson? That's my canonical no-power-superhero story.

Featuring an all-star cast from every corner of entertainment
Aaron Paul (Breaking Bad, Westworld, Black Mirror)
Laura Bailey (The Legend of Vox Machina, The Last of Us II, Marvel's Spider-Man)
Erin Yvette (Hades II, The Wolf Among Us, Armored Core VI: Fires of Rubicon)
MoistCr1TiKaL (Charles White)
Jacksepticeye (Sonic Prime, River City Girls 1 & 2, Bendy and the Ink Machine)
Travis Willingham (The Legend of Vox Machina, Critical Role, Lego Avengers)
Alanah Pearce (V/H/S Beyond, Cyberpunk 2077, Gears 5)
Lance Cantstopolis (Karate, Dancing, Actor)
Joel Haver (Filmmaker, Actor, YouTuber)
THOT SQUAD (Musician: Pound Cake, Hoes Depressed)
Yung Gravy (Musician: Betty (Get Money), oops!)
Matthew Mercer (Critical Role, Overwatch, Resident Evil 6)
and Jeffrey Wright (American Fiction, The Batman, Casino Royale)

I really did not expect to see that name here. Not exactly what I'd expect for a VA.

Yes, Steelheart was pretty enjoyable, though Dispatch plays the Superhero things much straighter.

I've become really addicted to my third playthrough of Owlcat's Rogue Trader CRPG, staying up until 2am on work nights to play it. I'm doing this run as a dogmatic priest and am very much enjoying the RP. I just wish the game had a more creative difficulty setting. I play on unfair and don't use an officer (gives lots of extra turns), and combat still only lasts 1-2 rounds. That means most builds are just about pumping for 1-2 turns of play, knowing that any downsides from consumables/items/abilities are unlikely to affect the combat. The recent 1.5 update added some new talents for less common playstyles and I love them.

I haven't gotten the new Arbites DLC but I hear it's not very good, unlike the Void Shadows one, which is excellent.

I haven't gotten the new Arbites DLC but I hear it's not very good, unlike the Void Shadows one, which is excellent.

I thought that Lex Imperialis was also excellent. The story is well done, has some very fun moments, and Solomorne is a great party member. YMMV though.

As a huge BG3 fan and off and on Warhammer painter, I picked up Rogue Trader a few months back but haven't really gotten into it. I feel like a lot of these games take a couple of hours of being confused by systems before they really grab you, and I haven't pushed through that yet (to my great nerd shame, I also wandered away from my PC after 45 minutes of Clair Obscur). Seems like Rogue trader is worth the effort to learn, though? Should I play with the DLC enabled for my first play through, and do you have any other relevant tips?

I enjoy it, but yes, there is quite the learning curve to push past. I'm not even truly degenerate about builds yet, and I try to stay away from reading build guides as it sucks the fun out of it for me. The story is good, and it's fairly responsive to your choices. The romances feel great, and the core set of characters have good arcs and potential. You can push your followers towards Chaos/dogmatic/humanism in ways that make sense. Overall it's a very enjoyable game.

Void Shadows is a must; it integrates seamlessly with the core story. Technically, the core story already included side missions with references/hints prior to its release, which makes it feel like the DLC fleshed those out and made them immersive. The classes it adds are unfortunately very OP and very fun. 1.5 was a balance patch that mostly just hit them.

The gameplay tip if you are starting out is to abuse officer mechanics via Cassia: you get extra turns on your heavy hitters, allowing you to scale up the needed buffs to become monsters. Late game they generally start fights with the buffs so it's less relevant, but at low levels the power fantasy hasn't taken off yet.

Appreciate the advice! Other than this I'll try to go in blind, and we'll see if I succumb to the lure of build guides at some point.

The DLC integrates well into the main game, so I would enable it (and did for my first playthrough this past year). I'm not great at character building so I don't have a ton of tips, but one thing I found is that RT is very much a game of stacking buffs. 3% damage here, an extra attack there, and when you add them up the character becomes a killing machine. And speaking of extra attacks, look out for things that say they do not count against the one attack per turn limit. They are generally very powerful options to take.

I know that feeling. I’m reminded of the mod for D:OS2 which rebalanced combat to make health more relevant. They had to move heaven and earth to let it serve as a valid resource instead of a last resort.

But then, RPGs have always suffered from that tension. Real humans have a nasty habit of dying horribly when they take one bolter round to the face. Not easy to reconcile with slower, attrition-based gameplay.

My brain feels modded every time I read your handle. I keep seeing "nutsack" whenever I scroll past you. Do people ever call you that in multiplayer?

Oh, DOS2, I have fond memories, but yeah, very much the same feeling. I always hated how you pretty much had to spec your party towards stripping one armor type or bust. I remember using the hell out of mods to try and fix it and make combat more interesting, with some success, but it was just a lot. I haven't tried modding Rogue Trader yet.

RPGs have always suffered from that tension. Real humans have a nasty habit of dying horribly when they take one bolter round to the face.

Funny enough, this still happens with high-level parties in Rogue Trader, which is part of the combat problem (on unfair). If you aren't alpha striking the enemy, they are alpha striking you. I'm not sure what a satisfying system looks like. Thinking back, idk if I've run into an RPG system that does it well.

EDIT: On further thought, it's the power fantasy that probably causes the combat problem.

Progress is kinda slow, not because it's really difficult or boring but because there are so many items to inspect, notes and books to read, traps to spot and disarm, morals to ponder, battle decisions and build decisions to make.

Take your time; savor the journey.

Will do!

I almost beat Ys Chronicles 1. Got to the final boss, decided it was bullshit I'm too damned old for, and watched the ending on youtube before I started cussing at it.

I think I just should have played the game on easy instead of normal. It has some awkward difficulty spikes that I was able to overcome after a few tries, and for 99% of the game normal felt about right. But the final boss was just too much bullshit. He constantly drops meteors on your head that you have to dodge, and they explode into more bullets, turning the scenario into a bullet hell. Then whenever you hit him, he deletes the part of the boss arena you were standing on when you did it. After 10 or so hits, while dodging all around the screen and trying to chase him down as he's bouncing all over, you tend to find yourself boxed in.

Watching the final boss fight on easy, it took few enough hits for that to not be an existential problem. But you can't switch the difficulty, so if I want to beat the game on easy, I have to start over.

I'm just too damned old for that. Alas.

So I read The Master Mind of Mars, the 7th story in the Barsoom/John Carter series. It was aggressively mediocre. Each of these stories has pretty boring, one-dimensional characters that are either all good and honorable or all unrepentantly evil and consummate liars. The redeeming quality of Barsoom is usually at least one single good sci-fi hook or mystery, and some slightly above-average action. The hook for this one was a scientist who can swap brains between bodies. It was not his best hook, and the by-the-numbers "Go and save a girl" story was only complicated by the fact that the hero was trying to save her body to put her brain back in it. Would not really recommend it.

By 1911, around age 36, after seven years of low wages as a pencil-sharpener wholesaler, Burroughs began to write fiction. By this time, Emma and he had two children, Joan (1908–1972), and Hulbert (1909–1991).[15] During this period, he had copious spare time and began reading pulp-fiction magazines. In 1929, he recalled thinking that:

"[...] if people were paid for writing rot such as I read in some of those magazines, that I could write stories just as rotten. As a matter of fact, although I had never written a story, I knew absolutely that I could write stories just as entertaining and probably a whole lot more so than any I chanced to read in those magazines."[16]

The mediocrity of the books makes sense in context, I think.^^

SM Stirling wrote a peculiar homage to that idea of Mars that's both cringe ('hero' getting saved by the princess, yawn; strong womyn chars) and kinda awesome for the worldbuilding and the unimaginable amounts of low-key heresy (explaining why the womyn is so strong) for which he wasn't cancelled. Spoilers on the link. A good read, I think; Stirling can write adventure stories just fine.

Court opinion:

  • A particular business has been operating since year 1902, first as a fruit-and-dairy farm, later as only a dairy farm, and now as a timber farm. It consists of 1100 acres (1.7 mi2, 450 ha, 4.5 km2).

  • In year 2018, the business pays 112 k$ for a used Mercedes-Benz G-Class SUV. It claims a sales-tax exemption since the vehicle will be used in farming. But in year 2020 the department of taxation disagrees and imposes a penalty. (1) The exemption requires that the business be engaged in farming. However, despite claiming to be a timber farm, this business has never actually sold any timber, and indeed has reported no sales, income, or labor expenses since year 2011. (2) The exemption requires that the vehicle be used directly in farming. However, this vehicle is used merely to transport people and equipment through the forest, not for a farming activity like plowing. (3) The exemption requires that the vehicle be used primarily in farming. However, the business failed to keep mileage logs as proof of how the vehicle was used, and even involved the vehicle in a minor crash outside a post office outside the forest. In year 2024, the board of tax appeals affirms.

  • In year 2025, the state supreme court reverses. (1) The business has implemented a forest-management plan and has spent thousands of dollars on hiring contractors to remove invasive species that can damage the trees. Since trees take decades to mature into harvestable timber, this is enough to show that appellant is engaged in farming even in the absence of much activity at the moment. (2) "Property may qualify as being used in farming even though it is used to perform an intermediate step in the process of producing crops." "Just as a tractor provides the means for conveying a plow through a field where it can act upon the ground, the vehicle in this case provides the means for conveying chainsaws, marking tools, herbicides, and workers through [appellant's] forest." And the word "directly" is not in the statute. (Wikipedia describes the G-Class as a luxury vehicle, but the business in this case testified that it combined the off-road capability of a Jeep Wrangler with the cargo capacity of a Chevrolet Silverado, and both of those properties were needed in the forest.) (3) Mileage logs are not required by the statute. The business testified that farm-related use of the vehicle was around 95 percent, and that testimony was not rebutted by the department of taxation, so it stands.

Claugus! I never thought I'd see that name on here! When I was doing oil and gas law I spent months working on the Claugus 1 unit and did title reports for several parcels comprising the farm in question. Those were the days.

However, despite claiming to be a timber farm, this business has never actually sold any timber, and indeed has reported no sales, income, or labor expenses since year 2011.

Correction: it sold about $490K of timber during Murphy’s tenure as forester, which ran until 2008. Without that fact, the operation sounds a lot sketchier!

Mileage logs are not required by the statute. The business testified that farm-related use of the vehicle was around 95 percent, and that testimony was not rebutted by the department of taxation

Insane they didn't fight on this point

How? There are no mileage logs or (AFAICT) hard evidence of another sort. It would be pure he-said-she-said where one party is literally and completely ignorant.

Yeah, if there are no mileage logs for the thing you are claiming to be a taxable expense, then you get fucked and can't claim it as a taxable expense?

Putting aside the cartoonish scenario of "oh yeah, we totally need the G-wagon for our tree farm", my understanding of tax law has always been that you are guilty until proven innocent and the onus is on you to prove it.

If you don't do your homework, if you don't have logs, etc, you get fucked

I'm not saying I agree with this, I'm just saying I'm surprised the IRS didn't lean on this harder in court. They really need to have logs.

Like if you can just show up to court and say "but it's just he said she said so why even bother having logs your honor?" and then win, why does anyone keep logs for vehicles ever?