
Friday Fun Thread for March 31, 2023

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.



S A T O R 

A R E P O 

T E N E T 

O P E R A 

R O T A S

This is an interesting palindrome from the first century, known as the Sator Square. It eventually found its way to every European country, as well as North Africa. Every word in it is a common Latin word whether read left to right or right to left, with the exception of opera read backwards, which yields arepo, a word with no other attestation in Latin but which may be construed as a proper name. Uniquely, this palindrome reads not just left to right and right to left, but also top to bottom and bottom to top.
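The four-way property is easy to state precisely: the columns spell the same words as the rows, and rotating the square 180 degrees reproduces it exactly. A minimal sketch checking both, using the square from the post above:

```python
# The Sator Square, row by row.
rows = ["SATOR", "AREPO", "TENET", "OPERA", "ROTAS"]

# Reading the columns top-to-bottom spells the same five words as the rows.
cols = ["".join(r[i] for r in rows) for i in range(5)]
print(cols == rows)  # True

# Reading right-to-left, bottom-to-top (a 180-degree rotation)
# reproduces the square exactly.
rotated = [r[::-1] for r in reversed(rows)]
print(rotated == rows)  # True
```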

Read as a text, it means something like "the farmer Arepo masters the wheel with effort", or "Arepo the farmer holds to the work wheel". Speaking of lines, it may also be read boustrophedon, in alternating directions, the way a farmer plows his field, which is fitting and common in old Roman inscriptions; this rendering gives something like "the sower sustains the works, the sower keeps the works", though it fails as a meaning-preserving palindrome when read backwards. This is akin to "as you reap, so you sow", which carried long-standing mythical and spiritual meaning in the Eleusinian Mysteries and other allegorical Roman stories and rituals, and later in Christianity, which synthesized many of these allegories.

Historians have debated its meaning and purpose, but historians are also dumb, and frequently miss the forest by analyzing the trees. The fact that this is a four-way palindrome, that it has a normal and mundane meaning in Latin, and that it is applicable to the agricultural majority while carrying a vaguely moral message, is all the meaning necessary to explain its wide use and transmission. Asking Arepo to also hold deeper meaning is asking too much, as he's already holding the wheel.

An interesting discovery was made in the 20th century: a rearrangement of the square's letters yields an intersecting "cross" of the phrase Pater Noster, joined at the central N, with two alphas and two omegas left over. This is certainly just an accidental, fun, additional Christological reading; there's no evidence that our ancestors in previous centuries knew of or conceived it.

Because historians are dumb, they cannot accept that the four-way palindrome was popularized and used by virtue of being a really good four-way palindrome. They are looking for an additional reason for no reason. Humans love patterns. Reading this palindrome square gives my mind a little intellectual orgasm every time I look at it; the surprise at its quad-directionality gives me a bit of pleasure every time I see it, and so it did for the ancients. No additional reason is required, and in fact an additional meaning is statistically improbable, bordering on impossible, for a cute symbol that is already packed with meaning and pattern. I'm reminded of listening to Bach: music theorists see all the wacky patterns he created in his works as kinds of musical puzzles and ask, "what greater numerological meaning does it have?" It has none, because the puzzle is the meaning.

Oh, this takes me back. My great-grandma was quite religious, but not very discerning in her tastes, so she subscribed to "Science and Religion", a magazine that was full of esoteric stuff like the Sator square. I remember that I loved leafing through the issues every summer, even though that was more than a quarter century ago.

I wonder why no one has come up with a better four-way palindrome since then. Can someone ask GPT-4 to make one?

Also, the website of the National Puzzlers' League is delightfully Web 1.0.

GPT-4 gave me this after a few tries:

“Creating a 5x5 word square with proper meanings in English is quite challenging, and there aren't many known examples. However, I can provide you with a 5x5 Latin word square, similar to the Sator Square:

R E G I S

E V E N I

G I V E R

I N E R T

S I T E R

In this square, the words are:

REGIS (of a king)

EVENI (to come forth)

GIVER (the giver)

INERT (inactive)

SITER (they may be)

These words have proper meanings in Latin. Keep in mind that, like the Sator Square, the meanings may not form a coherent sentence, but the individual words have meanings.”

Pretty awesome that it parsed my meaning, said hey that’s hard, but here’s something in another language. This stuff is damn impressive.

The actual result is not that impressive, though. "Giver" is plain Germanic English; "inert" is of Latin origin but anglicised (in Latin it would be iners, with the -t- in the stem appearing only in some inflected forms); and "siter" is not any valid Latin form to the best of my knowledge ("they may be" is sint), though it's interesting that the model leveraged some correct knowledge that si(t)- occurs in subjunctive forms of "to be" in Latin.

Ahh good to know. Yeah still a long way to go with these models. I’m surprised at all the hype given their frequent hallucinations.

For what it's worth, I think the hype is completely justified, and the concern over hallucinations is driven by a combination of the usual motivated reasoning trying to defend human uniqueness, and not understanding what problem has actually been solved. The LLM's unfiltered train of thought should not be compared to a human's well-considered sayings/writings, but to a human's unfiltered train of thought, and I'd be extremely surprised if anyone out there has an inner monologue that is as coherent and intelligent as GPT-4's output. The problem of letting an LLM "think before opening its mouth" has not quite been (publicly) solved yet; but, crucially, there is no obvious reason to believe that the remaining gap (from being able to have a coherent, insightful inner monologue, to shaping the monologue to consider when and what to speak and thereby achieve correctness) is at all difficult to close. We have moderate success with teaching this to human bullshit artists who say the first thing that comes to mind and then make up rationalisations for it after the fact, too.

I like the way you put that, and it’s not something I’ve considered before. What sort of timeline do you have for AGI?

Depends on the definition you use. "Better than the (pointwise) median human at almost every fair test of intellectual ability" (that is, you can't ask it subtle, rarely-discussed details of growing up as a meatbag and declare victory when it can't answer), I'd put at approximately 5 years, with a 90% confidence interval of 0 (that is, someone has already built it) to 15, conditional on no significant change to the conditions of development (say, a full-scale CN-US war, or a somewhat successfully prosecuted moratorium on research). "(Pointwise) better than every living human at every such test" I'd put at approximately 20 years, with a 90% confidence interval of 5 to 50, again conditioned on the same. The caveat for the latter is that I think this conditioning actually cuts out a lot of the probability space; I'd put maybe 75% on something I would count as a significant change happening before we get AGI (second definition).

Interesting. Most people seem to think that once we get AGI we'll speed to superintelligence very quickly; why the 15-year gap? You don't believe in recursive self-improvement?

Yes, I'm really glad to see someone else point this out! One thing that's interesting about LLMs is that there's literally no way for them to pause and consider anything - they do the same calculations and output words at exactly the same rate no matter how easy or hard a question you ask them. If a human is shown a math puzzle on a flashcard and is forced to respond immediately, the human generally wouldn't do well either. I do like the idea of training these models to have some "private" thoughts (which the devs would still be able to see, but which wouldn't count as output) so they can mull over a tough problem, just like how my inner monologue works.

You can kinda do this in ChatGPT: ask a question as a chain-of-thought prompt, then follow up asking it to extract the answer from the above.
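The two-step pattern above can be sketched as code. This is a minimal illustration, not a real client: `ask` is a hypothetical wrapper around whatever chat API you use, stubbed out here so only the shape of the conversation matters.

```python
# Hypothetical chat wrapper; a real API call would go where the stub is.
def ask(history, prompt):
    history.append({"role": "user", "content": prompt})
    reply = f"[model reply to: {prompt}]"  # stubbed response
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
# Step 1: a chain-of-thought prompt, inviting unfiltered reasoning.
ask(history, "If I have 3 apples, eat one, then buy 4 more, "
             "how many do I have? Think step by step.")
# Step 2: a follow-up that extracts only the answer from the reasoning above.
final = ask(history, "From your reasoning above, state only the final answer.")
print(len(history))  # 4 messages: two prompts, two replies
```

The point is that the model gets to "think out loud" in step 1, and step 2 reads that output back as context, which is about as close to a private scratchpad as the chat interface allows.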

Experimenting with giving ChatGPT-4 a more structured memory is easy enough to do that individuals are trying it out: https://youtube.com/watch?v=YXQ6OKSvzfc I find his estimate of AGI-in-18-months a little optimistic, but I can't completely rule out the possibility that the "hard part" of AGI is already present in these LLMs and the remainder is just giving them a few more cognitive tools. We're already so far down the rabbit hole.

This reminds me of "blank palindromes", a gag one Russian writer came up with: blank verse looks like verse but doesn't rhyme, so blank palindromes look like palindromes but don't read the same when reversed. This is a blank word square.
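For what it's worth, the same two symmetry checks that the Sator Square passes both fail on GPT-4's square from the earlier comment, which is exactly what makes it "blank":

```python
# GPT-4's proposed square, row by row.
rows = ["REGIS", "EVENI", "GIVER", "INERT", "SITER"]

# The columns do not spell the row words (e.g. column 2 reads "EVINI").
cols = ["".join(r[i] for r in rows) for i in range(5)]
print(cols == rows)  # False

# Nor does it read the same rotated 180 degrees
# (the bottom row reversed gives "RETIS", not "REGIS").
rotated = [r[::-1] for r in reversed(rows)]
print(rotated == rows)  # False
```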

Spoonerisms are really common in Finnish, so people have also long played around with what one might term "blank Spoonerisms" along the same lines.

Yep, I called it out, and it said:

“Larger palindromic word squares, such as 4x4 or 5x5, with proper meanings in English or Latin are very rare and difficult to construct due to the constraints of the palindrome property and the limited availability of suitable words. The Sator Square remains a unique and famous example of a 5x5 palindromic word square.”