
Showing 25 of 191 results for domain:alethios.substack.com

North Korea also has nukes, and I imagine an Israel without American support would, in the best case scenario, look a lot like North Korea.

Except I doubt the upper echelons of Israeli society would tolerate living in North Korea, so it would probably simply cease to exist, like apartheid South Africa, another country whose nukes were of little use.

First off, does Hamas really care what happens to Assad or Iran? They take Iranian weapons, but they also backed the Syrian rebels against Assad; they aren't exactly a full-on proxy of Iran like Hezbollah is. If anything, the fact that Iran was ultimately dragged into the fight despite desperately trying to stay out of it directly is a Hamas W.

Second, the damage to the Axis of Resistance (AoR) seems pretty overblown:

  • Hezbollah is in the same position it was in 2006, with a nominally one-sided ceasefire and a hostile Lebanese government forcing them to lay low temporarily, yet they still maintain total control over southern Lebanon
  • The Houthis are stronger and more influential than ever; they successfully shut down the port of Eilat and collect hundreds of millions if not billions from holding up passing ships
  • Iran survived Israel's best shot at regime change and responded with enough missiles to break Israel's missile shield and deplete its interception capacity down to nearly 50%

Syria is a real loss, but Assad was always the weakest link, and his fall had more to do with his own incompetence than with Israeli brilliance; otherwise they would have rolled southern Lebanon the way al-Jolani rolled Syria.

you can buy them much cheaper than this (cw: anti-endorsed).

Guarantee those specs are totally fake. You're just buying the guts of an absolute dogshit Chinese dash camera crammed into a shell vaguely in the shape of glasses.

The .win family kinda tried that, branching out from The Donald to some other rightish culture war subreddit bunkers, but it's difficult to call the results a success.

I actually really like the idea of camera glasses that are always on, so I can capture cool moments that I see, because too often by the time I fish out my phone it's already over. I actually got the Snapchat Spectacles (which were almost exactly the same concept) back in the day, but they were absolutely garbage to use.

The problem right now actually isn't cultural, but tech. Think of the battery life a GoPro gets: the latest models get 2-3 hours recording at 1080p, and the unit is quite bulky. There's also the issue of overheating, which is a recurring complaint with GoPros. Now try to cram all that into a tiny wearable that you plan on wearing for all waking hours.
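
To put rough numbers on that (a back-of-the-envelope sketch; the GoPro battery capacity and waking-hours figures here are my own assumptions, not anything from the linked specs):

    # Back-of-the-envelope: can always-on camera glasses last a waking day?
    # Assumed figures: a recent GoPro battery is ~1720 mAh at ~3.85 V nominal,
    # and lasts ~2.5 h recording 1080p.
    gopro_battery_wh = 1.72 * 3.85                   # ~6.6 Wh
    gopro_runtime_h = 2.5
    avg_draw_w = gopro_battery_wh / gopro_runtime_h  # ~2.6 W while recording

    waking_hours = 16
    needed_wh = avg_draw_w * waking_hours            # ~42 Wh for an all-day recorder

    # For scale, typical smart glasses carry well under 2 Wh of battery;
    # 42 Wh is laptop territory.
    print(f"average draw: {avg_draw_w:.1f} W; energy for {waking_hours} h: {needed_wh:.0f} Wh")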

With the tech where it is, it's just not possible to make always-on camera glasses that people actually want to use.

Sorry for not giving this earlier, but for opaque targets covering a large portion of the target zone, after throwing a Kalman filter in, I've typically been getting within a half-centimeter across pretty much the whole range (2 cm to 4 m). Reflective or transparent targets can be less good, with polycarbonate either being much noisier or reading consistently a couple of centimeters too far.

The big problem is where a zone only has small objects very near -- sometimes this will 'just' be off by a centimeter or two more (it seems most common in the center?), and sometimes it'll be way off, by meters. That's been annoying for the display 'logic', since someone waving their hand at the virtual display is kinda a goal.

Dunno if it would be an issue for a more conventional rangefinder use, though the limited max range and wide field-of-view might exclude it regardless.
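
For anyone curious what "throwing a Kalman filter in" amounts to here, the scalar case is only a few lines. A minimal sketch, with made-up noise constants rather than my actual tuning:

    # Minimal 1-D Kalman filter for smoothing noisy rangefinder readings.
    # Assumes the target range is (nearly) static between readings.
    def kalman_1d(measurements, q=1e-4, r=2.5e-5):
        # q: process noise (how much the true range may drift per step)
        # r: measurement noise variance (sensor jitter), in m^2
        x, p = measurements[0], 1.0      # initial estimate and its variance
        smoothed = []
        for z in measurements:
            p = p + q                    # predict: static model, uncertainty grows
            k = p / (p + r)              # Kalman gain
            x = x + k * (z - x)          # update toward the new reading
            p = (1 - k) * p
            smoothed.append(x)
        return smoothed

    print(kalman_1d([1.02, 0.98, 1.01, 0.97, 1.03, 1.00, 0.99]))  # meters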

There's a reason that I specifically excluded visual-light cameras from my display glasses project. Camera glasses have been around for a while, and you can buy them much cheaper than this (cw: anti-endorsed). We mostly just kitbashed the 'must play shutter sound' rule onto cell phone cameras and pretended it was okay, and maybe Google could have gotten away with normalizing this sorta thing culturally back in 2012 with the Glass, but today?

Forget the metaphors about concealed carry; in the modern world, this is more like having a gun pointed at whoever you're looking at, and everybody with two braincells to rub together knows it. To a degree this is a pity -- you can imagine legitimate use cases, like exomemory or live translation of text or lipreading for captioning or yada yada, and it's bad that all of those options are getting buried because of the one-in-a-thousand asshole.

The bigger question's going to be whether, even if this never becomes socially acceptable, it'll be possible to meaningfully restrict. You can put a norm out to punch anyone who wears these things, but it's only going to get harder and harder to spot them as the tech gets better. The parts are highly specialized, but it's a commodity item in a field whose major manufacturers can't prevent ghost shifts from touching their much-more-central IP. The sales are on Amazon, and while I can imagine them being restricted more than, say, the cables that will light your house on fire, that just ends up with them on eBay. Punishing people who've used them poorly, or gotten caught, has a lot more poetry to it... and also sates no one's concerns.

As for why some prominent AI scientists believe while others do not? I think some people definitely get wrapped up in visions and fantasies of grandeur, which is advantageous when you need to sell an idea to a VC or someone with money, convince someone to work for you, etc.

Out of curiosity: can you psychologize your own, and OP's, skepticism about LLMs in the same manner? Particularly the inane insistence that people get "fooled" by LLM outputs which merely "look like" useful documents and code, that the mastery of language is "apparent", that it's "anthropomorphism" to attribute intelligence to a system solving open-ended tasks, because something something a calculator can take cube roots. Starting from the prior that you're being delusional and engaging in motivated reasoning, what would your motivations for that delusion be?

I don't think anything in their comment above implied that they were talking about linear or simpler statistics

Why not? If we take multi-layer perceptrons seriously, then what is the value of saying that all they learn is mere "just statistical co-occurrence"? It's only co-occurrence in the sense that arbitrary nonlinear relationships between token frequencies may be broken down into such, but I don't see an argument against the power of this representation. I do genuinely believe that people who attack ML as statistics are ignorant of higher-order statistics, and for basically tribal reasons. I don't intend to take it charitably until they clarify why they use that word with clearly dismissive connotations, because their reasoning around «directionality» or whatever seems to suggest very vague understanding of how LLMs work.
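
To make the point concrete, here's a toy illustration (mine, not anyone's argument upthread): XOR has exactly zero pairwise correlation between each input and the output, so "co-occurrence statistics" in the linear sense see nothing, yet a tiny MLP trained by plain backprop captures the higher-order relationship without trouble.

    import numpy as np

    # XOR: pairwise (first-order) statistics are flat, but the joint,
    # higher-order relationship is learnable by a small nonlinear model.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0.], [1.], [1.], [0.]])

    print(np.corrcoef(X[:, 0], y[:, 0])[0, 1])   # 0.0 -- no linear signal
    print(np.corrcoef(X[:, 1], y[:, 0])[0, 1])   # 0.0

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
    W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    lr = 0.5
    for _ in range(10000):                       # plain backprop on BCE loss
        h = np.tanh(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        d_out = out - y                          # dLoss/dlogits for sigmoid+BCE
        d_h = (d_out @ W2.T) * (1 - h ** 2)      # backprop through tanh
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(0)

    print(sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).round(2))  # ~[0, 1, 1, 0]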

There's an argument to be made that Hebbian learning in neurons and the brain as a whole isn't similar enough to the mechanisms powering LLMs for the same paradigms to apply

What is that argument then? Actually, scratch that, yes mechanisms are obviously different, but what is the argument that biological ones are better for the implicit purpose of general intelligence? For all I know, backpropagation-based systems are categorically superior learners; Hinton, who started from the desire to understand brains and assumed that backprop is a mere crutch to approximate Hebbian learning, became an AI doomer around the same time he arrived at this suspicion. Now I don't know if Hinton is an authority in OP's book…

of course I could pick out a bunch of facts about it, but one that is striking is that LLMs use about the same amount of energy for one inference as the brain does in an entire day

I don't know how you define "one inference" or do this calculation. So let's take Step-3, since it's the newest model, presumably close to the frontier in scale and capacity, and their partial tech report is very focused on inference efficiency; in a year or two, models of that scale will be on par with today's GPT-5. We can assume that Google has better numbers internally (certainly Google can achieve better numbers if they care). They report 4000 TGS (tokens/GPU/second) on a small deployment cluster of H800s. That's 250 GPU-seconds per million tokens on a 350W TDP GPU, or about 24 Wh. OK, presumably the human brain is "efficient" at 20 W. (There's prefill too, but that only makes the situation worse for humans, because GPUs can parallelize prefill, whereas humans read linearly.) Can a human produce 1 million tokens (≈700K words) of sensible output in 72 minutes? Even if we run some multi-agent system that does multiple drafts and heavy reasoning chains of thought (which is honestly a fair condition, since these are numbers for high batch size)? Just how much handicap do we have to give AI to even the playing field? And H800s were already handicapped due to export controls. Blackwells are 3-4x better. In a year, the West gets Vera Rubins and better TPUs, with OOM better numbers again. In months, DeepSeek shows V4 with 3-4x better efficiency again… Token costs are dropping like a stone. Google has served 1 quadrillion tokens over the last month. How much would that cost in human labor?
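
Spelling the arithmetic out (TGS and TDP from the Step-3 report as cited; the 20 W brain figure is the standard textbook number):

    # Energy per million output tokens for Step-3 on H800s, vs a human brain.
    tokens = 1_000_000
    tgs = 4000                     # reported tokens per GPU per second
    gpu_tdp_w = 350                # H800 TDP

    gpu_seconds = tokens / tgs                    # 250 GPU-seconds
    energy_wh = gpu_seconds * gpu_tdp_w / 3600    # ~24.3 Wh

    brain_w = 20                   # canonical human-brain power draw
    brain_minutes = energy_wh / brain_w * 60      # ~73 minutes
    print(f"{energy_wh:.1f} Wh per 1M tokens ≈ {brain_minutes:.0f} min of brain time")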

We could account for full node or datacenter power draw (a 1.5-2x difference), but that'd be unfair since we're comparing to brains, and making it fair would be devastating to humans (reminder that humans have bodies which, ideally, also need temperature-controlled environments and fancy logistics, so an individual employed human consumes something like 1 kW at minimum, even at standby, e.g. while chatting by the water cooler).

And remember, GPUs/TPUs are computation devices agnostic to specific network values, they have to shuffle weights, cache and activations across the memory hierarchy. The brain is an ultimate compute-in-memory system. If we were to burn an LLM into silicon, with kernels optimized for this case (it'd admittedly require major redesigns of, well, everything)… it'd probably drop the cost another 1-2 OOMs. I don't think much about it because it's not economically incentivized at this stage given the costs and processes of FPGAs but it's worth keeping in mind.

it seems pretty obvious that the approach is probably weaker than the human one

I don't see how that is obvious at all. Yes, an individual neuron is very complex, such that a microcolumn is comparable to a decently large FFN (impossible to compare directly), and it's very efficient. But ultimately there are only so many neurons in a brain, and they cannot all work in parallel; and the spiking nature of biological networks, even though energetically efficient, is forced by slow signal propagation and an inability to maintain state. As I've shown above, LLMs scale very well due to the parallelism afforded by GPUs; efficiency increases (to a point) with deployment cluster size. Modern LLMs have something like 1:30 sparsity (Kimi K2); with higher memory bandwidth this may be pushed to 1:100 or beyond. There are different ways to make systems sparse, and even if the neuromorphic way is better, it doesn't allow the next steps – disaggregating operations to maximize utilization (similar problems arise with some cleverer Transformer variants, by the way; they fail to scale to high batch sizes). It seems to me that the technocapital has, unsurprisingly, arrived at an overall better solution.
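
For a sense of what that sparsity buys (using Kimi K2's roughly 1T total / 32B activated parameters; the FLOPs-per-token rule of thumb of 2× active parameters is the standard dense-transformer approximation):

    # MoE sparsity: per-token compute scales with *active* params, not total.
    total_params = 1.0e12          # Kimi K2: ~1T parameters total
    active_params = 32e9           # ~32B activated per token

    flops_moe = 2 * active_params      # ~64 GFLOPs per token (forward pass)
    flops_dense = 2 * total_params     # ~2 TFLOPs if the model were dense

    print(f"sparsity ≈ 1:{total_params / active_params:.0f}")
    print(f"per-token compute saving ≈ {flops_dense / flops_moe:.0f}x")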

There's the lack of memory, which I talked about a little bit in my comment, LLM's lack of self-directed learning

Self-directed learning is a spook; it's a matter of training objective and environment design, not really worth worrying about. Just 1-2 iterations of AR-Zero can solve that even within the LLM paradigm.

Aesthetically I don't like the fact that LLMs are static. Cheap hacky solutions abound; e.g. I like the idea of cartridges of trainable cache. Going beyond that, we may improve on continual training and unlearning; over the last 2 years we've seen major labs perfect pushing the same base model through 3-5 significant revisions, and it largely works: the models do acquire new knowledge and skills and aren't too confused about the timeline. There are multiple papers promising a better way, not yet implemented. It's not a complete answer, of course. Economics gets in the way of abandoning the pretrain-finetune paradigm: by the time you start having trouble with model utility, it's time to shift to another architecture. I do hope we get real continual, lifelong learning. Economics aside, this will be legitimately hard; even though pretraining with batch = 1 works, there is a real problem of loss of plasticity. Sutton of all people is working on this.

But I admit that my aesthetic sense is not very important. LLMs aren't humans. They don't need to be humans. Human form of learning and intelligence is intrinsically tied to what we are, solitary mobile embodied agents scavenging for scarce calories over decades. LLMs are crystallized data systems with lifecycle measured in months, optimized for one-to-many inference on electronics. I don't believe these massive differences are very relevant to defining and quantifying intelligence in the abstract.

Google Glass was tried like a decade ago. This is just that, incognito, with fewer features, right?

To me it seems kinda lame, and POV video sucks.

Hopefully this exchange isn't too tedious to you. I have obviously not gotten as deeply into continental philosophy as you have, so I hope this doesn't feel like explaining the concept of addition to an infant.

Oh, not sure why you removed the Paul Klee section, I was going to comment on it...

I removed it precisely for the reason you stated: he is an artist and not a philosopher. I quoted him initially because IIRC Adorno was influenced by Klee's art and writings, but I later decided it would be better to quote Adorno himself rather than doing so indirectly through the writings he was influenced by.

Almost all the specific books I've recommended throughout this thread are approachable and can be read like any other book, and they do make coherent sense, such that you could explain them to analytic philosophers without too much trouble.

I have been working my way through The Aesthetic Dimension and already have quibbles with the approach just a short way in. Perhaps this is a mistake and perhaps I should read more before I comment, but:

On page 2 Marcuse enumerates the following tenets of Marxist aesthetics:

  • Art is transformed along with the social structure and its means of production.
  • One's social class affects the art that gets produced, and the only true art is that made by an ascending class; the art made by a descending class is "decadent".
  • Realism corresponds most accurately to "the social relationships" and is the correct art form.
  • Etc.

Marcuse's critique is that Marxism prioritises materialism and material reality too much over the subjective experiences of individuals, and that even when it tries to address the latter its focus is on the collective and not the individual. The Marxist opinion of subjectivity as a tool of the bourgeoisie, in his opinion, is incorrect and in fact "with the affirmation of the inwardness of subjectivity, the individual steps out of the network of exchange relationships and exchange values, withdraws from the reality of bourgeois society, and enters another dimension of existence. Indeed, this escape from reality led to an experience which could (and did) become a powerful force in invalidating the actually prevailing bourgeois values, namely, by shifting the locus of the individual's realization from the domain of the performance principle and the profit motive to that of the inner resources of the human being: passion, imagination, conscience."

This claim doesn't feel meaningful to me. Subjectivity could and did become a powerful force in challenging the bourgeoisie? It would be nice to get some examples of this, but I doubt he has any concrete ones. The topic of whether focusing on one's inner world invalidates or bolsters bourgeois values is not really amenable to systematic inquiry. But I would say a person's "inner experience" is very complex, kind of nonsensical, and pretty much orthogonal to any political or social system you could put in place; as such it will never map onto anything that could exist in reality (and that includes Marxism). That's not specific to aspects of capitalism like the performance principle and the profit motive. The bureaucratic machinations of a central planner are just as alien to it as decentralised market-based allocation and the incentives it creates.

I guess I can somewhat legibly interpret it if I assume the truth of the critical theorist belief that their ideas are uniquely liberating, but I think that their prescriptions for society are just as artificial as anything that came before. Human emotional experience is so disordered and contradictory that expecting it to align with any model of social organisation is a mistake. People are a hodgepodge of instincts and reflexes acquired across hundreds of millions of years of geological time, some of which are laughably obsolete; it won't agree with any principle at all. Hell, it's not even compatible with granting people liberation, whatever that means. Even if you wave a magic wand and give people full freedom, the expression of one person's instincts will often inherently conflict with the wishes of another; in addition, humans get terrified when presented with unbounded choice and make decisions that don't maximise utility for themselves. The full realisation of human desires is an impossible task. It will always be stultified in some way or another.

This is, to me, a good example of what I said before: "You read it, you feel like it is true or profound in some deep unarticulable way, and follow the author down the garden path for that reason alone." I can't really reason my way into the conclusion that Marcuse has reached here, and in fact the more I think about that passage the less comprehensible I find it. The Lacan passage seems similar, but I have not read it in full context yet, so I won't judge. But the reason analytic philosophy tends to be restricted in its scope compared to continental philosophy is that there are rules governing what can legibly be said within that philosophical framework.

I suppose I want and need a lot more substantiation and rigour in my academic work than what many of these writers are capable of offering. If you look at my post history, that becomes very clear; I think I demand it more than even your average Mottizen does.

The sheer number of surgical techniques, mechanical/robot assists, and drug developments alone. Not to mention computerization and millions of other improvements neither of us even knows about.

I worked in medical device development early in my career. It's not that these aren't very impressive technological innovations; it's that people were perfectly capable of living to their 80s in 1776, and the reasons so few did had largely been addressed by the '50s. Lots of the development has been in surgeries. I'd much rather have surgery now than in 1955.

I'll freely admit I'm a bit biased: my work was in life-saving pediatric implants, which is not nearly the size of the "relieve Grandpa Joe's pain a little bit" part of the industry.

What do folks on the Motte think of the "Waves" glasses? Here is the link, quoting the short tweet:

introducing Waves, camera glasses for creators.

record in stealth. livestream all day.

pre-order now.

The idea seems to be another in the long string of VC-funded tech companies that seek to make their name by being controversial in the beginning and slowly becoming socially accepted. It's extremely frustrating that this profit model seems to work, but at this point we can't deny that it does (some of the time).

On the one hand I'm deeply incensed at the thought of other people recording me without my consent. On the other hand... we already waived these rights two decades ago with the Patriot Act, effectively allowing the government and major corporations to spy on us all the time with no repercussions. I personally find it hard to be sympathetic to outrage against these glasses when our nation's legal system has completely bankrupted any notion of a personal right not to be filmed anyway.

I'm not sure which side of the culture war this benefits either. As it stands, it seems a pretty predictable evolution of trends we've been seeing in privacy and technology for a while in the West.

I have no idea why you are nitpicking me so hard over the fact I didn't say "when safe". Yes, of course it's when safe. But the same is true for a red light too. You aren't expected to stop the instant a light turns red, because that would be impossible and unsafe in some situations. Yet I don't think you would nitpick someone for saying "it's illegal to enter the intersection when the light is red", because everyone understands the "if it's safe to stop" implication. So don't nitpick me for using similar language about yellow lights, it's a weirdly isolated demand for rigor.

Kind of a tangential question but

What’s the source on high IQ people becoming less attractive looking? I’ve only ever read that it’s positive for men and neutral for women

hahaha welcome to imposter syndrome sucker!

but no really, overindexing on all of the ways you think you suck and how crazy your brain is is generally a sign of intelligence and competence

not that you'll believe any of this

What? It’s very obviously not illegal to enter an intersection with a yellow light. The light changes from green to yellow with no warning. There are situations where it is physically impossible to brake that fast. I assume you mean “when safe?” But that gives a lot of cover to the defendant.

I’m personally more familiar with the implicit law, which is that yellows are timed to stay on long enough for drivers going a reasonable speed to come to a complete stop, braking comfortably, before the light goes red. So when the light changes, you either don’t have enough time to brake comfortably and smoothly pass through the yellow before it turns, or you have enough time to stop and do so, or you break the law by either running the red or jamming on the gas to get through - which is, of course, both speeding and reckless driving.

Reading the opinion, Russell was driving above the 55mph speed limit. I’ll allow that his speed was more like 70 than 60. He was apparently 200ish feet from the intersection when he noticed the yellow. If so, that’s on the order of 2 seconds in which to come to a complete stop, unless I’m doing my math wrong. 55 gives you another half second. That’s a slam-on-the-brakes situation, not a reasonable halt. At that point, it seems like either Russell was derelict in not watching for the light until too late, or else he could not have stopped safely even at the posted limit when the light turned and was totally within his rights to proceed. I’m surprised this doesn’t show up in the opinion. Were they expecting him to burn rubber because it flicked yellow?
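
Checking that math with constant-deceleration kinematics (the 200 ft and the speeds are from the opinion as summarized; the braking thresholds are generic rules of thumb, not anything from the case):

    # Time available to the light, and deceleration needed to stop in 200 ft.
    def braking(mph, dist_ft=200.0):
        v = mph * 5280 / 3600                # mph -> ft/s
        t_to_light = dist_ft / v             # seconds, at constant speed
        a_stop = v ** 2 / (2 * dist_ft)      # ft/s^2 to stop exactly at the line
        return t_to_light, a_stop / 32.2     # report deceleration in g

    for mph in (55, 70):
        t, g = braking(mph)
        print(f"{mph} mph: {t:.1f} s to the light, {g:.2f} g to stop in time")
    # 70 mph: ~1.9 s and ~0.82 g (emergency braking; ~0.3 g is "comfortable")
    # 55 mph: ~2.5 s and ~0.51 g (still a hard stop)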

@ToaKraka’s summary is outright incorrect in one place, in fact, and the truth makes the situation even more redeeming for Russell. The summary says that Jasmine was stopping at the red. The opinion says that SHE WAS ENTERING THE INTERSECTION BECAUSE SHE DID NOT BELIEVE SHE COULD STOP SAFELY and, at the time of the crash, was ABOUT TO ENTER THE INTERSECTION (presumably on a yellow at the time). So why is Russell more at fault for entering an intersection which the plaintiff was herself entering even later? Reading the opinion, they keep talking about the plaintiff being a young mother and go into great detail on the injuries. I suspect that’s the reason, and perhaps also that they didn’t expect the ex-con who actually caused the crash to be able to pay a cent.

If I were on this jury I’d probably hang it. This looks a hell of a lot like a miscarriage of justice to me. The appeals court I judge less strongly; they’re right to defer heavily to the jury. But putting 60% on Russell seems crazy. Splitting it in reverse would make more sense. But given that the appellate opinion states the decision hinged in part on Russell not testifying to mitigating factors, like whether he considered whether he could stop safely, I wonder whether this whole mess is just the product of a lawyer gap between the parties.

EDIT: spent a minute looking at car crash videos to try and gauge how fast Russell might have been traveling in order to absolutely crush the woman’s car. Assuming he was traveling at 70 and lost half of his momentum hitting the truck, he and she would have collided at a combined speed of 80mph. 55mph crashes with a stationary object are enough to start compromising the cabin; 80 is, as far as I can tell, kill-you-dead territory. Bringing this down to 70 would probably still be enough. So I’m not sure the plaintiff’s assertion that he must have been driving unsafely holds water. But of course that’s right back to the question of whether the lawyers brought proper receipts on the basic math here. Messy stuff; honestly makes highway driving sound a lot less appealing.

I mean, obviously there is a major qualitative difference between an LLM's capabilities and Google's. I don't even think Google counts as knowledge work, because it's just a fancy directory with a math formula to rank pages. The alternative was basically a directory or keyword search, neither of which requires knowledge work either to assemble or to run. And critically, Google is free to use, so it's an exceptionally poor example to choose.

Just the ability of an LLM to summarize documents that you feed it is already enough, in my mind, for it to count as a kind of knowledge work. I hinted at that phrasing for a reason: if you check Wikipedia's entry for "knowledge workers", you'll see that it's more or less people who think for a living, and reducing the job to simply "looking the right stuff up" is significantly underselling it. A lawyer, for example, is not merely an information-processing algorithm, even if her job may primarily be finding the relevant court case precedents and then applying them in systematic fashion to partially boilerplate motions and filings. It takes a degree of contextual understanding along with a degree of judgement to produce the proper output, and those elements are missing from Google entirely (at least in its traditional and early iterations; the precise algorithms are highly proprietary, but I don't think this changes the core categorization).

Muslims circumcise themselves as well. Muslim Palestinians are just as circumcised as Jewish Israelis, so that doesn't function as a tribal distinction any more.

We're already in a world full of violent, sadistic grapists. How much sillier can it get?

The VG movie (actually I lied, there are two, but one is more of a side story) is more of an epilogue of sorts, so I'd strongly recommend watching or trying the series first instead. IIRC the show starts to truly get going by the third episode (and if you're the impatient type you could honestly start there and be OK), but the most memorable and highest-rated episodes are a little back-loaded in the season.

I've tried a few episodes of Dungeon Meshi but it didn't really hook me, so I can't speak to the praise there.

However, with Frieren I'd say two episodes is the minimum to get a proper feel (that concludes a bit of a mini-arc), although the story (such as it is) doesn't properly take shape until the fourth episode, and we don't meet the last major companion travelling with Frieren until the fifth. So although I'd still consider it excellent, it is very much a slow-burn, contemplative kind of show. With that said, that makes the smaller pieces of action even more memorable, but they are still sparse. Somewhat famously, in episode seven or so, we find out that although the show doesn't have an actual big bad (the major plot, after all, is that she already helped save the world), there are still a few demons out and about that didn't get defeated along with their leader. These demons are worse than irredeemable (in fact they pretend to have emotions and feelings to disguise their true nature as pure predators of humanity), which at least in anime terms is a bit of a trope reversal.

None of that is to say that a certain minimum is required for most shows, but you know how it is: "will I like this" is a tricky question to answer, anime or no. One of the only anime where I'd consider a minimum truly mandatory is My Star (Oshi no Ko), where the first episode is a full hour or so on purpose: you need the full runtime for the show to make sense, because a major spoiler that changes the course of the show entirely occurs at the end of it (also a fun show, about the dark side of the entertainment/movie industry; saying more is a spoiler). I think Madoka Magica is classically the other, where episode 3 or so has a major twist, but I haven't seen it myself so I couldn't say.

For the demand thing, it's not like the first time I tell her to do something causes a meltdown. If it were that clear-cut, it would probably be easier to figure out. I can tell her to put on her shoes 10 days in a row, and on the 11th day she panics, keeps taking her socks off and putting them back on, runs away, something weird.

And it can be asking her to do something she wants to do. There are lots of times where I plan something nice for her, something she's familiar with and knows she likes, and then when the time comes to do it she starts to act scared without being able to articulate why. "Something bad is going to happen." No, why would you think that!

Now that I have PDA (pathological demand avoidance) in mind, it has been helping me understand some things. In Bluey, there is an episode with a "magic stuffed animal" who makes the Dad do whatever the kids say, kind of like Simon Says. My 6-year-old and my 7-year-old started playing that game together. My 7-year-old was really into it for a few minutes, and then suddenly reacted violently to the stuffed animal. Before, it would have been exhibit #100 of what a weird child she is. Now I'm like, "Maybe A shouldn't play that game."

Said no competent engineer outside software engineering.

Oh man. In contrast, I'm constantly juggling work from multiple clients and find myself exhausted when the weekend rolls around, yet I still get the sense that I'm not doing enough/working fast enough/taking on as many new jobs as I should. I'm a tax accountant, and most of what I do is annoyingly detail-oriented work where even the smallest slip-up can attract the attention of the tax office and negatively impact a client (even when the problem was caused by the tax office themselves in the first place, yes they fucking suck and I could write a whole essay about how shit they are). The regulatory landscape also constantly changes. The staff are assigned production targets to meet, and whether one can meet them or not hugely impacts evaluations of their performance. Towards the end of the week I find my ability to concentrate goes to shit; one can only maintain proper executive functioning for so long, and I wasn't extremely good at that in the first place.

The kind of people this job attracts are of a certain breed. My manager recently had to rush over to China because her grandmother was dying of cancer, and even when she was on leave there she was still responding to work emails every now and then. I don't think I'm cut out for this level of grind in a job, and as a result constantly feel like I'm going to get fired. I spend the weekend not working on hobbies or doing anything I actually like but just recovering, or doing some extra work that I don't record on my timesheet in order to make my efficiency look better (then struggling through the following work week while cursing my life). My hobbies have fallen by the wayside, I don't read nearly as much, and my engagement on TheMotte has nosedived as a result. I wish my job was more chill.

That would require me to examine my preferences in depth, and I'm not sure I'll produce an acceptable reason to anime fans. Basically it's never been a genre I've sought out. I appreciate manga for the artwork, and it drives me nuts seeing people now scroll so quickly through the panels. Short answer is probably I'm old.