Small-Scale Question Sunday for March 19, 2023

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.

All CPUs are roughly 3.5 GHz

That hasn't been true for a long time now, at least as far as desktops are concerned. The whole "base clock/boost clock" distinction is mostly marketing: if the CPU's "boost" clock runs all cores at 4.5 GHz, then as long as you keep it cool enough it will run at 4.5 GHz with the same lifespan you'd expect from a CPU that doesn't boost at all (and if you fail to keep it cool, it will drop below its base clock to protect itself). It's not "overclocking" if it's listed on the nameplate, and overclocking is all but dead these days for exactly that reason.
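If you want to sanity-check this on your own machine, here's a minimal sketch using the third-party psutil package (my choice of tool, not anything the OS requires; any utility that reads the frequency counters works) to compare what the CPU is currently running at against its rated range:

```python
# Minimal sketch: compare the CPU's current clock against its rated range.
# Assumes the third-party "psutil" package is installed (pip install psutil).
import psutil

freq = psutil.cpu_freq()  # current/min/max in MHz, where the platform exposes it
if freq is None:
    print("Frequency info not exposed on this platform.")
else:
    print(f"current: {freq.current:.0f} MHz")
    print(f"rated range: {freq.min:.0f} - {freq.max:.0f} MHz")
```

Run it while something is keeping the CPU busy and "current" will usually sit well above the base clock.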

The slightly misleading part is that the advertised "maximum boost" clock describes how fast a single core can run; you have to dig for the all-core boost clock to find how fast the CPU will actually be running most of the time. AMD's fastest part can do 5.1 GHz on all cores, with a single core on the die able to reach 5.7 GHz; Intel will have a part that can run a single core at 6 GHz (no word on all-core, but it will probably be around 5.2, up from their current 4.8).
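A rough way to see the single-core boost in action is to keep one thread busy and snapshot per-core clocks; a sketch, assuming psutil is installed and a platform that reports per-core frequencies (typically Linux):

```python
# Sketch: run one busy thread, then snapshot per-core clocks so the single
# boosted core stands out from the mostly idle ones.
import threading
import time

import psutil

stop = False

def spin():
    # Keep one core fully busy so it gets the single-core boost clock.
    while not stop:
        pass

t = threading.Thread(target=spin, daemon=True)
t.start()
time.sleep(2)  # give the frequency governor a moment to ramp up

for i, f in enumerate(psutil.cpu_freq(percpu=True)):
    print(f"core {i}: {f.current:.0f} MHz")

stop = True
t.join()
```

Which core the busy thread lands on is up to the scheduler, but one of them should read noticeably higher than the rest.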

The "base clock" number still does have a use, though, but that's more for predicting the performance of pre-built machines whose builders failed to give them adequate cooling- so even with a stock cooling solution that can only dissipate 120W that's the lowest constant speed you can expect (they'll go to maximum boost until they throttle back to avoid overheating). Both Intel and AMD's CPUs pull twice their rated limits at maximum.

How do you tell whether a CPU is better or worse (other than whether it's more or less expensive)?

Generally speaking, its generation and its maximum all-core speed account for the vast majority of the difference you care about. There are a few complicating factors (especially in Intel's case), but as a rule of thumb, a CPU from the same market segment but one generation older, at the same clock speed, runs about 10% slower than the current generation. Some generational leaps are much larger (Intel 11th to 12th, AMD Piledriver to Zen), and there are application-specific wrinkles (like AMD's X3D models and gaming), but on average that's what you can expect.
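As a toy illustration of that rule of thumb (the function and the example numbers are mine, not a real benchmark), the back-of-the-envelope math looks like this:

```python
# Back-of-the-envelope sketch of the "~10% per generation at the same clock"
# rule of thumb above. Purely illustrative; real gains vary by generation
# and workload.
def estimated_speedup(old_clock_ghz, new_clock_ghz, generations_newer, per_gen_gain=0.10):
    clock_ratio = new_clock_ghz / old_clock_ghz
    ipc_ratio = (1 + per_gen_gain) ** generations_newer
    return clock_ratio * ipc_ratio

# e.g. a 5.1 GHz part two generations newer than a 4.7 GHz part:
print(f"~{estimated_speedup(4.7, 5.1, 2):.2f}x")  # roughly 1.31x
```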

Usually I just read the benchmarks (AnandTech, Tom's Hardware, Phoronix, and LTT) on real-world-ish tasks, like "how long does it take to compile Chrome", and draw my conclusions about how much faster the newer generation is from that. But you won't notice the difference between a CPU from 2008 and a CPU from 2018 until you ask them to do something interesting (provided both machines are using the same SSD); as soon as you do, the difference becomes apparent very quickly.
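If you just want a crude, repeatable number for your own machines rather than published reviews, timing any fixed CPU-bound task works; here's a throwaway sketch (the workload is arbitrary, not a standard benchmark):

```python
# Throwaway sketch: time a fixed CPU-bound task so two machines can be
# compared apples to apples. The loop below is arbitrary; anything
# deterministic and CPU-bound (like compiling the same project) serves.
import time

def workload():
    total = 0
    for i in range(20_000_000):
        total += i * i
    return total

start = time.perf_counter()
workload()
print(f"{time.perf_counter() - start:.2f} s")
```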

Some CPUs have efficiency cores for background processes or OS, whatever that means.

This is "we turn off the main engine at the stop light, but run a small auxiliary engine so the A/C doesn't turn off at the same time", but for computers, with the side effect that said A/C no longer bleeds power from the main engine. Those cores are designed to handle background tasks so the foreground cores spend less time switching away to deal with them. It doesn't make the machine appreciably faster, though.