
Small-Scale Question Sunday for March 19, 2023

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


How do you tell whether a CPU is better or worse (other than whether it's more or less expensive)?

With graphics cards, I know that there are more or fewer CUDA cores or their AMD equivalent, and that there's a certain amount of video RAM. More is better.

But with CPUs, what is there? More cores = more expensive, but most applications only seem to use 1 core, so what's the point? All CPUs are roughly 3.5 GHz, maybe going a bit higher on the most expensive models or if you overclock them. I heard that some CPUs manage to get more done in their hertz, like the difference between a Lamborghini driving a hundred km but only carrying two people, versus a truck carrying a dozen people somewhat more slowly. Some CPUs have efficiency cores for background processes or OS, whatever that means. I get that higher numbers mean they're better, but how are they better?

Then there's the instructions per clock and the cache and yada yada.

Just use benchmarks for what you plan on doing if you don't plan on getting a computer engineering degree anytime soon.

But if you want a dumb heuristic, just multiply IPC, core count, and clock frequency.
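
Something like this, as a rough sketch (the IPC figures here are made up for illustration; real numbers depend heavily on the workload):

```
# Dumb heuristic: score = IPC * core count * clock frequency.
# The IPC values below are illustrative placeholders, not spec-sheet numbers.

def rough_score(ipc, n_cores, clock_ghz):
    """Crude multi-threaded throughput estimate; ignores cache, memory, thermals."""
    return ipc * n_cores * clock_ghz

old_cpu = rough_score(ipc=1.0, n_cores=4, clock_ghz=3.5)  # hypothetical older chip
new_cpu = rough_score(ipc=1.3, n_cores=8, clock_ghz=4.5)  # hypothetical newer chip

print(f"old: {old_cpu:.1f}  new: {new_cpu:.1f}  ratio: {new_cpu / old_cpu:.2f}x")
```

Drop the core-count term if what you care about is single-threaded speed, which is what most games still lean on.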

most applications only seem to use 1 core

That depends on which applications you use.

Buy a CPU with more cores?

The majority of programs (games included) are lightly threaded, which means they only need a few cores to run. Games released as late as 2013 could easily run on just one core. For a long time, having more than a few (2–4) cores brought no benefits to the overwhelming majority of games.

But the times are changing, and more and more modern games are beginning to take advantage of the extra cores and threads of modern CPUs. As of early 2020, the best CPUs for modern gaming are 8-core CPUs! In the future, if the core count for "best gaming CPU" changes, it will go up, not down.

There is still a lot of truth to the old wisdom though; in particular, single-core and single-threaded performance are still the best determinants of a CPU’s performance in almost every game.

Non-gaming programs that may benefit from higher core counts include file compression, video encoding, 3D rendering, and server applications. If you are going to be using your computer for any of those sorts of tasks, then you may benefit from a more expensive CPU. Otherwise stick with a mid-range CPU, unless money is not a concern.
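
If you want to see the difference between work that can be split across cores and work that can't, here's a toy Python sketch (the workload is artificial and the timings will vary by machine):

```
# Toy demo: the same fixed amount of CPU-bound work, run on one core
# and then spread across all cores. Only splittable work benefits from core count.
import time
from multiprocessing import Pool

def burn(n):
    """Deliberately waste CPU time."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    chunks = [5_000_000] * 8

    start = time.perf_counter()
    for c in chunks:              # one core does everything, one chunk at a time
        burn(c)
    single = time.perf_counter() - start

    start = time.perf_counter()
    with Pool() as pool:          # chunks distributed across all available cores
        pool.map(burn, chunks)
    multi = time.perf_counter() - start

    print(f"one core: {single:.2f}s   all cores: {multi:.2f}s")
```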


I heard that some CPUs manage to get more done in their hertz

The technical term is "instructions per cycle" (IPC). Newer generations of CPUs generally have higher IPC.

All CPUs are roughly 3.5 GHz

That hasn't been true for a long time now, at least as far as desktops are concerned. That whole "base clock/boost clock" thing is just marketing; if the CPU's "boost" clock runs all cores at 4.5 GHz, then provided you keep it cool enough, it will run at 4.5 GHz with the same lifespan you'd expect from a CPU that doesn't boost (and if you fail to keep it cool, it will drop below its base clock to protect itself). It's not "overclocking" if it's listed on the nameplate (and overclocking is all but dead these days for that reason).

It's slightly misleading in that those "maximum boost" clocks describe how fast a single core can run; you have to dig for the all-core "boost" clock to find out how fast the CPU will be running most of the time. AMD's fastest part can do 5.1 GHz on all cores, with a single core on the die able to reach 5.7 GHz; Intel will have a part that can run a single core at 6 GHz (no word on all-core, but it'll probably be 5.2 or so, up from their current 4.8 GHz).

The "base clock" number still does have a use, though, but that's more for predicting the performance of pre-built machines whose builders failed to give them adequate cooling- so even with a stock cooling solution that can only dissipate 120W that's the lowest constant speed you can expect (they'll go to maximum boost until they throttle back to avoid overheating). Both Intel and AMD's CPUs pull twice their rated limits at maximum.

How do you tell whether a CPU is better or worse (other than whether it's more or less expensive)?

Generally speaking, its generation and its maximum all-core speed account for the vast majority of the difference you care about. There are a few complicating factors (especially in Intel's case), but as a general rule of thumb, a CPU from the same market segment but one generation older, at the same clock speed, runs about 10% slower than a CPU from the current generation. Some generations have much larger leaps (Intel 11th to 12th, AMD Piledriver to Zen), and there are application-specific things that can complicate this (like AMD's X3D models and gaming), but on average that's what you can expect.
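
As a back-of-the-envelope way to apply that rule of thumb (the ~10% per generation figure is an average, not a guarantee, and assumes the same market segment):

```
# Estimate relative performance from the "~10% per generation" rule of thumb.
# Both the per-generation gain and the linear clock scaling are rough averages.

def relative_perf(generations_newer, clock_ratio=1.0, per_gen_gain=0.10):
    """How much faster the newer chip is, very approximately."""
    return (1 + per_gen_gain) ** generations_newer * clock_ratio

# e.g. three generations newer, running at 5.0 GHz vs. the old chip's 4.5 GHz
print(f"{relative_perf(3, clock_ratio=5.0 / 4.5):.2f}x")  # about 1.48x
```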

Usually I just read the benchmarks (Anandtech, Tom's Hardware, Phoronix, and LTT) for real-world-ish tasks, like "how long does it take to compile Chrome", and draw my conclusions on how much faster the newer generation is based on that. But you're not going to notice the difference between a CPU from 2008 and a CPU from 2018 until you ask them to do something interesting (provided both machines are using the same SSD); as soon as you do, the difference becomes apparent very quickly.

Some CPUs have efficiency cores for background processes or OS, whatever that means.

This is "we turn off the main engine at the stop light, but run a small auxiliary engine so the A/C doesn't turn off at the same time", but for computers, with the side effect that said A/C no longer bleeds power from the main engine. Those cores are designed to handle background tasks so the foreground cores spend less time switching away to deal with them. It doesn't make the machine appreciably faster, though.

I don't know of a good way to work it out a priori. I guess chip designers must have some way to be reasonably sure, before they make the chips, that they'll be faster than their predecessors.

But generally you'll want to use benchmarks, like https://www.cpubenchmark.net/. Benchmarking software runs a computationally expensive test, which is ideally somewhat representative of real-world workloads, to see how a given CPU performs. This is complicated by systems with the same CPU having different other components, but I assume they adjust for that somehow.
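
For a sense of what those benchmark suites are doing under the hood, here's a toy single-core version (nothing like as careful as the real thing, but it's the same basic idea: time a fixed workload and compare the number across machines):

```
# Toy single-core "benchmark": time a fixed chunk of work and report the best run.
# Real suites control for thermals, background load, memory, and much more.
import time

def workload():
    """A fixed amount of integer work."""
    total = 0
    for i in range(10_000_000):
        total += i % 7
    return total

times = []
for _ in range(3):                # repeat a few times, keep the fastest run
    start = time.perf_counter()
    workload()
    times.append(time.perf_counter() - start)

print(f"best run: {min(times):.2f}s (lower is faster)")
```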