Are computers getting faster? ... comment

Dale Mahalko

New Member
Jul 25, 2019
28
4
3
(This is sorta off-topic here as it involves Pentium and multi-core (gasp), but is a comment on a past 8-Bit Guy video.)

Regarding this video from 2016:

Generally, what has had the most impact on making computers last longer is the thermal limit that Intel hit back around 2005 with the Pentium 4, and the fact that they were forced to step back from ever-faster single-core CPUs and change their business plan over to parallel processing on multiple cores.

For all the hype that multi-core processors have in the modern world, the fact remains that most general purpose software is still really only single-threaded.

It is just plain hard work to design general-purpose programs that can spread their work across multiple independent CPU cores. There are all sorts of problems to deal with, such as one core needing data from another core and being forced to stop and wait for that core to finish what it is doing before the first can continue working.

These inter-core delays and waiting (synchronization overhead; the related timing bugs, where the result depends on which core gets there first, are known as "race conditions") can drain all the advantage out of trying to split work across multiple cores, and many generic programs just don't bother trying.
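To make the waiting problem concrete, here is a little sketch I put together (my own illustration, not from the video; exact timings will vary by machine): four workers updating one shared counter through a lock, versus four workers counting independently. The contended version typically runs far slower despite using the same number of cores.

```python
import time
from multiprocessing import Process, Value, Lock

ITERS = 200_000

def contended(shared, lock):
    # Every increment takes a lock shared by all workers,
    # so the cores spend most of their time waiting on each other.
    for _ in range(ITERS):
        with lock:
            shared.value += 1

def independent(result):
    # Each worker keeps a private total and reports it once;
    # no core ever waits on another.
    total = 0
    for _ in range(ITERS):
        total += 1
    result.value = total

def timed(procs):
    start = time.perf_counter()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    shared, lock = Value("i", 0), Lock()
    t1 = timed([Process(target=contended, args=(shared, lock))
                for _ in range(4)])
    results = [Value("i", 0) for _ in range(4)]
    t2 = timed([Process(target=independent, args=(r,))
                for r in results])
    print(f"contended:   {t1:.2f}s  total={shared.value}")
    print(f"independent: {t2:.2f}s  total={sum(r.value for r in results)}")
```

Both versions do exactly the same amount of counting; all the extra time in the contended run is cores waiting on each other.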

The main place where multiple cores help is with modern multitasking systems where a bunch of programs are running at the same time. Each program may itself still only effectively use one CPU core, but at least the operating system kernel can spread the different single-core programs across the cores, so their combined single-threaded load keeps every individual core busy.
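You can see this effect for yourself with a rough sketch, assuming a machine with 4 or more cores: run the same single-threaded busy loop four times in a row, then as four separate processes. No individual program is multi-threaded, but the kernel spreads the processes out, so the second run finishes in roughly a quarter of the time.

```python
import os, time
from multiprocessing import Process

def busy():
    # A purely single-threaded workload.
    x = 0
    for _ in range(10_000_000):
        x += 1

if __name__ == "__main__":
    print(f"cores available: {os.cpu_count()}")

    start = time.perf_counter()
    for _ in range(4):
        busy()                       # back to back, one core at a time
    serial = time.perf_counter() - start

    procs = [Process(target=busy) for _ in range(4)]
    start = time.perf_counter()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    parallel = time.perf_counter() - start

    print(f"four runs, one after another: {serial:.2f}s")
    print(f"four runs, one per process:   {parallel:.2f}s")
```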

Now yes there are programs that do benefit from multiple cores, but these tend to be specific use cases where absolute performance is a must, such as with 3D games or heavy number crunching like Photoshop image filters.

So even with all the hype from Intel about multi-core systems, underneath it all most software still only really uses one core at a time, and this forces the computer to a base level of performance that for everyday use will rarely exceed the maximum performance of one or two cores.

Intel has renamed their product lines over the years, so essentially a Core 2 Duo is the same thing as a Core i3 with two cores, and a Pentium M (mobile edition) is effectively a Core 2 Single (Uno?).

In this regard, Intel's product line has been effectively stagnant for 10-15 years, since about 2005, with a decade-old Core 2 Duo at 3.0 GHz having pretty much the same single-core performance as a modern Core i5 at 3.0 GHz.

The real performance differences between past and current systems have had more to do with the motherboard architecture: the change from DDR2 to DDR3 to DDR4, and from PCI-Express 1.0 to 2.0 to 3.0 to 4.0. This has resulted in significant performance increases just by reducing the amount of time the CPU has to wait for data to be passed over slow communication buses.
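To put rough numbers on the bus side of that claim, here is a quick calculation using the published per-lane transfer rates and encodings (theoretical ceilings per direction; real-world throughput is always lower):

```python
# Theoretical per-direction PCIe bandwidth by generation.
GENS = {
    "PCIe 1.0": (2.5, 8 / 10),     # GT/s per lane, 8b/10b encoding
    "PCIe 2.0": (5.0, 8 / 10),
    "PCIe 3.0": (8.0, 128 / 130),  # 128b/130b encoding
    "PCIe 4.0": (16.0, 128 / 130),
}

for gen, (gt_s, payload) in GENS.items():
    gb_per_lane = gt_s * payload / 8    # GB/s per lane
    print(f"{gen}: {gb_per_lane:.2f} GB/s per lane, "
          f"x16 slot = {gb_per_lane * 16:.1f} GB/s")
```

Each generation roughly doubles the previous one, from 0.25 GB/s per lane on PCIe 1.0 to about 2 GB/s per lane on PCIe 4.0.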



Also, the rise of the solid-state drive and NVMe allows extremely fast SSDs to bypass the slow SATA controllers and mechanical hard drives of older motherboards.
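The same back-of-the-envelope arithmetic shows why bypassing SATA matters so much. These are interface ceilings only, and actual drives land below them, but the gap is stark:

```python
# Interface ceilings: SATA III versus an NVMe drive on four PCIe Gen3 lanes.
sata3 = 6.0 * (8 / 10) / 8            # 6 Gb/s link, 8b/10b encoding -> GB/s
nvme_x4 = 4 * 8.0 * (128 / 130) / 8   # four Gen3 lanes -> GB/s

print(f"SATA III ceiling:     {sata3:.2f} GB/s")
print(f"NVMe x4 Gen3 ceiling: {nvme_x4:.2f} GB/s (~{nvme_x4 / sata3:.1f}x)")
```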

By performing some advanced system trickery with the Clover Hackintosh bootloader, it is possible to boot Windows 10 64-bit on a 10+ year old BIOS-only motherboard: Clover provides a UEFI environment in RAM, booted from USB, and an NVMe SSD plugged into a 16-lane PCIe graphics card slot serves as the boot drive. Set up this way, the old machine achieves nearly identical dual-core performance to a modern Intel system at the same GHz rating that is using a SATA SSD.

Though I will admit that this is some serious gnostic arcanery that is probably really only useful to prove the point, rather than to serve as a day-to-day computer. :D

I will make a video about this eventually to demonstrate it.
 
May 22, 2019
229
108
43
Yes, computers absolutely are getting faster... but every advance in hardware power seems to be offset by increased demands from the operating system and application software.

A modern version of Windows will not even run on a computer made in 2000, but if you went the other way and loaded Windows 2000 on a new PC, it would boot so fast you'd miss it if you blinked.

We all started installing SSDs over the last few years to speed up load times... and I've noticed modern video games taking longer and longer to load. Even with an SSD, the latest games take longer to load than a game from 2-3 years ago on a hard drive. Why? Because programmers figure the hardware will absorb the extra overhead.

And, of course, our ability as human beings to process information hasn't changed. We can't read or type faster than we did 20 years ago, so our ability to work hasn't really changed on a modern PC compared to one from 2000.

What has changed is the amount of data we can store, how we can process large amounts of information, and how quickly we can move that information around. Raw data processing capacity has increased by orders of magnitude over the last few years, and when working on that scale, things are just night and day different.

I don't think the desktop user experience is getting significantly faster, and it may actually be slowing down, but computers are becoming more capable - and that's largely due, as you already stated, to miniaturization and parallelization, not so much raw clock speeds.
 
May 22, 2019
229
108
43
Huh. I didn't know there was actually a Core 2 Solo. That's crazy.


I mostly skipped the Core 2 phase, as I was underemployed at the time, and I went straight from an Athlon X2 to an Athlon 64, then to an i7 once things got better.
 

VioletGiraffe

New Member
Sep 14, 2019
1
0
1
For all the hype that multi-core processors have in the modern world, the fact remains that most general purpose software is still really only single-threaded.
That is almost entirely false. Any software that runs a moderately lengthy task, especially if the task is computational (rather than I/O-bound), is parallelized at least to some extent, with varying efficiency, of course, based on how much the task lends itself to parallelization. I challenge you to open your favorite task/activity manager software and count the number of processes that only have a single thread. That number will be around 2% or less of the total number of processes.
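If anyone wants to run that count without clicking through a task manager, here is one rough way to do it, using the third-party psutil package (pip install psutil); the exact percentage will of course vary by system:

```python
import psutil  # third-party: pip install psutil

single = total = 0
for proc in psutil.process_iter(["num_threads"], ad_value=None):
    # process_iter pre-fetches the requested field into proc.info;
    # ad_value=None marks processes we weren't allowed to inspect.
    n = proc.info["num_threads"]
    if n is None:
        continue
    total += 1
    if n == 1:
        single += 1

print(f"{single} of {total} processes are single-threaded "
      f"({100 * single / total:.0f}%)")
```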
It is indeed hard work writing efficient parallel software, as you've described quite well, and having N threads rarely speeds a program up N times, but to say that most software will run as fast on a single core as it does on, say, 4 cores is simply false.
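That intuition has a name: Amdahl's law. A tiny sketch, assuming a program that is 90% parallelizable (a made-up figure for illustration), shows how quickly the returns diminish without the speedup ever dropping to nothing:

```python
def amdahl_speedup(parallel_fraction, cores):
    # The serial part of the work doesn't shrink no matter
    # how many cores you add; it caps the total speedup.
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for cores in (1, 2, 4, 8, 16):
    print(f"{cores:2d} cores -> {amdahl_speedup(0.90, cores):.2f}x")
# Even at 16 cores the speedup tops out near 6.4x, nowhere near 16x -
# but it is still far faster than a single core.
```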

In this regard, Intel's product line has been effectively stagnant for 10-15 years, since about 2005, with a decade-old Core 2 Duo at 3.0 GHz having pretty much the same single-core performance as a modern Core i5 at 3.0 GHz.
I'm happy to say that is also false. Clock-for-clock performance has increased significantly since Core 2, more than 2x by now, and clock speeds are also about 33% higher (4 GHz versus 3 GHz; not to mention the Extreme Edition CPUs of the Core 2 era were barely reaching the 3.0 GHz mark out of the box, although they could be overclocked past it).
Just take a look at these tests:

(AnandTech had more similar charts with more recent CPU generations, but I couldn't find them right away.)

Yes, computers absolutely are getting faster... but every advance in hardware power seems to be offset by increased demands from the operating system and application software.
And that is a bad thing how? We need faster hardware precisely to enable more complex software, so that more useful work can be done in the same amount of time. I'd rather run a modern game with next-gen graphics at 60 FPS than a game with 2005 graphics at 300 FPS.